Crawling Knowledge Planet with Python

Last year we ran a community activity called "Q&A with Seniors", which accumulated a lot of valuable interactive content before it was discontinued for various reasons. Today, chatting with Tu Teng, I felt it was a waste to let that information sit idle, so I decided to try using Python to crawl the content of t ...
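The excerpt cuts off before any code, but a minimal sketch of this kind of crawler might look like the following. The API endpoint, group ID, and cookie value are assumptions/placeholders; Knowledge Planet requires an authenticated session, so none of these are the post's actual values.

import requests

# placeholders: a real crawl needs your own group id and login cookie
GROUP_ID = "123456"
HEADERS = {
    "User-Agent": "Mozilla/5.0",
    "Cookie": "zsxq_access_token=YOUR_TOKEN",  # hypothetical cookie name
}

# hypothetical endpoint, modeled on the pattern such crawling posts describe
url = f"https://api.zsxq.com/v2/groups/{GROUP_ID}/topics?count=20"
resp = requests.get(url, headers=HEADERS, timeout=10)
resp.raise_for_status()

# print each topic's question/answer text, tolerating missing fields
for topic in resp.json().get("resp_data", {}).get("topics", []):
    question = topic.get("question", {}).get("text", "")
    answer = topic.get("answer", {}).get("text", "")
    print(question, "->", answer)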

Posted by kylera on Sat, 07 Dec 2019 21:05:17 -0800

Building a project with gulp

I. Environment preparation. 1.1 Introduction to gulp: gulp is a front-end build-automation tool based on Node.js file streams. It can be used to set up an automated workflow and reduce repetitive work. Official definition: a build system based on file streams. Core files: gulpfile.js and package.json. gulpfile.js: the task script; package.json: the task con ...

Posted by wee_eric on Sat, 07 Dec 2019 20:45:11 -0800

Coin station log 1 -- a Python 3 crawler for blockchain news

Blockchain has been very popular recently, so I want to build a media site that crawls and analyzes blockchain news. Easier said than done: a media site needs a data source. Where does the data come from? I'll write about that later; first, the crawling... Anyway, ...
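The excerpt doesn't reach the code; a minimal fetch-and-parse sketch of the kind such a crawler starts from is below. The list URL and CSS selector are placeholders, not the post's actual target site.

import requests
from bs4 import BeautifulSoup

# placeholder URL: any blockchain-news site with an article list works
LIST_URL = "https://example.com/blockchain-news"

resp = requests.get(LIST_URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
resp.raise_for_status()

# parse the listing page and print each article's title and link;
# the selector is hypothetical and must match the target site's markup
soup = BeautifulSoup(resp.text, "html.parser")
for link in soup.select("a.article-title"):
    print(link.get_text(strip=True), link.get("href"))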

Posted by sotusotusotu on Sat, 07 Dec 2019 19:02:41 -0800

Bold prediction: docker app will be an alternative to docker compose

Docker 19.03 introduces an experimental feature: app, a docker subcommand alongside image, run, exec, and swarm. Official documentation: https://docs.docker.com/engine/reference/commandline/app/. Docker app arranges docker containers as a bundle called an application. If you want to package a set of docker containers as an applicati ...

Posted by harinath on Sat, 07 Dec 2019 15:06:18 -0800

[Code notes] Keeping JS functions single-responsibility and flexibly composable

For example, in the following code, the order data requested from the server needs to be processed as follows (a Python sketch of the same transforms appears below): 1. Display the corresponding label according to status (0: in progress, 1: completed, 2: order exception). 2. Format startTime from a timestamp as yyyy-MM-dd. 3. If a field value is an empty string, set the field value to '-- ...
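The post's code is in JS; the sketch below renders the same three single-responsibility rules in Python and composes them over one order record. Field names follow the excerpt; the record values and helper names are illustrative.

from datetime import datetime, timezone

STATUS_LABELS = {0: "in progress", 1: "completed", 2: "order exception"}

def label_status(order):
    # rule 1: map the numeric status code to its display label
    order["status"] = STATUS_LABELS.get(order["status"], "--")
    return order

def format_start_time(order):
    # rule 2: render the millisecond timestamp as yyyy-MM-dd
    seconds = order["startTime"] / 1000
    order["startTime"] = datetime.fromtimestamp(seconds, tz=timezone.utc).strftime("%Y-%m-%d")
    return order

def default_empty_fields(order):
    # rule 3: replace empty-string field values with '--'
    return {k: ("--" if v == "" else v) for k, v in order.items()}

def process(order, *steps):
    # each rule stays single-responsibility; composition stays flexible
    for step in steps:
        order = step(order)
    return order

print(process({"status": 0, "startTime": 1575700000000, "remark": ""},
              label_status, format_start_time, default_empty_fields))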

Posted by iii on Sat, 07 Dec 2019 14:32:37 -0800

A Record of the Go-Live Process for a Network Deployment

Take recommending similar items as the example: reco-similar-product. Project directory: Dockerfile, Jenkinsfile, README.md, config, deploy.sh, deployment.yaml, index, offline, recsys, requirements.txt, service, stat. As you can see, there are many files in the directory; here is a brief description of the rol ...

Posted by tomasd on Sat, 07 Dec 2019 13:38:56 -0800

Automatic generation of golang struct from mysql table structure

A golang library that generates golang structs from a MySQL table schema. GitHub address: https://github.com/gohouse/converter. Install: download the executable directly (see the download address), or fetch the golang source package: go get github.com/gohouse/converter. Sample table structure: CREATE TABLE `prefix ...
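As a toy illustration of the idea (the real gohouse/converter reads the schema from a live database and handles far more types), the sketch below maps a few MySQL column types onto Go field types from a DDL string; the type table, regex, and sample DDL are all simplifications.

import re

# simplified MySQL-to-Go type mapping; the real tool covers many more types
GO_TYPES = {"int": "int64", "varchar": "string", "datetime": "time.Time"}

ddl = """CREATE TABLE `prefix_user` (
  `id` int(11) NOT NULL,
  `name` varchar(64) NOT NULL,
  `created_at` datetime NOT NULL
)"""

print("type PrefixUser struct {")
# each column line looks like `name` type(...), so capture name and base type
for col, sql_type in re.findall(r"`(\w+)`\s+(\w+)", ddl):
    go_name = "".join(part.title() for part in col.split("_"))
    print(f"    {go_name} {GO_TYPES.get(sql_type, 'string')}")
print("}")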

Posted by scast on Sat, 07 Dec 2019 13:30:25 -0800

[Xuefeng Magnetic Needle Stone blog] Python 3.7 quick start tutorial 7: the Internet

Contents of this tutorial: 7 Internet: Internet access with urllib. urllib is a Python module for opening URLs.

import urllib.request
# open a connection to a URL using urllib
webUrl = urllib.request.urlopen('https://china-testing.github.io/address.html')
# get the result code and print it
print("result code: " + str(webUrl.getcode()))
# read the d ...
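The excerpt is cut off at the read step; a self-contained completion of the same snippet might look like this (the final read-and-decode lines are a plausible continuation, not the original post's code):

import urllib.request

# open a connection to a URL using urllib
webUrl = urllib.request.urlopen('https://china-testing.github.io/address.html')
# get the HTTP status code and print it
print("result code: " + str(webUrl.getcode()))
# read the response body (bytes) and decode it to text
data = webUrl.read()
print(data.decode('utf-8'))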

Posted by esport on Sat, 07 Dec 2019 09:41:08 -0800

Django development - configuring and using mongodb

Today I sorted out how to use mongodb in a django project. The environment is as follows: ubuntu18.04, django2.0.5, drf3.9, mongoengine0.16. Step 1: configure mongodb and mysql in settings.py as follows (you can use mysql and mongodb side by side):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',  # database engine
        'NA ...
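The excerpt is truncated inside the MySQL block; the mongoengine half of such a setup typically looks like the sketch below. The database name, host, and model are placeholders, a generic mongoengine pattern rather than the post's exact configuration.

# settings.py (sketch): connect mongoengine alongside the relational DATABASES entry
import mongoengine

mongoengine.connect(
    db='reco_db',        # placeholder database name
    host='127.0.0.1',    # placeholder host
    port=27017,
)

# elsewhere, a mongoengine document model lives outside Django's ORM
class Article(mongoengine.Document):
    title = mongoengine.StringField(max_length=200, required=True)
    body = mongoengine.StringField()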

Posted by nimbus on Sat, 07 Dec 2019 05:53:20 -0800

Scrapy Crawler and Case Analysis

With the rapid development of the Internet, information accumulates in massive quantities. We need to obtain large amounts of data from the outside world and filter out what is useless, crawling specifically for the data that is useful to us; this is why crawling techniques exist, allowing us to quickly get the data ...
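For readers new to Scrapy, a minimal spider of the kind such case analyses start from is sketched below; the spider name, start URL, and CSS selectors are placeholders to adapt to the target site.

import scrapy

class NewsSpider(scrapy.Spider):
    name = "news"
    # placeholder listing page; replace with the site you are analyzing
    start_urls = ["https://example.com/news"]

    def parse(self, response):
        # yield one item per article block; selectors are hypothetical
        for article in response.css("div.article"):
            yield {
                "title": article.css("h2::text").get(),
                "url": article.css("a::attr(href)").get(),
            }

Save it as news_spider.py and run it with: scrapy runspider news_spider.py -o news.json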

Posted by klance on Sat, 07 Dec 2019 04:35:57 -0800