A rough implementation with Scrapy
Only with later study does it become clear that this example is rather unstructured and superficial, but it works well as a first exercise in the approach. In particular, it gave me a general picture of how the Scrapy framework fits together.
The URLs used in this example are as follows:
url = "http://quote.eastmoney.com/stocklist.html"
url = "https://gupiao.baidu.com/stock/"
Steps:
- Step 1: build the project and Spider template
- Step 2: write Spider
- Step 3: write Item Pipelines
Step 1
This step is the brainless part: scaffold the project and generate a spider template.
scrapy startproject BaiduStocks
cd BaiduStocks
scrapy genspider stocks baidu.com
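If the commands succeed, the generated project should look roughly like this (the typical Scrapy layout; the exact files can vary slightly between versions):

BaiduStocks/
    scrapy.cfg
    BaiduStocks/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            stocks.py    # created by scrapy genspider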
Step 2
- Configure the stocks.py file
- Modify how the returned pages are parsed
- Modify how new URLs are generated and crawled
# -*- coding: utf-8 -*-
import re

import scrapy


class StocksSpider(scrapy.Spider):
    name = "stocks"
    start_urls = ['http://quote.eastmoney.com/stocklist.html']

    def parse(self, response):
        # Collect every link on the listing page and keep only the ones
        # that contain a stock code such as sh600000 or sz000001.
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r"[s][hz]\d{6}", href)[0]
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except IndexError:
                continue

    def parse_stock(self, response):
        # Extract the key/value pairs shown on the individual stock page.
        infoDict = {}
        stockInfo = response.css('.stock-bets')
        name = stockInfo.css('.bets-name').extract()[0]
        keyList = stockInfo.css('dt').extract()
        valueList = stockInfo.css('dd').extract()
        for i in range(len(keyList)):
            key = re.findall(r'>.*</dt>', keyList[i])[0][1:-5]
            try:
                val = re.findall(r'\d+\.?.*</dd>', valueList[i])[0][0:-5]
            except IndexError:
                val = '--'
            infoDict[key] = val
        infoDict.update(
            {'Stock name': re.findall(r'\s.*\(', name)[0].split()[0] +
                           re.findall(r'\>.*\<', name)[0][1:-1]})
        yield infoDict
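As a quick sanity check of the stock-code regex used in parse, here is a standalone snippet; the sample hrefs are invented for illustration only:

import re

sample_hrefs = [
    'http://quote.eastmoney.com/sh600000.html',
    'http://quote.eastmoney.com/sz000001.html',
    'http://quote.eastmoney.com/center/list.html',  # no stock code, skipped
]
for href in sample_hrefs:
    match = re.findall(r'[s][hz]\d{6}', href)
    if match:
        print(match[0])  # prints sh600000 and sz000001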
Step 3: write Pipelines
- Configure the pipelines.py file
- Define the processing classes for the crawled items
# pipelines.py
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BaidustocksPipeline(object):
    def process_item(self, item, spider):
        return item


class BaidustocksInfoPipeline(object):
    def open_spider(self, spider):
        # Called when the spider starts: open the output file.
        self.f = open('BaiduStockInfo.txt', 'w')

    def close_spider(self, spider):
        # Called when the spider finishes: close the output file.
        self.f.close()

    def process_item(self, item, spider):
        # Write each item as one line of plain text.
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except Exception:
            pass
        return item
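If more structured output is wanted than str(dict(item)), a JSON-lines variant of the pipeline could look like the sketch below. This class is hypothetical and not part of the original project, and it would still need to be registered in ITEM_PIPELINES:

# A hypothetical alternative: write one JSON object per line.
import json


class BaidustocksJsonPipeline(object):
    def open_spider(self, spider):
        self.f = open('BaiduStockInfo.jsonl', 'w')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item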
# settings.py
ITEM_PIPELINES = {
    # 300 is this pipeline's order; lower numbers run earlier (range 0-1000)
    'BaiduStocks.pipelines.BaidustocksInfoPipeline': 300,
}
Finally, start the crawler:
scrapy crawl stocks
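For a quick look at the results without touching the pipeline at all, Scrapy's built-in feed export can also dump the items straight from the command line:

scrapy crawl stocks -o BaiduStockInfo.json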
Configure concurrency options
In the settings.py file:
- CONCURRENT_REQUESTS: the maximum number of concurrent requests the Downloader performs, 16 by default
- CONCURRENT_ITEMS: the maximum number of items processed concurrently in the Item Pipeline per response, 100 by default
- CONCURRENT_REQUESTS_PER_DOMAIN: the maximum number of concurrent requests per target domain, 8 by default
- CONCURRENT_REQUESTS_PER_IP: the maximum number of concurrent requests per target IP, 0 by default; it only takes effect when set to a non-zero value
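Put together, tuning these options in settings.py might look like the following sketch (the values are purely illustrative, not recommendations):

# settings.py (illustrative values only)
CONCURRENT_REQUESTS = 32              # overall Downloader concurrency
CONCURRENT_ITEMS = 100                # items processed in parallel per response
CONCURRENT_REQUESTS_PER_DOMAIN = 8    # cap on requests per domain
CONCURRENT_REQUESTS_PER_IP = 0        # per-IP cap; 0 means the per-domain cap applies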