How do you use Selenium with Scrapy to crawl Douban Read? This article walks through the analysis and the solution step by step, in the hope of giving readers who face the same problem a simpler, more workable approach.
First, create the Scrapy project.
Command: scrapy startproject douban_read
Then create the spider.
Command: scrapy genspider douban_spider url
Target URL: https://read.douban.com/charts
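Here url stands for the domain to be crawled; a concrete invocation for this project (domain assumed from the target URL above, not stated in the original) might be:

scrapy genspider douban_spider read.douban.com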
The key points are explained in the comments inside the code; if anything is lacking, corrections are welcome.
The Scrapy project directory structure is as follows.
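Assuming the default output of scrapy startproject plus the generated spider, the layout looks roughly like this:

douban_read/
    scrapy.cfg
    douban_read/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            douban_spider.py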
Code of the douban_spider.py file (the spider file):
import scrapy
import re, json

from ..items import DoubanReadItem


class DoubanSpiderSpider(scrapy.Spider):
    name = 'douban_spider'
    # allowed_domains = ['www']
    start_urls = ['https://read.douban.com/charts']

    def parse(self, response):
        # print(response.text)
        # get the URLs of the book category rankings
        type_urls = response.xpath('//div[@class="rankings-nav"]/a[position()>1]/@href').extract()
        # print(type_urls)
        for type_url in type_urls:
            # /charts?type=unfinished_column&index=featured&dcs=charts&dcm=charts-nav
            part_param = re.search(r'charts\?(.*?)&dcs', type_url).group(1)
            # https://read.douban.com/j/index//charts?type=intermediate_finalized&index=science_fiction&verbose=1
            ajax_url = 'https://read.douban.com/j/index//charts?{}&verbose=1'.format(part_param)
            yield scrapy.Request(ajax_url, callback=self.parse_ajax, encoding='utf-8', meta={'request_type': 'ajax'})

    def parse_ajax(self, response):
        # print(response.text)
        # load the JSON data for the books in this category
        json_data = json.loads(response.text)
        for data in json_data['list']:
            item = DoubanReadItem()
            item['book_id'] = data['works']['id']
            item['book_url'] = data['works']['url']
            item['book_title'] = data['works']['title']
            item['book_author'] = data['works']['author']
            item['book_cover_image'] = data['works']['cover']
            item['book_abstract'] = data['works']['abstract']
            item['book_wordCount'] = data['works']['wordCount']
            item['book_kinds'] = data['works']['kinds']
            # yield the item to the item pipeline
            yield item
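The spider imports DoubanReadItem from items.py, which is not shown in this section. A minimal sketch that matches the fields filled in above (field names taken directly from the spider code) would be:

# items.py -- minimal sketch matching the fields used by the spider
import scrapy


class DoubanReadItem(scrapy.Item):
    book_id = scrapy.Field()
    book_url = scrapy.Field()
    book_title = scrapy.Field()
    book_author = scrapy.Field()
    book_cover_image = scrapy.Field()
    book_abstract = scrapy.Field()
    book_wordCount = scrapy.Field()
    book_kinds = scrapy.Field()

The Selenium part referred to in the title is not shown in this excerpt either. It is typically wired in as a downloader middleware, and the meta={'request_type': 'ajax'} flag set in parse() hints that such a middleware would skip browser rendering for the JSON requests. The following is only a hypothetical sketch of that idea (the class name and options are assumptions, not taken from the original project):

# middlewares.py -- hypothetical Selenium downloader middleware sketch;
# the article's actual implementation may differ
from scrapy.http import HtmlResponse
from selenium import webdriver
from selenium.webdriver.chrome.options import Options


class SeleniumMiddleware:
    def __init__(self):
        options = Options()
        options.add_argument('--headless')  # run Chrome without opening a window
        self.driver = webdriver.Chrome(options=options)

    def process_request(self, request, spider):
        # the JSON/ajax requests do not need a browser; let Scrapy fetch them normally
        if request.meta.get('request_type') == 'ajax':
            return None
        # render other pages with Selenium and hand the resulting HTML back to Scrapy
        self.driver.get(request.url)
        return HtmlResponse(url=request.url, body=self.driver.page_source,
                            encoding='utf-8', request=request)

To take effect, a middleware like this would also need to be enabled in settings.py under DOWNLOADER_MIDDLEWARES, and the WebDriver should be closed with driver.quit() when the spider finishes.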