

How to write the source code of a Python crawler that scrapes Doutula memes

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

In this issue, the editor shows you how to write the source code of a Python crawler that scrapes memes from Doutula (doutula.com). The article is rich in content and works through the topic from a practical point of view. I hope you get something out of it after reading.

Are you still hunting around the Internet for memes by hand? Today, let's use Python to grab the memes we want.
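The idea, in short: every listing page at https://www.doutula.com/photo/list/?page=N carries a batch of meme thumbnails, and since the site lazy-loads images, the real image address is exposed in each img tag's data-original attribute rather than in src. The full script below first collects those addresses into a Redis set and then downloads them in a second pass. Here is a minimal sketch of just the extraction step; the page URL, CSS selector, and attribute name are taken from the source code below, while the hard-coded page number and the shortened user-agent string are only illustrative:

# Minimal sketch of the extraction step only (assumes requests and pyquery
# are installed; page 1 is hard-coded just for illustration).
import requests
from pyquery import PyQuery as pq

html = requests.get('https://www.doutula.com/photo/list/?page=1',
                    headers={'user-agent': 'Mozilla/5.0'}).text
doc = pq(html)
for img in doc('.col-xs-6 img').items():
    # The real image address sits in the lazy-load data-original attribute.
    print(img.attr('data-original'))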

Project source code

from pyquery import PyQuery as pq
import requests
from redis import StrictRedis

# Running total of images saved to disk (used by the download phase below).
o = 0

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 '
                  'SE 2.x MetaSr 1.0'
}

def xpan(canshu):
    """Scrape one listing page and add every image URL to a Redis set."""
    try:
        url = 'https://www.doutula.com/photo/list/?page=' + str(canshu)
        sp = requests.get(url, headers=headers).text
        doc = pq(sp)
        items = doc('.col-xs-6 img').items()
        redis = StrictRedis(host='172.18.200.5', port=6379, db=1, password='')
        for i in items:
            # The real image address is kept in the lazy-load attribute.
            cu = i.attr('data-original')
            # print(cu)
            if cu:
                # 'doutu_image_urls' stands in for the original Chinese set key
                # ("Doutu image download URLs").
                redis.sadd('doutu_image_urls', cu)
    except Exception as e:
        print('Error:', e.args)

a = int(input('How many pages do you need to crawl: '))
print('Crawling the links and saving them to the Redis database...')
for i in range(a):
    xpan(i + 1)
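
# --- Phase 2: download ---
# Read the collected URLs back out of the Redis set and save each image to
# disk. Splitting collection and downloading into two passes lets either step
# be re-run on its own.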

redis = StrictRedis(host='172.18.200.5', port=6379, db=1, password='')
# Drop any literal 'None' entries that may have been stored for images
# without a data-original attribute.
redis.srem('doutu_image_urls', 'None')
for i in redis.smembers('doutu_image_urls'):
    t = str(i, encoding='utf8')   # Redis returns bytes; decode to a URL string.
    # print(t)
    r = t.split('/')[-1]          # Last path segment becomes the file name.
    dizhi = 'F:/doutu/' + r       # Target folder (the original used a Chinese
                                  # folder name); it must already exist.
    req = requests.get(t)
    with open(dizhi, 'wb') as p:
        p.write(req.content)
    o += 1
    print('Saved %d images so far' % o)

The above is how to write the source code of a Python crawler that scrapes Doutula memes, as shared by the editor. If you have run into a similar question, you can refer to the analysis above. If you would like to learn more, you are welcome to follow the industry information channel.

