How to use Python to automatically crawl pictures and save them
In this article, I'd like to share how to use Python to automatically crawl pictures and save them. Most people don't know much about this topic, so I'm sharing it for your reference; I hope you learn a lot from it. Let's get to it!
I. Preparatory work
We will use Python to crawl and save images from Bing's image search (as the URLs in the code below show). Taking "emotion pictures" as the example query, searching gives the results shown in the figure below.
Press F12 to open the developer tools and inspect the page source. You can see that the basic information for the images we are going to crawl lives in the src attribute of each img tag.
II. Code implementation
This crawler mainly uses the following libraries (requests and BeautifulSoup are third-party; re, time, and os are in the standard library):
import re
import time
import requests
from bs4 import BeautifulSoup
import os
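If requests or BeautifulSoup are missing, both can be installed from PyPI first (BeautifulSoup ships as the beautifulsoup4 package), for example:

pip install requests beautifulsoup4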
The basic idea can be divided into three small parts (see the sketch after this list):

1. Get the web page content
2. Parse the web page
3. Save the pictures to the appropriate location
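Before walking through each part, here is a minimal structural sketch of how the three steps could fit together. The names get_html, parse_links, and save_images are placeholders of my own, not the article's code, which follows below.

# a rough sketch of the three-step structure; the function names are placeholders
import requests

def get_html(url):
    # step 1: fetch the page and return its text
    return requests.get(url).text

def parse_links(html):
    # step 2: extract the picture hyperlinks from the html (covered next)
    ...

def save_images(links):
    # step 3: download each hyperlink and write the bytes to disk
    ...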
Let's look at the first part: getting the content of the web page.
baseurl = 'https://cn.bing.com/images/search?q=%E6%83%85%E7%BB%AA%E5%9B%BE%E7%89%87&qpvt=%e6%83%85%e7%bb%aa%e5%9b%be%e7%89%87&form=IGRE&first=1&cw=418&ch=652&tsc=ImageBasicHover'
head = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36 Edg/92.0.902.67"}
response = requests.get(baseurl, headers=head)  # get the web page
html = response.text  # convert the web page information into text form
Isn't that easy?
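One check the original code skips is verifying that the request actually succeeded before parsing. A small, hedged addition, assuming baseurl and head are defined as in the snippet above:

import requests

# assumes baseurl and head from the snippet above
response = requests.get(baseurl, headers=head, timeout=10)  # timeout so a dead server does not hang the script
response.raise_for_status()  # raises requests.HTTPError on 4xx/5xx responses
html = response.text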
The second part, parsing the web page, is the heavy lifting.
Look at the code.
# this parsing logic lives inside getdata(baseurl); the full function appears in the complete code below
Img = re.compile(r'img.*src="(.*?)"')  # regular expression to match the picture links
soup = BeautifulSoup(html, "html.parser")  # BeautifulSoup parses the html
# i = 0  # counter initial value
data = []  # list to store the picture hyperlinks
for item in soup.find_all('img', src=True):  # iterate over the img tags in the page
    item = str(item)  # convert to str type
    Picture = re.findall(Img, item)  # combine the re regular expression with BeautifulSoup to keep only the hyperlink
    for b in Picture:
        data.append(b)
        # i = i + 1
return data[-1]  # take the last result
# print(i)
This part combines BeautifulSoup with re regular expressions, so a certain amount of background with both helps.
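For reference, the same hyperlinks can usually be extracted without a regular expression at all, by reading the src attribute straight off the tags BeautifulSoup returns. A minimal sketch, assuming html already holds the page text from part one:

from bs4 import BeautifulSoup

soup = BeautifulSoup(html, "html.parser")
links = [img["src"] for img in soup.find_all("img", src=True)]  # every img tag that carries a src attribute

The regex-over-str(item) approach in the article works too; this is simply the more idiomatic BeautifulSoup route.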
Here is the third part: saving the pictures.
i = 0  # counter used to name the picture files
for m in getdata(baseurl='https://cn.bing.com/images/search?q=%E6%83%85%E7%BB%AA%E5%9B%BE%E7%89%87&qpvt=%e6%83%85%e7%bb%aa%e5%9b%be%e7%89%87&form=IGRE&first=1&cw=418&ch=652&tsc=ImageBasicHover'):
    resp = requests.get(m)  # request the picture itself
    byte = resp.content  # take the binary content of the response
    print(os.getcwd())  # print the current path via the os library
    i = i + 1  # increment the counter
    # img_path = os.path.join(m)
    with open("path{}.jpg".format(i), "wb") as f:  # write the file in binary mode
        f.write(byte)
    time.sleep(0.5)  # download one picture every 0.5 seconds into the save directory
    print("The {}th picture was crawled successfully!".format(i))
An explanation of each line is written in the comments. If anything is unclear, feel free to leave a comment or send me a private message.
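One practical detail: open() fails if the target folder does not exist, and the complete code below relies on os.chdir() into an existing directory. A hedged alternative (the folder name here is only an example) is to create the save directory up front:

import os

save_dir = "emotional picture test"  # example folder name; adjust to your own path
os.makedirs(save_dir, exist_ok=True)  # create the folder if it is not there yet
# then write each file inside it, e.g.
# with open(os.path.join(save_dir, "path{}.jpg".format(i)), "wb") as f: ...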
Here is the complete code:
import re
import time
import requests
from bs4 import BeautifulSoup
import os

'''
m = 'https://tse2-mm.cn.bing.net/th/id/OIP-C.uihwmxDdgfK4FlCIXx-3jgHaPc?w=115&h=183&c=7&r=0&o=5&pid=1.7'
resp = requests.get(m)
byte = resp.content
print(os.getcwd())
img_path = os.path.join(m)
'''

def main():
    baseurl = 'https://cn.bing.com/images/search?q=%E6%83%85%E7%BB%AA%E5%9B%BE%E7%89%87&qpvt=%e6%83%85%e7%bb%aa%e5%9b%be%e7%89%87&form=IGRE&first=1&cw=418&ch=652&tsc=ImageBasicHover'
    datalist = getdata(baseurl)

def getdata(baseurl):
    Img = re.compile(r'img.*src="(.*?)"')  # regular expression to match the picture links
    datalist = []
    head = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36 Edg/92.0.902.67"}
    response = requests.get(baseurl, headers=head)  # get the web page
    html = response.text  # convert the web page information into text form
    soup = BeautifulSoup(html, "html.parser")  # BeautifulSoup parses the html
    # i = 0  # counter initial value
    data = []  # list to store the picture hyperlinks
    for item in soup.find_all('img', src=True):  # iterate over the img tags in the page
        item = str(item)  # convert to str type
        Picture = re.findall(Img, item)  # keep only the hyperlink
        for b in Picture:  # traverse the matches
            data.append(b)
            # i = i + 1
    datalist.append(data[-1])  # take the last result
    return datalist  # return a new list containing the hyperlink
    # print(i)

'''
with open("img_path.jpg", "wb") as f:
    f.write(byte)
'''

if __name__ == '__main__':
    os.chdir("D:/emotional picture test")  # switch to the save directory (adjust the path as needed)
    main()
    i = 0  # counter used to name the picture files
    for m in getdata(baseurl='https://cn.bing.com/images/search?q=%E6%83%85%E7%BB%AA%E5%9B%BE%E7%89%87&qpvt=%e6%83%85%e7%bb%aa%e5%9b%be%e7%89%87&form=IGRE&first=1&cw=418&ch=652&tsc=ImageBasicHover'):
        resp = requests.get(m)  # request the picture itself
        byte = resp.content  # take the binary content of the response
        print(os.getcwd())  # print the current path via the os library
        i = i + 1  # increment the counter
        # img_path = os.path.join(m)
        with open("path{}.jpg".format(i), "wb") as f:  # write the file in binary mode
            f.write(byte)
        time.sleep(0.5)  # download one picture every 0.5 seconds into D:/emotional picture test
        print("The {}th picture was crawled successfully!".format(i))
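One caveat about the code as written: getdata() appends only data[-1], so it returns a list with a single hyperlink and the download loop saves just one picture. If the goal is to grab every matched picture, a hedged variant (getdata_all is my own name, not the article's) could return all of the links:

import requests
from bs4 import BeautifulSoup

def getdata_all(baseurl):
    # hypothetical variant of getdata() that keeps every matched link,
    # not only the last one
    head = {"User-Agent": "Mozilla/5.0"}  # shortened UA string for the sketch
    html = requests.get(baseurl, headers=head).text
    soup = BeautifulSoup(html, "html.parser")
    return [img["src"] for img in soup.find_all("img", src=True)]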
Screenshot of the final run:
The above is the full content of the article "How to use Python to automatically crawl pictures and save them". Thank you for reading! I hope what was shared here helps you, and if you would like to learn more, stay tuned.