How to crawl comic pictures with Python
This article introduces how to crawl comic pictures with Python. It should serve as a useful reference; interested readers can follow along, and I hope you learn a lot from it. Below, the editor will walk you through it.
Development environment:
Python 3.6
PyCharm
Destination address
https://www.dmzj.com/info/yaoshenji.html
Code
Import the tools
import requests
import os
import re
from bs4 import BeautifulSoup
from contextlib import closing
from tqdm import tqdm
import time
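Note that requests, BeautifulSoup (beautifulsoup4), lxml, and tqdm are third-party packages; if any are missing, they can typically be installed with pip install requests beautifulsoup4 lxml tqdm. The rest (os, re, contextlib, time) ship with Python.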
Get the chapter links and chapter names
r = requests.get(url=target_url)
bs = BeautifulSoup(r.text, 'lxml')
# the chapter list lives in a <ul class="list_con_li"> element
list_con_li = bs.find('ul', class_="list_con_li")
cartoon_list = list_con_li.find_all('a')
chapter_names = []
chapter_urls = []
for cartoon in cartoon_list:
    href = cartoon.get('href')
    name = cartoon.text
    # insert at the front to reverse the on-page order
    chapter_names.insert(0, name)
    chapter_urls.insert(0, href)
print(chapter_urls)
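The insert(0, ...) calls build the lists back-to-front, which suggests the page lists chapters newest-first and the code wants them in reading order. A minimal equivalent sketch, under that assumption, collects in page order and then reverses once:

# Hypothetical alternative to the insert(0, ...) pattern above:
# collect names and links in page order, then reverse in one step.
chapter_names = [a.text for a in cartoon_list]
chapter_urls = [a.get('href') for a in cartoon_list]
chapter_names.reverse()
chapter_urls.reverse()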
Download cartoons
for i, url in enumerate(tqdm(chapter_urls)):
    print(i, url)
    download_header = {
        'Referer': url,
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36'
    }
    name = chapter_names[i]
    # remove '.' so the chapter name is safe as a directory name
    while '.' in name:
        name = name.replace('.', '')
    chapter_save_dir = os.path.join(save_dir, name)
    if name not in os.listdir(save_dir):
        os.mkdir(chapter_save_dir)
    r = requests.get(url=url)
    html = BeautifulSoup(r.text, 'lxml')
    script_info = html.script
    # picture IDs are 13- or 14-digit numbers embedded in the page script
    pics = re.findall(r'\d{13,14}', str(script_info))
    # pad 13-digit IDs with a trailing '0' so they sort correctly as integers
    for j, pic in enumerate(pics):
        if len(pic) == 13:
            pics[j] = pic + '0'
    pics = sorted(pics, key=lambda x: int(x))
    # the two path components of the image URL appear as |xxxxx| and |xxxx| tokens
    chapterpic_hou = re.findall(r'\|(\d{5})\|', str(script_info))[0]
    chapterpic_qian = re.findall(r'\|(\d{4})\|', str(script_info))[0]
    for idx, pic in enumerate(pics):
        if pic[-1] == '0':
            # strip the padding '0' added above before building the URL
            url = 'https://images.dmzj.com/img/chapterpic/' + chapterpic_qian + '/' + chapterpic_hou + '/' + pic[:-1] + '.jpg'
        else:
            url = 'https://images.dmzj.com/img/chapterpic/' + chapterpic_qian + '/' + chapterpic_hou + '/' + pic + '.jpg'
        pic_name = '%d.jpg' % (idx + 1)
        pic_save_path = os.path.join(chapter_save_dir, pic_name)
        print(url)
        response = requests.get(url, headers=download_header)
        print(response)
        if response.status_code == 200:
            with open(pic_save_path, 'wb') as file:
                file.write(response.content)
        else:
            print('link exception')
        time.sleep(2)
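The commented-out closing/iter_content fragments in the original hint at a streamed download. A minimal sketch of that variant, assuming the same url, download_header, and pic_save_path as in the loop above:

# Streamed variant: write the image in chunks instead of all at once.
with closing(requests.get(url, headers=download_header, stream=True)) as response:
    chunk_size = 1024  # write in 1 KB chunks
    if response.status_code == 200:
        with open(pic_save_path, 'wb') as file:
            for data in response.iter_content(chunk_size=chunk_size):
                file.write(data)
    else:
        print('link exception')

Streaming avoids holding the whole image in memory at once, which matters little for single comic pages but is the safer habit for larger files; it is also why closing is imported at the top.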
Create a save directory
save_dir = 'Devil God'
if save_dir not in os.listdir('./'):
    os.mkdir(save_dir)
target_url = "https://www.dmzj.com/info/yaoshenji.html"

Note that although this block appears last in the article, it must run before the two code blocks above: the chapter scrape needs target_url and the download loop needs save_dir.

Thank you for reading this article carefully. I hope this walkthrough of how to crawl comic pictures with Python is helpful to you.