2025-02-23 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article is about how to crawl data from the TOP list of Maoyan ("cat's eye") movies with a Python crawler. The editor thinks it is very practical, so it is shared with you as a reference; follow along and have a look.

The crawler collects each film's ranking, poster image, title, leading actors, release time, and rating. Before crawling, open the Maoyan movies TOP100 page, study the page structure, and locate the information we need; then extract it with a regular expression.
The code is as follows:
import json
import re
import time

import requests
from requests.exceptions import RequestException


def get_one_page(url):
    """Fetch one page of the board; return its HTML text, or None on failure."""
    try:
        headers = {'User-Agent': 'agent information'}  # fill in a real User-Agent string
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None


def parse_one_page(html):
    """Yield one dict per film: ranking, image, title, actors, release time, score."""
    pattern = re.compile(
        r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)"'
        r'.*?name"><a.*?>(.*?)</a>.*?star">(.*?)</p>'
        r'.*?releasetime">(.*?)</p>'
        r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>',
        re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            'index': item[0],
            'image': item[1],
            'title': item[2],
            'actor': item[3].strip()[3:],   # drop the "主演:" (starring) prefix
            'time': item[4].strip()[5:],    # drop the "上映时间:" (release time) prefix
            'score': item[5] + item[6],     # integer part + fractional part
        }


def write_to_file(content):
    """Append one JSON object per line to result.txt."""
    with open('result.txt', 'a', encoding='utf-8') as f:
        f.write(json.dumps(content, ensure_ascii=False) + '\n')


def main(offset):
    url = 'http://maoyan.com/board/4?offset=' + str(offset)
    html = get_one_page(url)
    for item in parse_one_page(html):
        print(item)
        write_to_file(item)


if __name__ == '__main__':
    # The board shows 10 films per page; crawl 10 pages for the TOP100,
    # pausing between requests to be polite to the server.
    for i in range(10):
        main(offset=i * 10)
        time.sleep(1)
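To see what the regular expression in parse_one_page actually captures, here is a minimal sketch run against a hand-written HTML fragment. The fragment and its values are invented for illustration; the real Maoyan markup may differ in detail.

```python
import re

# A hand-written fragment imitating one <dd> entry on the Maoyan TOP100
# board (values invented for illustration; the real markup may differ).
html = '''<dd>
<i class="board-index board-index-1">1</i>
<img data-src="http://example.com/poster.jpg" alt="">
<p class="name"><a href="/films/1203">Farewell My Concubine</a></p>
<p class="star">Starring: Leslie Cheung, Zhang Fengyi</p>
<p class="releasetime">Released: 1993-01-01</p>
<p class="score"><i class="integer">9.</i><i class="fraction">5</i></p>
</dd>'''

# The same pattern used by parse_one_page above
pattern = re.compile(
    r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)"'
    r'.*?name"><a.*?>(.*?)</a>.*?star">(.*?)</p>'
    r'.*?releasetime">(.*?)</p>'
    r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>',
    re.S)

items = re.findall(pattern, html)
print(items[0][0], items[0][2], items[0][5] + items[0][6])
# prints: 1 Farewell My Concubine 9.5
```

The non-greedy `.*?` together with `re.S` lets the pattern skip across tag boundaries and newlines between the fields it captures.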
Through the above code, we can collect the data for every entry on the Maoyan movies TOP100 list.
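Since write_to_file stores one JSON object per line (the JSON Lines convention), the saved results can easily be loaded back for further processing. A minimal sketch, with sample records invented for illustration:

```python
import json

# Write two sample records in the same one-JSON-object-per-line format
# that write_to_file above produces (records invented for illustration).
records = [
    {'index': '1', 'title': 'Farewell My Concubine', 'score': '9.5'},
    {'index': '2', 'title': 'Roman Holiday', 'score': '9.1'},
]
with open('result.txt', 'w', encoding='utf-8') as f:
    for r in records:
        f.write(json.dumps(r, ensure_ascii=False) + '\n')

# Each line is an independent JSON document, so parse line by line.
with open('result.txt', encoding='utf-8') as f:
    loaded = [json.loads(line) for line in f if line.strip()]

print(loaded[1]['title'])
# prints: Roman Holiday
```

Writing one object per line means a crash mid-crawl loses at most one record, and the file can be appended to across runs.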
Thank you for reading! This is the end of this article on how a Python crawler can crawl the Maoyan movies TOP100 list. I hope the above content has been helpful and that you have learned something from it; if you think the article is good, please share it for more people to see!