

How to Crawl News and Information with Python


This article mainly shows how to crawl news with Python. The content is easy to understand and clearly organized, and should help resolve your doubts; follow along with the editor below to study "How to Crawl News and Information with Python".

Preface

A simple Python news-collection case: from the list page to the detail page, and on to saving the data as a .txt document. The site's page structure is fairly regular, simple, and clear, so the news content is straightforward to collect and save.


Libraries used

requests, time, re, fake_useragent (UserAgent), lxml (etree)

import requests, time, re
from fake_useragent import UserAgent
from lxml import etree
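requests, fake_useragent, and lxml are third-party packages; if they are not already installed, they can be pulled in with pip (package names as published on PyPI):

pip install requests fake-useragent lxml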

List page: parsing the article links with XPath

href_list = req.xpath('//ul[@class="news-list"]/li/a/@href')
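As a minimal standalone sketch of this step (assuming the same list-page URL used in the full program below, and a plain browser-style User-Agent), fetching the page and printing the article links looks like this:

import requests
from lxml import etree

# hypothetical standalone test of the list-page XPath
url = "https://yz.chsi.com.cn/kyzx/jyxd/"
html = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).content.decode('utf-8')
req = etree.HTML(html)
href_list = req.xpath('//ul[@class="news-list"]/li/a/@href')
for href in href_list:
    # the hrefs are relative, so join them to the site root
    print(f'https://yz.chsi.com.cn{href}')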

Detail page: parsing the content with XPath

h3 = req.xpath('//div[@class="title-box"]/h3/text()')[0]
author = req.xpath('//div[@class="title-box"]/span[@class="news-from"]/text()')[0]
details = req.xpath('//div[@class="content-l detail"]/p/text()')
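Note that xpath() always returns a list, so indexing with [0] raises IndexError when the node is missing; the full program below catches this and retries. A hedged sketch of a safer lookup (first_or_none is a hypothetical helper, not part of the original code):

def first_or_none(req, path):
    # xpath() returns a list; return the first match or None instead of raising IndexError
    nodes = req.xpath(path)
    return nodes[0] if nodes else None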

Formatting the content

detail = '\n'.join(details)

Formatting the title: replacing characters that are illegal in file names

pattern = r'[\/\\:\*\?\"\|]'
new_title = re.sub(pattern, "_", title)  # replace with an underscore
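For example, a title containing characters that Windows forbids in file names comes out safe to pass to open():

import re

pattern = r'[\/\\:\*\?\"\|]'  # characters not allowed in Windows file names
print(re.sub(pattern, "_", 'Exam Q&A: "who/what?"'))  # Exam Q&A_ _who_what__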

Saving the data as a .txt file

def save(self, h3, author, detail):
    with open(f'{h3}.txt', 'w', encoding='utf-8') as f:
        f.write('%s\n%s\n%s' % (h3, author, detail))  # title, author, body (separator assumed; garbled in source)
    print(f"saved {h3}.txt text successfully!")

Traversing the data: yield processing

def get_tasks(self):
    data_list = self.parse_home_list(self.url)
    for item in data_list:
        yield item
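Because both get_tasks and parse_home_list yield items rather than return a list, nothing is fetched until the caller iterates. A minimal usage sketch, mirroring the __main__ block in the full source below:

spider = Spider("https://yz.chsi.com.cn/kyzx/jyxd/")
for data in spider.get_tasks():  # each iteration crawls and saves one article lazily
    print(data)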

Program running effect

Attached source code reference:

# Information Collection for Postgraduate entrance examination
# 20200710 by Wechat: huguo00289
# -*- coding: UTF-8 -*-
import requests, time, re
from fake_useragent import UserAgent
from lxml import etree


class RandomHeaders(object):
    ua = UserAgent()

    @property
    def random_headers(self):
        # a fresh random User-Agent for every request
        return {'User-Agent': self.ua.random}


class Spider(RandomHeaders):
    def __init__(self, url):
        self.url = url

    def parse_home_list(self, url):
        # fetch the list page and yield one parsed item per article link
        response = requests.get(url, headers=self.random_headers).content.decode('utf-8')
        req = etree.HTML(response)
        href_list = req.xpath('//ul[@class="news-list"]/li/a/@href')
        print(href_list)
        for href in href_list:
            item = self.parse_detail(f'https://yz.chsi.com.cn{href}')
            yield item

    def parse_detail(self, url):
        print(f">>> crawling {url}")
        try:
            response = requests.get(url, headers=self.random_headers).content.decode('utf-8')
            time.sleep(2)
        except Exception as e:
            print(e.args)
            self.parse_detail(url)
        else:
            req = etree.HTML(response)
            try:
                h3 = req.xpath('//div[@class="title-box"]/h3/text()')[0]
                h3 = self.validate_title(h3)
                author = req.xpath('//div[@class="title-box"]/span[@class="news-from"]/text()')[0]
                details = req.xpath('//div[@class="content-l detail"]/p/text()')
                detail = '\n'.join(details)
                print(h3, author, detail)
                self.save(h3, author, detail)
                return h3, author, detail
            except IndexError:
                print(">>> acquisition error, delay needed, retrying after 5s.")
                time.sleep(5)
                self.parse_detail(url)

    @staticmethod
    def validate_title(title):
        pattern = r'[\/\\:\*\?\"\|]'
        new_title = re.sub(pattern, "_", title)  # replace with an underscore
        return new_title

    def save(self, h3, author, detail):
        with open(f'{h3}.txt', 'w', encoding='utf-8') as f:
            f.write('%s\n%s\n%s' % (h3, author, detail))  # title, author, body (separator assumed; garbled in source)
        print(f"saved {h3}.txt text successfully!")

    def get_tasks(self):
        data_list = self.parse_home_list(self.url)
        for item in data_list:
            yield item


if __name__ == "__main__":
    url = "https://yz.chsi.com.cn/kyzx/jyxd/"
    spider = Spider(url)
    for data in spider.get_tasks():
        print(data)

The above is all the content of "How to Crawl News and Information with Python". Thank you for reading! I hope the content shared here has given you a clear picture and helps you in practice; if you want to learn more, welcome to follow the industry information channel!
