How the Python Crawler's lxml Library Parses Web Pages with XPath

This article introduces how a Python crawler can use the lxml library to parse web pages with XPath. Many people have doubts about this topic in daily work, so this article gathers the material into a simple, easy-to-follow method. I hope it helps resolve your doubts. Next, please follow along and study!
(1) What is XPath

XPath is a language for finding information in XML documents; it can be used to traverse the elements and attributes of an XML document. Mainstream browsers support XPath because HTML pages are represented in the DOM as XHTML documents.

The XPath language is based on the tree structure of an XML document. It lets us browse the tree and select nodes by a variety of criteria, so that we can find the data we want.
First we need to install an XPath plug-in in the Chrome browser. You can search for it and download it from the Chrome Web Store.

After installation, restart the browser and press the shortcut Ctrl + Shift + X. If a black box appears on the web page, the installation succeeded!
(2) Basic XPath syntax

Path query

//  : select all descendant nodes, regardless of hierarchy
/   : select direct child nodes

Predicate query

//div[@id]
//div[@id="maincontent"]

Attribute query

//@class

Fuzzy query

//div[contains(@id, "he")]
//div[starts-with(@id, "he")]

Content query

//div/h2/text()
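To make these query forms concrete, here is a minimal runnable sketch against a small made-up page (the markup, ids, and text below are invented purely for illustration):

from lxml import etree

html = '<div id="maincontent"><h2>Hello</h2></div><div id="header" class="nav"></div>'
tree = etree.HTML(html)  # lxml's HTML parser wraps the fragment in html/body for us

print(tree.xpath('//div'))                          # all div elements, at any depth
print(tree.xpath('//div[@id]'))                     # divs that have an id attribute
print(tree.xpath('//div[@id="maincontent"]'))       # the div with that exact id
print(tree.xpath('//@class'))                       # every class attribute value: ['nav']
print(tree.xpath('//div[contains(@id, "he")]'))     # id contains "he" (matches "header")
print(tree.xpath('//div[starts-with(@id, "he")]'))  # id starts with "he" (matches "header")
print(tree.xpath('//div/h2/text()'))                # text of h2 children: ['Hello']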
(3) The lxml library

lxml is a Python parsing library. It supports parsing both HTML and XML, supports XPath queries, and is very efficient.
We need to install the lxml library before using it in PyCharm. Just enter this command in the terminal:

pip install lxml -i https://pypi.douban.com/simple

Note: it must be installed into the environment we are currently using.
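As a quick check that lxml landed in the active environment (a one-line sketch; it assumes the python command on your PATH belongs to that same environment):

python -c "from lxml import etree; print(etree.LXML_VERSION)"

If this prints a version tuple such as (4, 9, 3, 0), the import works and the install succeeded.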
(4) Using the lxml library

Import lxml.etree:

from lxml import etree

Parse a local file:

tree = etree.parse('xxx.html')  # parse a local file

Parse a server response:

tree = etree.HTML(content)  # parse a web page fetched over the network

Get the result:

result = tree.xpath('//div/div/@aria-label')[0]

Note: xpath() returns a list. When the result contains several values, we can use a subscript to take the one we want.
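To see these steps together, here is a self-contained sketch (the markup and the aria-label values are invented for illustration):

from lxml import etree

content = '<html><body><div><div aria-label="first"></div><div aria-label="second"></div></div></body></html>'
tree = etree.HTML(content)  # parse the HTML string

labels = tree.xpath('//div/div/@aria-label')
print(labels)     # the whole list: ['first', 'second']
print(labels[0])  # subscript to take a single value: 'first'

Two cautions: if the expression matches nothing, xpath() returns an empty list, so indexing with [0] raises an IndexError; and etree.parse() expects well-formed XML by default, so for a messy local HTML file you can pass an HTML parser, e.g. tree = etree.parse('xxx.html', etree.HTMLParser()).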
(5) Demonstration

import urllib.parse
import urllib.request
from lxml import etree

url = 'https://www.baidu.com/s?'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/98.0.4758.102 Safari/537.36'
}
cre_data = {'wd': 'write keywords here'}
data = urllib.parse.urlencode(cre_data)
url = url + data

request = urllib.request.Request(url=url, headers=headers)
response = urllib.request.urlopen(request)
content = response.read().decode('utf-8')
print(content)

# tree = etree.parse('xxx.html')  # parse a local file instead, if needed
tree = etree.HTML(content)  # parse the web page
result = tree.xpath('//div/div/@aria-label')[0]
print(result)

At this point, the study of how the Python crawler's lxml library parses web pages with XPath is over. I hope it resolves your doubts. Pairing theory with practice is the best way to learn, so go and try it! If you want to continue learning more related knowledge, please keep following the site; the editor will keep working hard to bring you more practical articles!