What are the basic knowledge points of a Python crawler?
This article introduces the basic knowledge points of a Python crawler. Many people run into questions about these basics in daily work, so the editor has consulted a range of materials and organized them into simple, easy-to-use notes. Hopefully they answer the question "what are the basic knowledge points of a Python crawler?" Follow along and study!
Web page knowledge
Although HTML, JavaScript, CSS, and XPath are not difficult, they must be understood: you need to know how a page is put together before you can take it apart, as the small sketch below shows.
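For instance, here is a minimal sketch of picking values out of an HTML snippet with an XPath expression, using the lxml library; the HTML and the XPath are invented for the example:

```python
# A minimal sketch: extracting data from HTML with XPath via lxml.
# The HTML snippet and the XPath expression are invented examples.
from lxml import html

page = html.fromstring("""
<html><body>
  <div class="item"><a href="/detail/1">First item</a></div>
  <div class="item"><a href="/detail/2">Second item</a></div>
</body></html>""")

# Select the link text inside every div with class "item".
titles = page.xpath('//div[@class="item"]/a/text()')
print(titles)  # ['First item', 'Second item']
```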
HTTP knowledge
In general, a crawler has to simulate what a browser does in order to fetch a page's information. If a website requires login before it shows more information, the crawler has to log in too, which means submitting the account name and password. Some sites additionally require the cookies saved after login to be sent with every later request; the sketch below covers both steps.
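A minimal sketch with the requests library; the login URL and the form field names are hypothetical placeholders that differ from site to site:

```python
# A minimal sketch: logging in and reusing cookies with requests.
# The URL and the form field names are hypothetical placeholders.
import requests

session = requests.Session()   # a Session keeps cookies between requests

# Submit the login form; any Set-Cookie headers from the server are
# stored in the session automatically.
session.post("https://example.com/login",
             data={"username": "me", "password": "secret"})

# Later requests carry the saved cookies, so member-only pages load.
resp = session.get("https://example.com/member/profile")
print(resp.status_code)
```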
Regular expressions
With regular expressions we can slice up the text of a web page and pull out exactly the data we want, so regular expressions also need to be understood; a small example follows.
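For example, a minimal sketch that pulls every link target out of a piece of raw HTML (the HTML string is invented; for real pages a proper parser is usually more robust than a regex):

```python
# A minimal sketch: extracting link targets with a regular expression.
# The HTML string is invented; a real page would come from a request.
import re

html_text = '<a href="/a.html">A</a> <a href="/b.html">B</a>'
links = re.findall(r'href="([^"]+)"', html_text)
print(links)  # ['/a.html', '/b.html']
```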
Some important crawler libraries
urllib and urllib2 (merged into urllib.request in Python 3)
Beautiful Soup
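A minimal sketch that combines the two, fetching a page with urllib and parsing it with Beautiful Soup; the URL is a placeholder (on Python 3 the urllib2 calls live in urllib.request):

```python
# A minimal sketch: fetch with urllib, parse with Beautiful Soup.
# The URL is a placeholder; urllib2 is Python 2, urllib.request is
# the Python 3 equivalent.
from urllib.request import urlopen
from bs4 import BeautifulSoup

html_doc = urlopen("https://example.com").read()
soup = BeautifulSoup(html_doc, "html.parser")

print(soup.title.string)        # the page <title>
for a in soup.find_all("a"):    # every link on the page
    print(a.get("href"))
```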
Database
We need somewhere to store the crawled data, either in files or in a database. Here I will use MySQL, MongoDB (a database that suits crawlers well), and Redis (used for distributed crawling). A short MongoDB sketch follows.
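For example, a minimal sketch that saves crawled records into MongoDB with the pymongo driver; the host, database, and collection names are assumptions made for the example:

```python
# A minimal sketch: saving crawled records into MongoDB via pymongo.
# Host, database, and collection names are assumptions for the example.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["crawler"]["items"]

# One document per crawled page.
collection.insert_one({"url": "https://example.com/detail/1",
                       "title": "First item"})
print(collection.count_documents({}))
```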
Crawler frameworks
The two crawler frameworks PySpider and Scrapy are very powerful. A simple crawler can be written with just urllib, urllib2, and regular expressions, but an advanced crawler is better built on one of these frameworks. Both have to be installed separately; a minimal Scrapy spider is sketched below.
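A minimal Scrapy spider as a sketch; the start URL and the CSS selector are placeholders (it can be run with `scrapy runspider example.py`):

```python
# A minimal sketch of a Scrapy spider. The start URL and the CSS
# selector are placeholders; run with: scrapy runspider example.py
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com"]

    def parse(self, response):
        # Yield one item per heading found on the page.
        for title in response.css("h2::text").getall():
            yield {"title": title}
```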
Anti-crawler
Sometimes you want to stop others from crawling your website's data, and for that you can add anti-crawler measures. Taobao, for example, blocks Baidu's spider, so its listings cannot be found through Baidu search; that keeps the search engine out of the competition and lets Taobao sell its own ranking to merchants. A server-side sketch follows.
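One common server-side measure is simply refusing requests whose User-Agent looks like a crawler. A minimal sketch using Flask (the app and the blocked keywords are assumptions for illustration; real defenses also use rate limits, CAPTCHAs, and so on):

```python
# A minimal sketch: a server refusing suspected crawler User-Agents.
# The Flask app and the blocked keywords are illustrative assumptions;
# real anti-crawler setups also rely on rate limits, CAPTCHAs, etc.
from flask import Flask, abort, request

app = Flask(__name__)
BLOCKED = ("spider", "bot", "crawler")

@app.route("/")
def index():
    ua = request.headers.get("User-Agent", "").lower()
    if any(word in ua for word in BLOCKED):
        abort(403)               # refuse suspected crawlers
    return "human-facing content"
```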
Distributed crawler
Use Redis on each host to cache the crawled data and to coordinate the crawler processes, for example through a shared queue of URLs still to be fetched; a minimal sketch follows.
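A common pattern is a shared Redis list acting as the URL queue: producers push discovered URLs, and worker processes on every host pop and fetch them. A minimal sketch with the redis-py client; the host and the key name are placeholders:

```python
# A minimal sketch: a shared Redis list as a distributed URL queue.
# The Redis host and the key name are placeholders; each worker host
# runs the same pop-and-fetch loop against the same server.
import redis

r = redis.Redis(host="localhost", port=6379)

# Producer side: push a newly discovered URL onto the queue.
r.lpush("crawler:queue", "https://example.com/page1")

# Worker side: block until a URL is available, then fetch it.
_key, url = r.brpop("crawler:queue")
print("would fetch:", url.decode())
```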
This concludes the study of "what are the basic knowledge points of a Python crawler". Hopefully it has cleared up your doubts. Theory works best when paired with practice, so go and give it a try! If you want to keep learning more related knowledge, please keep following the site; the editor will keep working hard to bring you more practical articles!