This article is about how to deal with a crawler's proxy IP being blocked. The editor finds the approach quite practical, so it is shared here for your reference; follow along and have a look.
1. Reduce the access speed.
Since requesting too quickly, as mentioned above, is what gets an IP blocked, the most intuitive remedy is to slow down, which keeps our IP from being banned. Slowing down, however, lowers the crawler's efficiency, so the key question is: by how much?
First, probe the rate-limit threshold the website enforces, then set a reasonable access speed just below that limit.
It is best not to use a fixed interval; instead, pick each delay randomly within a range (see the sketch below), so the requests do not arrive with a regularity the system can detect and block.
Reducing the speed inevitably hurts crawling efficiency. If the crawler cannot crawl efficiently, how is it any better than collecting pages by hand? It would no longer have the advantage of being a crawler at all.
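To make the randomized pacing concrete, here is a minimal sketch in Python; the URLs and the delay bounds are placeholder assumptions, not values from the article:

```python
import random
import time

import requests

# Hypothetical pacing bounds; tune them to sit just under the site's measured rate limit.
MIN_DELAY = 2.0  # seconds
MAX_DELAY = 5.0  # seconds

# Placeholder URLs for illustration.
urls = [f"https://example.com/page/{i}" for i in range(1, 6)]

for url in urls:
    response = requests.get(url, timeout=10)
    print(url, response.status_code)
    # Sleep a random interval so the request pattern is not suspiciously regular.
    time.sleep(random.uniform(MIN_DELAY, MAX_DELAY))
```

Drawing each delay from random.uniform keeps the average pace predictable while avoiding the fixed, machine-like interval that the article warns gets an IP banned.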
2. Switch IPs and run multiple crawlers at the same time.
Since a single crawler's speed is capped, we can have several crawlers grab pages at the same time.
We can use multiple threads or processes, combined here with proxies, so that different threads use different IP addresses. To the site it looks like many different users visiting at once, which greatly improves the crawler's throughput; a sketch follows.
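As an illustration of the idea, here is a minimal sketch using Python's standard thread pool together with the requests library; the proxy addresses and URLs are hypothetical placeholders, not endpoints from the article:

```python
import concurrent.futures

import requests

# Hypothetical proxy pool: substitute real proxy endpoints from your provider.
PROXIES = [
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
]

# Placeholder URLs for illustration.
urls = [f"https://example.com/page/{i}" for i in range(1, 10)]

def fetch(url: str, proxy: str) -> int:
    # Route this request through one proxy so each thread appears as a distinct user.
    response = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
    return response.status_code

with concurrent.futures.ThreadPoolExecutor(max_workers=len(PROXIES)) as pool:
    futures = [
        pool.submit(fetch, url, PROXIES[i % len(PROXIES)])
        for i, url in enumerate(urls)
    ]
    for future in concurrent.futures.as_completed(futures):
        # result() re-raises any request error from the worker thread.
        print(future.result())
```

Each thread routes its requests through a different proxy, so from the site's perspective the load is spread across several apparent users rather than one fast client.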
PS: beyond the above, there are a few more things a well-behaved crawler should handle:
(1) Proper support for robots.txt (see the sketch after this list).
(2) Automatic throttling based on estimates of the origin server's bandwidth and load.
(3) Automatic throttling based on estimates of how frequently the source content changes.
(4) A site-administrator interface where site owners can register, verify themselves, and control the crawl rate and frequency.
(5) Awareness of virtual hosting, with throttling applied per origin IP address.
(6) Support for some form of machine-readable sitemap.
(7) Correct crawl-queue priority and ordering.
(8) Sensible duplicate-domain and duplicate-content detection, to avoid re-crawling the same site under different domains (last.fm and lastfm.com, and the million other sites that serve the same content on multiple domains).
(9) Understanding GET parameters and what counts as "search results" in many site-specific search engines. For example, some pages may use certain GET parameters to link to result pages of another site's internal search; you usually do not want to crawl those result pages.
(10) Understanding other common link formats, such as login/logout links.
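As a concrete starting point for item (1), here is a minimal sketch using Python's standard-library urllib.robotparser; the site, user-agent string, and URL are placeholder assumptions:

```python
from urllib import robotparser

# Placeholder robots.txt location; point this at the target site.
rp = robotparser.RobotFileParser("https://example.com/robots.txt")
rp.read()

user_agent = "MyCrawler"  # hypothetical user-agent string
url = "https://example.com/some/page.html"

# Fetch the page only if robots.txt permits it for our user agent.
if rp.can_fetch(user_agent, url):
    print("allowed:", url)
else:
    print("disallowed by robots.txt:", url)

# robots.txt may also declare a crawl delay, which ties back to the
# throttling advice in items (2) and (3); None means no delay is specified.
print("crawl delay:", rp.crawl_delay(user_agent))
```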
Thank you for reading! That concludes this article on how to deal with a crawler's proxy IP being blocked. I hope the above content has been of some help and has taught you something new. If you think the article is good, please share it so more people can see it!