How do you solve the problem of Python's requests.get returning an empty web page? This article gives a detailed analysis and solution to that question, in the hope of helping anyone facing the same problem find a simple, workable method.
Let's start with an example:
import requests
result = requests.get("http://data.10jqka.com.cn/financial/yjyg/")
result
Output result:
<Response [200]>
A status code of 200 means the request was processed successfully; at the HTTP level there is no problem.
Continuing, however, result.text turns out to be empty, and when crawling is blocked, phrases such as "sorry" or "unable to access" appear in the returned text. This is the site prohibiting crawlers, and the block has to be worked around. Setting headers is one way to deal with anti-crawling in a requests request: by sending the headers a browser would send, the script pretends to be an ordinary browser visiting the site. For pages with anti-crawler measures, you can set headers information to simulate a browser.
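Before adding headers, it helps to confirm the symptom. A minimal sketch of that check (the print statements are mine, not from the original code; the exact blocked-page wording varies by site):

import requests

# Request the page without any headers.
url = "http://data.10jqka.com.cn/financial/yjyg/"
result = requests.get(url)

# The server answers 200, but the body is empty or contains
# an anti-crawler notice such as "sorry" / "unable to access".
print(result.status_code)      # 200
print(len(result.text))        # suspiciously short
print("sorry" in result.text)  # True on a blocked response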
How to set up headers
Take two commonly used browsers as examples:
1. QQ Browser
Press F12 to open the developer tools.
Click the Network tab and press Ctrl+R to reload the page.
Click the first request at the bottom of the list; its request headers contain the User-Agent value we need to put into headers to solve the problem.
2. Microsoft Edge (Microsoft's own browser)
Similarly, press F12 to open the developer tools.
Click the Network tab and press Ctrl+R, then copy the User-Agent from the first request, just as with QQ Browser.
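Whichever browser you use, the value to copy is the User-Agent field from the request headers. Pasted into Python it becomes a dictionary like the one below (this User-Agent string is only an illustrative example, not the one from the article; replace it with whatever your own browser reports):

# Hypothetical User-Agent copied from the browser's developer tools.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/120.0.0.0 Safari/537.36 Edg/120.0.0.0'
}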
Now modify the previous code:
import requests
ur = "http://data.10jqka.com.cn/financial/yjyg/"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3880.400 QQBrowser/10.8.4554.400'}
result = requests.get(ur, headers=headers)
result.text
This successfully solves the problem of being unable to crawl the page.
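If you request several pages from the same site, a requests.Session lets you attach the headers once instead of passing them to every call. A small sketch of that pattern (the session-based variant is mine, not from the article):

import requests

session = requests.Session()
# Headers set here are sent with every request made through the session.
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 '
                  'Core/1.70.3880.400 QQBrowser/10.8.4554.400'
})

result = session.get("http://data.10jqka.com.cn/financial/yjyg/")
print(result.text[:200])  # now returns the real page content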
That is the answer to how to solve the problem of Python's requests.get returning an empty web page. I hope the above content is of some help to you. If you still have doubts, you can follow the industry information channel for more related knowledge.