2025-02-24 Update From: SLTechnology News&Howtos > Development
Shulou(Shulou.com)06/02 Report--
How can Python collect fund data? This article analyzes the question and answers it in detail, in the hope of helping readers who want to solve this problem find a simple, feasible method.
Implementation process
Approach:
What data do we need, and where is it located?
Code implementation:
Send a request
Get data
Parsing data
Multi-page crawl
Save data
Knowledge points:
Sending requests with requests
Using the browser developer tools
Parsing JSON-style data
Using regular expressions
Development environment:
Version: Python 3.8
Editor: PyCharm 2021.2
Goal: scrape fund ranking data from fund.eastmoney.com
First, analyze the website
Step 1: open the developer tools with F12, or right-click the page and choose Inspect
Step 2: refresh the page, open the search tool, enter a fund code in the search box, and click search
Step 3: find the real URL that serves the data
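Once the real URL is found in the developer tools, its long query string can be assembled from named parameters instead of being hard-coded. The sketch below uses requests' PreparedRequest to do this offline; the parameter names come from the URL captured later in this article, and the values shown are illustrative.

```python
from requests.models import PreparedRequest

# Parameter names taken from the captured rankhandler.aspx URL;
# values here are illustrative
params = {
    'op': 'ph', 'dt': 'kf', 'ft': 'all', 'sc': '6yzf', 'st': 'desc',
    'sd': '2020-12-16', 'ed': '2021-12-16',
    'pi': 1,    # page index
    'pn': 50,   # rows per page
    'dx': 1,
}

req = PreparedRequest()
req.prepare_url('http://fund.eastmoney.com/data/rankhandler.aspx', params)
print(req.url)
```

Building the URL this way makes the page index (`pi`) and page size (`pn`) easy to change when paginating later.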
Second, write the code
Import module:
import requests
import re
import csv
Send request:
url = 'http://fund.eastmoney.com/data/rankhandler.aspx?op=ph&dt=kf&ft=all&rs=&gs=0&sc=6yzf&st=desc&sd=2020-12-16&ed=2021-12-16&qdii=&tabSubtype=,&pi=1&pn=50&dx=1'
headers = {
    'Cookie': 'HAList=a-sz-300059-%u4E1C%u65B9%u8D22%u5BCC; em_hq_fls=js; qgqp_b_id=7b7cfe791fce1724e930884be192c85e; _adsame_fullscreen_16928=1; st_si=59966688853664; st_asi=delete; st_pvi=79368259778985; st_sp=2021-12-07%2014%3A33%3A35; st_inirUrl=https%3A%2F%2Fwww.baidu.com%2Flink; st_sn=3; st_psi=20211216201351423-112200312936-0028256540; ASP.NET_SessionId=miyivgzxegpjaya5waosifrb',
    'Host': 'fund.eastmoney.com',
    'Referer': 'http://fund.eastmoney.com/data/fundranking.html',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36',
}
response = requests.get(url=url, headers=headers)
Get data:
data = response.text
Parse and filter the data:
data_str = re.findall(r'\[(.*)\]', data)[0]
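The regular expression above extracts everything between the square brackets of the response body. The snippet below exercises the same pattern on a small stand-in string (a hypothetical sample shaped like the rankhandler.aspx response, which wraps the rows in a JavaScript variable assignment):

```python
import re

# Hypothetical sample mimicking the shape of the rankhandler.aspx response
data = 'var rankData = {datas:["000001,Fund A,1.5","000002,Fund B,2.1"],allRecords:2}'

# Greedy match: captures everything between the first '[' and the last ']'
data_str = re.findall(r'\[(.*)\]', data)[0]
print(data_str)  # "000001,Fund A,1.5","000002,Fund B,2.1"
```

Note the raw string (`r'...'`) keeps the backslash escapes intact for the regex engine.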
Change the data type:
tuple_data = eval(data_str)
for td in tuple_data:
    # turn each td into a list
    td_list = td.split(',')
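The article uses eval() to turn the extracted string into a tuple. A safer alternative, assuming the extracted string is a comma-separated list of quoted strings as shown above, is ast.literal_eval, which parses literals without executing arbitrary code (the sample string below is hypothetical):

```python
import ast

# Hypothetical extracted string, shaped like the article's data_str
data_str = '"000001,Fund A,1.5","000002,Fund B,2.1"'

# literal_eval parses the quoted strings into a tuple without the
# code-execution risk of eval()
tuple_data = ast.literal_eval(data_str)
rows = [td.split(',') for td in tuple_data]
print(rows)
```

This matters when the response comes from an external server: eval() would run any Python expression embedded in the page, while literal_eval raises an error on anything that is not a plain literal.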
Turn the page:
Work out how the URL changes as the page number changes.
for page in range(1, 193):
    print(f'--- crawling page {page} ---')
    url = f'http://fund.eastmoney.com/data/rankhandler.aspx?op=ph&dt=kf&ft=all&rs=&gs=0&sc=6yzf&st=desc&sd=2020-12-16&ed=2021-12-16&qdii=&tabSubtype=,&pi={page}&pn=50&dx=1'
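The only part of the URL that changes between pages is the `pi` parameter. The sketch below verifies that pattern offline with a trimmed parameter list (only `pi` and `pn` kept, for brevity):

```python
# Sketch: only the page index (pi) changes between requests
base = 'http://fund.eastmoney.com/data/rankhandler.aspx'
urls = [f'{base}?pi={page}&pn=50' for page in range(1, 4)]

for u in urls:
    print(u)
```

In the real crawl the full query string from the captured URL is kept, and `{page}` is substituted into `pi=` exactly as shown above.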
Save the data:
with open('fund.csv', mode='a', encoding='utf-8', newline='') as f:
    csv_write = csv.writer(f)
    csv_write.writerow(td_list)
    print(td)

3. Run the code to get the data
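The append-style CSV write from the save step can be exercised offline by writing to an in-memory buffer instead of a file; `td_list` below mirrors one parsed row (hypothetical values):

```python
import csv
import io

# One parsed row, shaped like td_list in the save step (hypothetical values)
td_list = ['000001', 'Fund A', '1.5']

# io.StringIO stands in for the open file, so this runs without
# touching the filesystem
buf = io.StringIO()
csv_write = csv.writer(buf)
csv_write.writerow(td_list)
print(buf.getvalue().strip())  # 000001,Fund A,1.5
```

Opening the real file with `mode='a'` appends each page's rows, and `newline=''` prevents the csv module from writing blank lines between rows on Windows.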
That is how Python can collect fund data. I hope the content above is of some help. If you still have unresolved questions, you can follow the industry information channel to learn more.