2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article explains in detail how to download every League of Legends hero skin with Python 3. It is shared as a reference; after reading it you should have a working understanding of the technique.
Open the League of Legends official website and click on the game info section. Press F12 to open the developer tools, then F5 to refresh, and you will find a champion.js file; copy its address. Unlike Arena of Valor, which serves its hero data as JSON, here the data lives in a js file. It contains the heroes' numbers and names, and we take the data under keys.
The response body is fetched with the get method of requests. pat_js is the regular expression; re.compile builds a pattern object from it, and findall returns the matches as a list of strings. eval then converts the matched string into a dictionary.
import re
import requests

def path_js(url_js):
    # Fetch champion.js; the site serves GBK-encoded content
    res_js = requests.get(url_js, verify=False).content
    html_js = res_js.decode("gbk")
    # Capture the "keys" object (hero number -> hero name)
    pat_js = r'"keys":(.*?),"data"'
    enc = re.compile(pat_js)
    list_js = enc.findall(html_js)
    dict_js = eval(list_js[0])
    print(dict_js)
    return dict_js
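As a quick, self-contained illustration of this extraction step (the sample string below is invented for demonstration; it is not the real champion.js content):

```python
import re

# Invented stand-in for the champion.js body; the real file is much larger
sample = '{"keys":{"266":"Aatrox","103":"Ahri"},"data":{}}'

pat_js = r'"keys":(.*?),"data"'         # non-greedy capture of the keys object
match = re.compile(pat_js).findall(sample)
dict_js = eval(match[0])                # the captured text is a dict literal
print(dict_js)                          # prints {'266': 'Aatrox', '103': 'Ahri'}
```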
Click on a hero profile in the middle of the page. The skin URL is not shown directly; right-click the skin image and open it in a new tab to get the link http://ossweb-img.qq.com/images/lol/web201310/skin/big266000.jpg.
Analyzing the link, the first three digits after big are the hero number and the last three are the skin number, so skin-image links can be stitched together accordingly. No hero has more than 20 skins, so loop up to 20 per hero when building the links. (Many of the generated links will not resolve to real images.)
def path_url(dict_js):
    pic_list = []
    for key in dict_js:
        for i in range(20):
            # Zero-pad the skin index to three digits (000-019)
            xuhao = str(i)
            if len(xuhao) == 1:
                num_houxu = "00" + xuhao
            elif len(xuhao) == 2:
                num_houxu = "0" + xuhao
            numStr = key + num_houxu
            url = r'http://ossweb-img.qq.com/images/lol/web201310/skin/big' + numStr + '.jpg'
            pic_list.append(url)
    print(pic_list)
    return pic_list
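The zero-padding above can also be written more compactly with str.zfill. This is an alternative sketch of the same link-stitching idea, not the article's original code; skin_urls and max_skins are names introduced here:

```python
def skin_urls(dict_js, max_skins=20):
    # Same stitching rule: hero number + 3-digit skin index, e.g. big266000.jpg
    base = 'http://ossweb-img.qq.com/images/lol/web201310/skin/big'
    return [base + key + str(i).zfill(3) + '.jpg'
            for key in dict_js
            for i in range(max_skins)]

print(skin_urls({'266': 'Aatrox'}, max_skins=2))
```

zfill avoids the if/elif branches and would also handle an index of 100 or more correctly.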
Once the links are generated, the skins can be downloaded from them.
def name_pic(dict_js, path):
    '''
    path is the directory the files are saved to. The hero name (a value
    of the dictionary) plus the skin index is used as the file name.
    '''
    list_filePath = []
    for name in dict_js.values():
        for i in range(20):
            file_path = path + name + str(i) + '.jpg'
            list_filePath.append(file_path)
    print(list_filePath)
    return list_filePath
The next step downloads each picture and writes it to a file, while handling the many dead links: fetch the response with the get method of requests, and if the response body contains a 404 message, skip to the next link; otherwise write the image content to the file. This way the large number of empty, unopenable pictures is never saved.
def writing(url_list, list_filePath):
    try:
        for i in range(len(url_list)):
            res = requests.get(url_list[i], verify=False)
            # Skip links that return a 404 page instead of an image
            if '404 page not found' in res.text:
                print("hero skins downloaded:", i)
                continue
            with open(list_filePath[i], "wb") as f:
                f.write(res.content)
    except Exception as e:
        print("error downloading picture: %s" % e)
        return False
996 skins were obtained.
At this point all the skins have been obtained. There is still room for optimization: with this many pictures a single thread is slow, so multi-threading would speed the program up considerably. The link-generation scheme could also be improved so it does not produce so many useless links, since requesting them wastes a lot of resources.
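As a sketch of the multi-threading suggestion: the pattern below uses concurrent.futures.ThreadPoolExecutor to overlap the I/O-bound downloads. download_one is a hypothetical stand-in introduced here that just returns its path; in the real crawler it would fetch the URL and write the file, as writing() does above.

```python
from concurrent.futures import ThreadPoolExecutor

def download_one(pair):
    # Hypothetical worker: the real version would call requests.get, check
    # the body for a 404 message, and write res.content to file_path.
    url, file_path = pair
    return file_path

def download_all(url_list, file_list, workers=8):
    # A small thread pool lets many downloads wait on the network at once
    # instead of one at a time, which is where single-threaded crawling stalls.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(download_one, zip(url_list, file_list)))
```

pool.map preserves the input order of results, so the return value lines up with the original link list.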
That covers how to crawl all League of Legends hero skins with Python 3. I hope the content above is helpful; if you found the article worthwhile, share it so more people can see it.