This article mainly introduces how to download a whole novel with a Python crawler. Many people have questions about how to do this in everyday work, so this post pulls the material together into a simple, easy-to-follow method. I hope it answers your doubts about downloading a whole novel with a Python crawler; follow along with the steps below.
1. The first step is to import the two packages we need
# used to obtain the html of web pages
from urllib import request
# used to parse the html
from bs4 import BeautifulSoup
2. Let's analyze the chapter catalogue page of the novel that we're going to crawl.
Looking at that page and then at its HTML source, you will find that the chapter catalogue is exactly what we need. Note, though, that the "latest chapters" block sits above and the official chapter catalogue sits below it, both in the middle of the page; you will see later how I handle that.
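Since the code that follows relies on this layout, here is a minimal, self-contained sketch of the structure described above: two div blocks with class box1, the first holding the latest-chapter links and the second holding the full catalogue. The HTML string is mock data made up for illustration, not the real 136book page.

from bs4 import BeautifulSoup

# mock HTML imitating the catalogue page described above (illustration only)
sample_html = '''
<div class="box1"><ul><li><a href="/dadaozhaotian/999.html">Latest chapter</a></li></ul></div>
<div class="box1"><ul>
<li><a href="/dadaozhaotian/1.html">Chapter 1</a></li>
<li><a href="/dadaozhaotian/2.html">Chapter 2</a></li>
</ul></div>
'''
soup = BeautifulSoup(sample_html, 'html.parser')
boxes = soup.find_all('div', attrs={'class': 'box1'})
# boxes[0] is the "latest chapters" block, boxes[1] is the full catalogue
for li in boxes[1].find_all('li'):
    print(li.a['href'])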
Next, let's look at the chapter-reading page. Its layout is simple, and looking at the HTML source code shows that the chapter title sits in an h2 tag, while the body text is inside p tags.
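As a rough illustration of that structure (the mock HTML below is made up, and the real page may differ in detail), this is how BeautifulSoup would pull the title out of the h2 tag and the body paragraphs out of the p tags:

from bs4 import BeautifulSoup

# mock HTML imitating a chapter page: title in <h2>, body paragraphs in <p>
chapter_html = '''
<h2>Chapter 1</h2>
<p>First paragraph of the chapter.</p>
<p>Second paragraph of the chapter.</p>
'''
soup = BeautifulSoup(chapter_html, 'html.parser')
print(soup.h2.get_text())          # the chapter title
for p in soup.find_all('p'):
    print(p.get_text())            # each paragraph of the body text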
OK, after our preliminary analysis, we can start writing code!
3. First of all, let's write a function that gets the html source code of a web page:
# get the html of the web page
def getHtml(url):
    res = request.urlopen(url)
    res = res.read().decode()
    # print(res)
    return res
This function takes a URL and returns the HTML source code of that page.
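As a quick sanity check, getHtml can be tried on its own with the catalogue URL used later in this article. Note that res.read().decode() assumes the page is UTF-8 encoded; if the site actually serves another encoding such as GBK, that encoding name would have to be passed to decode() instead.

# quick test of getHtml using the catalogue address from this article
html = getHtml("http://www.136book.com/dadaozhaotian/")
print(html[:300])   # print the first 300 characters to confirm the page was fetched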
4. Next, let's write a function that gets the links to all the chapters of the whole novel:
# parse the chapter catalogue page of the novel and get the links of all chapters
def jsoupUrl(html):
    # get the soup object
    url_xiaoshuo = BeautifulSoup(html, 'html.parser')
    # we want the div whose class is box1
    class_dict = {'class': 'box1'}
    url_xiaoshuo = url_xiaoshuo.find_all('div', attrs=class_dict)
    # analyzing the html shows there are two divs with class box1, and the code above
    # returns a list, so the full catalogue is the one at index 1
    # we want the value of each li, and find_all returns a list
    url_xiaoshuo = url_xiaoshuo[1].find_all('li')
    # print(url_xiaoshuo)
    # create a list to hold the link of each chapter
    url_xs = []
    for item in url_xiaoshuo:
        # get the href value of each element
        url = item.a['href']
        # append the value to the url_xs list
        url_xs.append(url)
    return url_xs
The detailed explanation is in the comments. If anything is unclear, you can leave a message on the official account.
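Before downloading everything, it is worth inspecting what jsoupUrl returns. Whether the href values are full URLs or site-relative paths depends on the actual page, so check the first few links before moving on; the snippet below is only such a check.

# inspect the chapter links returned by jsoupUrl
html = getHtml("http://www.136book.com/dadaozhaotian/")
chapter_links = jsoupUrl(html)
print(len(chapter_links))    # how many chapter links were found
print(chapter_links[:3])     # first few links; if they are relative paths, prepend the site's base URL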
5. After we get the link to each chapter, we need to download the content of each chapter and write it to a txt file, using the chapter title as the file name.
# parse the main content of each chapter of the novel
def jsoupXiaoshuo(url_list):
    for item in url_list:
        html = getHtml(item)
        html = BeautifulSoup(html, 'html.parser')
        # get the title of the chapter
        title = html.h2.get_text()
        xiaoshuo = html.find_all('p')
        for p in xiaoshuo:
            text = p.get_text()
            # open in mode 'a' so each paragraph is appended to the file; it must not be 'w',
            # otherwise every write would overwrite the previous content
            with open(title + '.txt', 'a') as f:
                f.write(text + '\n')
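One practical caveat, under the assumption that some chapter titles might contain characters that are not allowed in file names (such as '/', '?' or ':'): it can be safer to clean the title before using it as a file name. The helper below, safe_filename, is a hypothetical addition, not part of the original article.

import re

# replace characters that are illegal in file names with an underscore (hypothetical helper)
def safe_filename(title):
    return re.sub(r'[\\/:*?"<>|]', '_', title)

# inside jsoupXiaoshuo you could then write:  with open(safe_filename(title) + '.txt', 'a') as f: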
6. Finally, we can call these functions in the main block:
if __name__ == '__main__':
    html = getHtml("http://www.136book.com/dadaozhaotian/")
    url_xs = jsoupUrl(html)
    jsoupXiaoshuo(url_xs)
The result of the run:
This concludes the study of how to download a whole novel with a Python crawler. I hope it has cleared up your doubts; combining theory with practice is the best way to learn, so go and try it yourself! If you want to keep learning more about this topic, please keep following the site for more practical articles.