How to use Python 3 to make a novel crawler tool with a GUI


This article introduces in detail how to use Python 3 to make a novel crawler tool with a GUI. It has some reference value, and interested readers are encouraged to read it to the end!

Effect picture

Recently I helped a friend write a simple crawler and, while I was at it, organized it into a novel crawler tool with a GUI, used to grab novels from Biquge (笔趣阁).

The finished interface

The interface during collection

The files saved after collection

Main features

1. Multi-threaded collection: one thread collects one novel.

2. Proxy support, which matters especially for multi-threaded collection; crawling without a proxy may get your IP blocked. (A sketch of how the proxy is handed to requests follows this list.)

3. Real-time output of the collection results.
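As a minimal sketch of how a proxy string entered in the interface is passed to requests (the address below is the placeholder example from the interface hint, not a live proxy):

import requests

# Proxy string in the format the interface expects: username:password@ip:port
# This is the article's placeholder example, not a real proxy.
proxy = "demo:123456@123.1.2.8:8580"
proxies = {
    "http": "http://%s" % proxy,
    "https": "http://%s" % proxy,
}

# requests routes both http and https traffic through the proxy
resp = requests.get("https://www.xbiquwx.la/", proxies=proxies, timeout=10)
print(resp.status_code)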

threading.BoundedSemaphore() is used, together with pool_sema.acquire() and pool_sema.release(), to limit the number of threads and keep too many from running concurrently. The exact limit can be entered in the software interface; the default is 5 threads.

# Created once, before any of the thread tasks start
pool_sema = threading.BoundedSemaphore(5)
# Acquired as a lock before each thread task starts
pool_sema.acquire()
# Released when the thread task finishes
pool_sema.release()

Third-party modules used:

pip install requests
pip install pysimplegui
pip install lxml
pip install pyinstaller
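Back to the thread limit: here is a minimal, runnable sketch of the acquire/release pattern above (standard library only; a try/finally is added so a slot is always released even if a task fails). Only two of the five workers run at once:

import threading
import time

# Allow at most 2 concurrent workers
pool_sema = threading.BoundedSemaphore(2)

def worker(n):
    pool_sema.acquire()            # blocks until a slot is free
    try:
        print("worker %d collecting" % n)
        time.sleep(0.5)            # stand-in for the actual crawl
    finally:
        pool_sema.release()        # free the slot for the next thread

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()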

The GUI is built with PySimpleGUI, a wrapper library around tkinter. It is very easy to use; the interface is not beautiful, but its simplicity makes it well suited to small tools: https://pysimplegui.readthedocs.io/en/latest/.
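Before the tool's actual layout, a minimal PySimpleGUI sketch (hypothetical keys and text, assuming PySimpleGUI is installed) of the pattern the crawler relies on: a layout is just a list of lists of elements, window.read() drives the event loop, and a worker thread reports back through window.write_event_value(), which is how the real-time output works:

import threading
import time
import PySimpleGUI as sg

layout = [[sg.Button('Start', key='start')],
          [sg.Multiline('', key='res', size=(60, 6))]]
window = sg.Window('Demo', layout)

def worker(window):
    # Worker threads never touch the GUI directly; they only post events.
    for i in range(3):
        time.sleep(0.5)
        window.write_event_value('-THREAD-', 'step %d done' % i)

while True:
    event, values = window.read()
    if event == sg.WIN_CLOSED:
        break
    if event == 'start':
        threading.Thread(target=worker, args=(window,), daemon=True).start()
    if event == '-THREAD-':
        # The posted value arrives in the values dict under the event key
        window['res'].update(window['res'].get() + values['-THREAD-'] + '\n')
window.close()

The key design point is that only the main loop updates the widgets; the threads just send messages.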

For example, the layout of this interface needs only a few simple lists:

layout = [
    [sg.Text('Enter the URL of the novel to crawl; click here to open the Biquge site and copy it',
             font=("Microsoft YaHei", 12), key="openwebsite",
             enable_events=True, tooltip="Click to open in the browser")],
    [sg.Text("Novel catalogue page URLs, one per line:")],
    [
        sg.Multiline('', key="url", size=(120, 6), autoscroll=True,
                     expand_x=True, right_click_menu=['&Right', ['paste']])
    ],
    [sg.Text(visible=False, text_color="#ff0000", key="error")],
    [
        sg.Button(button_text='Start collecting', key="start", size=(20, 1)),
        sg.Button(button_text='Open download directory', key="opendir",
                  size=(20, 1), button_color="#999999")
    ],
    [sg.Text('Fill in an IP proxy. With a password: username:password@ip:port; '
             'without a password: ip:port. E.g. demo:123456@123.1.2.8:8580')],
    [
        sg.Input('', key="proxy"),
        sg.Text('Number of threads:'),
        sg.Input('5', key="threadnum"),
    ],
    [
        sg.Multiline('Waiting to collect', key="res", disabled=True, border_width=0,
                     background_color="#ffffff", size=(120, 6), no_scrollbar=False,
                     autoscroll=True, expand_x=True, expand_y=True,
                     font=("SimSun", 10), text_color="#999999")
    ],
]

Packaged as an exe with the command:

pyinstaller -Fw start.py

(-F bundles everything into a single exe; -w hides the console window.)

All source code:

import time
import requests
import os
import re
import random
from lxml import etree
import webbrowser
import PySimpleGUI as sg
import threading

# request header
header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                        "(KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36"}
# proxy settings (filled in from the interface)
proxies = {}
# Biquge base address
baseurl = 'https://www.xbiquwx.la/'
# number of threads
threadNum = 6
pool_sema = None
THREAD_EVENT = '-THREAD-'
cjstatus = False  # True while collection is running
# txt storage directory
filePath = os.path.abspath(os.path.join(os.getcwd(), 'txt'))
if not os.path.exists(filePath):
    os.mkdir(filePath)


# remove special characters from the book title so it is a valid file name
def deletetag(text):
    return re.sub(r'[\[\]#\/\\:*\,;\?\"\'<>\|\(\)《》&\^!~=%\{\}@！：。·！$…（）]', '', text)


# entry point
def main():
    global cjstatus, proxies, threadNum, pool_sema
    sg.theme("reddit")
    layout = [
        [sg.Text('Enter the URL of the novel to crawl; click here to open the Biquge site and copy it',
                 font=("Microsoft YaHei", 12), key="openwebsite",
                 enable_events=True, tooltip="Click to open in the browser")],
        [sg.Text("Novel catalogue page URLs, one per line:")],
        [
            sg.Multiline('', key="url", size=(120, 6), autoscroll=True,
                         expand_x=True, right_click_menu=['&Right', ['paste']])
        ],
        [sg.Text(visible=False, text_color="#ff0000", key="error")],
        [
            sg.Button(button_text='Start collecting', key="start", size=(20, 1)),
            sg.Button(button_text='Open download directory', key="opendir",
                      size=(20, 1), button_color="#999999")
        ],
        [sg.Text('Fill in an IP proxy. With a password: username:password@ip:port; '
                 'without a password: ip:port. E.g. demo:123456@123.1.2.8:8580')],
        [
            sg.Input('', key="proxy"),
            sg.Text('Number of threads:'),
            sg.Input('5', key="threadnum"),
        ],
        [
            sg.Multiline('Waiting to collect', key="res", disabled=True, border_width=0,
                         background_color="#ffffff", size=(120, 6), no_scrollbar=False,
                         autoscroll=True, expand_x=True, expand_y=True,
                         font=("SimSun", 10), text_color="#999999")
        ],
    ]
    window = sg.Window('Collect Biquge novels', layout, size=(800, 500), resizable=True)

    while True:
        event, values = window.read()
        if event == sg.WIN_CLOSED or event == 'close':  # user closed the window
            break
        if event == "openwebsite":
            webbrowser.open('%s' % baseurl)
        elif event == 'opendir':
            os.system('start explorer ' + filePath)
        elif event == 'start':
            if cjstatus:
                cjstatus = False
                window['start'].update('Stopped... click to restart')
                continue
            window['error'].update("", visible=False)
            # keep only addresses of the form <baseurl>123_12345/
            urls = []
            for url in values['url'].strip().split("\n"):
                url = url.strip()
                if re.match(r'%s\d+_\d+/' % baseurl, url):
                    urls.append(url)
                elif len(url) > 0:
                    window['error'].update("Bad address: %s" % url, visible=True)
            if len(urls) < 1:
                window['error'].update(
                    "Each line must match the form %s84_84370/" % baseurl, visible=True)
                continue
            # proxy
            if len(values['proxy']) > 3:
                proxies = {
                    "http": "http://%s" % values['proxy'],
                    "https": "http://%s" % values['proxy']
                }
            # number of threads
            if values['threadnum'] and int(values['threadnum']) > 0:
                threadNum = int(values['threadnum'])
            pool_sema = threading.BoundedSemaphore(threadNum)
            cjstatus = True
            window['start'].update('Collecting... click to stop')
            window['res'].update('Collection started')
            for url in urls:
                threading.Thread(target=downloadbybook,
                                 args=(url, window), daemon=True).start()
        elif event == "paste":
            window['url'].update(sg.clipboard_get())
        # messages posted by worker threads via write_event_value()
        if event == THREAD_EVENT:
            strtext = values[THREAD_EVENT][1]
            window['res'].update(window['res'].get() + "\n" + strtext)
    cjstatus = False
    window.close()


# download one book (runs in a worker thread)
def downloadbybook(page_url, window):
    try:
        bookpage = requests.get(url=page_url, headers=header, proxies=proxies)
    except Exception as e:
        window.write_event_value(THREAD_EVENT, (threading.current_thread().name,
                                 '\nError requesting %s, reason: %s' % (page_url, e)))
        return
    if not cjstatus:
        return
    # take a thread slot; released in the finally block below
    pool_sema.acquire()
    try:
        if bookpage.status_code != 200:
            window.write_event_value(THREAD_EVENT, (threading.current_thread().name,
                                     '\nError requesting %s, reason: %s' % (page_url, bookpage.reason)))
            return
        bookpage.encoding = 'utf-8'
        page_tree = etree.HTML(bookpage.text)
        bookname = page_tree.xpath('//div[@id="info"]/h1/text()')[0]
        bookfilename = filePath + '/' + deletetag(bookname) + '.txt'
        zj_list = page_tree.xpath('//div[@class="box_con"]/div[@id="list"]/dl/dd')
        for dd in zj_list:
            if not cjstatus:
                break
            zjurl = page_url + dd.xpath('./a/@href')[0]
            zjname = dd.xpath('./a/@title')[0]
            try:
                zjpage = requests.get(zjurl, headers=header, proxies=proxies)
            except Exception as e:
                window.write_event_value(THREAD_EVENT, (threading.current_thread().name,
                                         '\nError requesting %s %s, reason: %s' % (zjname, zjurl, e)))
                continue
            if zjpage.status_code != 200:
                window.write_event_value(THREAD_EVENT, (threading.current_thread().name,
                                         '\nError requesting %s %s, reason: %s' % (zjname, zjurl, zjpage.reason)))
                return
            zjpage.encoding = 'utf-8'
            zjpage_content = etree.HTML(zjpage.text).xpath('//div[@id="content"]/text()')
            content = "\n[" + zjname + "]\n"
            for line in zjpage_content:
                content += line.strip() + '\n'
            # append the chapter to the book's txt file
            with open(bookfilename, 'a', encoding='utf-8') as fs:
                fs.write(content)
            window.write_event_value(THREAD_EVENT, (threading.current_thread().name,
                                     '\n%s: %s collected successfully' % (bookname, zjname)))
            time.sleep(random.uniform(0.05, 0.2))
        # the whole book is finished
        window.write_event_value(THREAD_EVENT, (threading.current_thread().name,
                                 '\nRequest for %s finished' % page_url))
    finally:
        pool_sema.release()


if __name__ == '__main__':
    main()

The above is the whole of "How to use Python 3 to make a novel crawler tool with a GUI". Thank you for reading! I hope the shared content helps you; for more related knowledge, welcome to follow the industry information channel!
