How can a Python crawler crawl English documents, save them as PDF, and automatically translate the contents when reading the PDF?


This article shows how a Python crawler can crawl English documents, save them as PDF, and automatically translate the contents when reading the PDF. The editor thinks it is very practical, so it is shared here for your reference; follow along and take a look.

These days I have been crawling the official Python documentation, but it is all English and numbers, without a single Chinese character. Forgive this English dunce, I can't read it and have to rely on translation, and pasting it into Baidu Translate by hand is far too slow and time-consuming. So let's just use a crawler to translate the document automatically.

This is the page translated by Baidu.

At first I wanted to do it with urllib, but the site told me my browser version was too low; I would presumably have had to add headers and a UA. Too much trouble. Then I thought of selenium, so I simply used selenium instead. The detailed steps follow.

Let's start by crawling Python's official website.

We just need to grab this one page of data. That part is simple: you can fetch the page directly with requests or urllib (a minimal sketch follows) and then convert it to PDF. Mine sits inside the Scrapy framework, which is a little more involved; if that feels like too much trouble, just fetch it directly.
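For illustration only, here is a minimal sketch of that direct approach with requests; the URL is the same one the spider below targets, everything else is an assumption:

import requests

# hypothetical standalone fetch, outside any framework
response = requests.get('https://docs.python.org/3.8/whatsnew/3.8.html')
response.raise_for_status()  # fail loudly on a bad HTTP status
page_html = response.text    # raw HTML, ready to be written out and converted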

Install the Python library: pip3 install pdfkit

Install the wkhtmltopdf tool (official download page: https://wkhtmltopdf.org/downloads.html)
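Before wiring it into the spider, a quick smoke test can confirm that pdfkit can find the wkhtmltopdf binary; the Windows path here is only an example, adjust it to your own install location:

import pdfkit

# hypothetical smoke test: convert any page to PDF to verify the toolchain
path_wk = r'D:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe'  # example install path
config = pdfkit.configuration(wkhtmltopdf=path_wk)
pdfkit.from_url('https://example.com', 'test.pdf', configuration=config)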

import scrapy
import pdfkit

class so_python3_spider(scrapy.Spider):
    name = 'doc'

    def start_requests(self):
        url = 'https://docs.python.org/3.8/whatsnew/3.8.html'
        yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        body = response.selector.xpath('//div[@class="section"]').extract()
        title = response.selector.xpath('//div[@class="section"]/h2/text()').extract()
        # HTML skeleton for each section; the original tags were lost when this
        # article was scraped, so this is a minimal reconstruction
        html_template = """
        <html>
        <head><meta charset="utf-8"></head>
        <body>{content}</body>
        </html>
        """
        for i in range(len(body)):
            html = html_template.format(content=body[i])
            with open(title[i] + '.html', 'a', encoding='utf8') as f:
                f.write(html)
            options = {
                'page-size': 'Letter',
                'encoding': "UTF-8",
                'custom-header': [
                    ('Accept-Encoding', 'gzip')
                ]
            }
            path_wk = r'D:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe'  # installation location (example path)
            config = pdfkit.configuration(wkhtmltopdf=path_wk)
            pdfkit.from_file(title[i] + '.html', title[i] + '.pdf', options=options, configuration=config)

I simply take down the contents of each section div, splice each one into a new HTML file, and convert that new HTML into a PDF.
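For reference: if the spider lives inside a Scrapy project, it can be started with scrapy crawl doc (or with scrapy runspider on the standalone file); each section of the page then yields one HTML file and one PDF.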

The second stage is to open the PDF, read the document, send the text to the input box of Baidu Translate, fetch the translation result, and save it again.

- Read the document -

# imports for the legacy pdfminer3k-style API (PDFDocument with set_parser/initialize)
from pdfminer.pdfparser import PDFParser, PDFDocument
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.layout import LAParams
from pdfminer.converter import PDFPageAggregator

def read_pdf_to_text(self):
    fp = open("What's New In Python 3.8.pdf", "rb")  # read in binary mode
    # if the source is a url instead:
    # fp = request.urlopen(url)
    # create a parser associated with the document
    parser = PDFParser(fp)
    # create a pdf document object
    doc = PDFDocument()
    # connect the parser and the document object
    parser.set_document(doc)
    doc.set_parser(parser)
    # initialize the document; the password is an empty string since the file has none
    doc.initialize('')
    # create a pdf resource manager
    resouse = PDFResourceManager()
    # create a layout parameter analyzer
    lap = LAParams()
    # create an aggregator
    device = PDFPageAggregator(resouse, laparams=lap)
    # create a page interpreter
    interpreter = PDFPageInterpreter(resouse, device)
    # start reading the content
    for page in doc.get_pages():
        # have the page interpreter process the page
        interpreter.process_page(page)
        # use the aggregator to get the content
        layout = device.get_result()
        for out in layout:
            if hasattr(out, "get_text"):
                content = out.get_text()
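The method above only keeps the last extracted block in content and never hands anything to the translator; the original article doesn't show that step. Purely as an assumption, if read_pdf_to_text were changed to collect each block into a list, the glue could be a tiny helper like this (translate_extracted is a hypothetical name):

def translate_extracted(self, texts):
    # hypothetical glue: texts is assumed to be the list of blocks
    # collected by read_pdf_to_text; baidu_fanyi is defined below
    for content in texts:
        self.baidu_fanyi(content)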

Read the document and send the text to Baidu Translate: https://fanyi.baidu.com/?aldtype=16047#en/zh

Find the location of the input box and output box

The translation module (it was copied through WeChat and the formatting suffered, so please fix the indentation yourself):

import time

def baidu_fanyi(self, content):
    time.sleep(5)
    # find the input box and send the content to it
    self.browser.find_element_by_id('baidu_translate_input').send_keys(content)
    time.sleep(5)
    # get the contents of the output box
    con = self.browser.find_element_by_class_name('output-bd')
    # write to a file
    with open('python3.8.txt', 'a', encoding='utf8') as f:
        # content already ends with a carriage return, so no extra '\n' is needed before the translation
        f.write(content + con.text + '\n')
    # empty the input box and wait for the next input
    self.browser.find_element_by_id('baidu_translate_input').clear()
    time.sleep(5)
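One thing baidu_fanyi takes for granted is that self.browser is already an open Selenium WebDriver sitting on the translation page; the article never shows that setup. A minimal sketch, assuming Chrome and the Selenium 3-style API used above:

import time
from selenium import webdriver

# hypothetical setup, not shown in the article
browser = webdriver.Chrome()  # assumes chromedriver is on PATH
browser.get('https://fanyi.baidu.com/?aldtype=16047#en/zh')
time.sleep(5)  # let the page finish loading before typing into the input box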

This is the output after translation.

Of course, there are parts that could be optimized and extended; for example, you could build an interface with PyQt and package the whole thing into an exe so it can be used as a standalone program. Leave a message if you have any suggestions.

Thank you for reading! This article on how a Python crawler crawls English documents, saves them as PDF, and automatically translates the PDF contents ends here. I hope the content above is of some help and lets you learn more. If you think the article is good, please share it so more people can see it!
