How to use Python to crawl a website tutorial and make a PDF document


This article mainly introduces how to use Python to crawl a website tutorial and make a PDF document. Many people have doubts about this in their daily work, so the editor has consulted various materials and sorted out a simple, easy-to-use method. Hopefully it answers your questions about crawling a tutorial with Python and turning it into a PDF; now, please follow along and learn together!

Function: crawl the Python tutorial on Liao Xuefeng's website, save each HTML page locally, and then convert the HTML to PDF with the pdfkit package.

Requirements: install the Python pdfkit package and the wkhtmltopdf tool (on Windows in this example).
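pdfkit itself is only a thin wrapper around the wkhtmltopdf command-line tool, so on Windows the library has to be able to locate wkhtmltopdf.exe. Below is a minimal sketch of that setup; the install path is an assumption, so adjust it to wherever wkhtmltopdf actually ended up, or skip the configuration object entirely if the binary is already on PATH:

# pip install pdfkit   (wkhtmltopdf itself is installed separately)
import pdfkit

# The path below is an assumption -- point it at your own wkhtmltopdf.exe.
config = pdfkit.configuration(
    wkhtmltopdf=r'C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe'
)

# Quick smoke test: render a single page straight from a URL.
pdfkit.from_url('https://example.com', 'test.pdf', configuration=config)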

The code is as follows:

# -*- coding: utf-8 -*-
# Python 2 script: fetch the tutorial pages, save them as HTML, then hand the
# HTML to pdfkit (a wrapper around wkhtmltopdf) to produce a single PDF.
import urllib2
import re
import os
import time
import sys
import pdfkit

reload(sys)
sys.setdefaultencoding('utf-8')

class Liao:

    def __init__(self, init_url, out_put_path):
        self.init_url = init_url            # URL of the tutorial's table of contents
        self.out_put_path = out_put_path    # directory in which the HTML files are saved
        self.user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
        self.headers = {'User-Agent': self.user_agent}
        self.file_name = 'liao.pdf'
        # Page template that each article body is substituted into via {content}.
        # The markup of the original template was stripped when this article was
        # rendered, so only the placeholder survives here.
        self.html_template = """
        {content}
        """

    # Get web content
    def get_content(self, url):
        try:
            request = urllib2.Request(url, headers=self.headers)
            response = urllib2.urlopen(request)
            # Re-encode to GBK so the text prints cleanly on a Chinese Windows console
            content = response.read().decode('utf-8').encode('GBK')
            return content
        except urllib2.URLError, e:
            if hasattr(e, "reason"):
                print u"Connecting to the page failed, reason:", e.reason
            return None

    # Get the URL list of the article directory
    def get_article_url(self):
        content = self.get_content(self.init_url)
        # The HTML tags inside this pattern were stripped when the article was
        # rendered; it originally matched the href and title of each sidebar link.
        pattern = re.compile(r'.*? (.*?).*? ', re.S)
        result = re.findall(pattern, content)
        print len(result)
        return result

    # Get the article content: pass in the URL of an article and return its content
    def get_article_detail(self):
        url_list = self.get_article_url()
        # As above, the markup in these patterns was lost; they originally matched
        # the article body and the <img> tags of embedded images.
        pattern = re.compile(r'(.*?) ', re.S)
        patt_img = re.compile(r'')
        # The remainder of the listing (fetching each page, rewriting image links,
        # writing the HTML files and calling pdfkit) was not preserved in the page.
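Since the tail of the original listing (saving each chapter and the final pdfkit call) was cut off, here is a minimal Python 3 sketch of those remaining steps under the same idea: fetch every chapter URL, wrap the body in a small HTML template, write it to disk, and then pass the whole file list to pdfkit to get one PDF. The file names, template and options here are assumptions, not the original author's code:

# Minimal Python 3 sketch of the remaining steps (names and template are assumptions).
import os
import pdfkit
from urllib.request import Request, urlopen

HEADERS = {'User-Agent': 'Mozilla/5.0'}
TEMPLATE = ('<!DOCTYPE html><html><head><meta charset="utf-8"></head>'
            '<body>{content}</body></html>')

def fetch(url):
    # Download one page and return it as text.
    req = Request(url, headers=HEADERS)
    with urlopen(req) as resp:
        return resp.read().decode('utf-8')

def save_chapters(url_title_pairs, out_dir='htmls'):
    # Write each chapter to its own HTML file, in order, and return the file list.
    os.makedirs(out_dir, exist_ok=True)
    files = []
    for i, (url, title) in enumerate(url_title_pairs):
        body = fetch(url)  # a real crawler would cut out only the article part here
        path = os.path.join(out_dir, '%03d.html' % i)
        with open(path, 'w', encoding='utf-8') as f:
            f.write(TEMPLATE.format(content=body))
        files.append(path)
    return files

def make_pdf(files, out_file='liao.pdf'):
    # pdfkit.from_file accepts a list of inputs and merges them into one PDF.
    options = {'encoding': 'UTF-8', 'page-size': 'A4'}
    pdfkit.from_file(files, out_file, options=options)

Feeding save_chapters the (url, title) pairs that get_article_url collects, and then calling make_pdf on the result, reproduces the single liao.pdf output the article describes.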
