

How to use Python to scrape Lianjia second-hand housing data


This article introduces how to use Python to scrape second-hand housing data from Lianjia. Many people run into snags when they try this in practice, so the editor walks you through the whole script; I hope you read it carefully and come away with something useful! The script below targets Python 2: it discovers each district's listing pages on bj.lianjia.com, extracts the listing details with regular expressions, and appends them to a pipe-delimited text file.

# -*- coding: utf-8 -*-
import urllib2
import urllib
import re, os
import time
# from bs4 import BeautifulSoup
import sys

reload(sys)
sys.setdefaultencoding('utf-8')

class HomeLink:

    # initialize data
    def __init__(self, base_url):
        self.base_url = base_url
        self.page = 1
        # NOTE: the output file path was garbled in the source; this local
        # path is only a placeholder - point it wherever you like
        self.out_put_file = './data/house.txt'
        self.user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
        self.headers = {'User-Agent': self.user_agent}

    # get page content
    def get_content(self, url):
        try:
            request = urllib2.Request(url, headers=self.headers)
            response = urllib2.urlopen(request)
            act_url = response.geturl()
            print 'init url=', url, 'act url=', act_url
            # a redirect (e.g. to a verification page) means the fetch failed
            if url == act_url:
                content = response.read()
                return content
            else:
                return None
        except urllib2.URLError, e:
            if hasattr(e, "reason"):
                print u"failed to connect to the page, error reason:", e.reason
            return None

    # get the starting url of each district
    def get_region_url(self):
        d_region_url = {}
        content = self.get_content(self.base_url)
        # NOTE: the original pattern was garbled in the source (its HTML was
        # stripped); this is a plausible reconstruction for the district links
        pattern = re.compile('<a href="(/ershoufang/\w+/)" title=".*?">(.*?)</a>', re.S)
        result = re.findall(pattern, content)
        if result:
            for x in result:
                d_region_url[x[1]] = x[0]
        return d_region_url

    # get the list of url addresses for all pages in one district
    def get_region_url_list(self, region_url):
        page_num = self.get_page_num(region_url)
        # the range was truncated in the source; pages run pg1..pgN
        l_url = [region_url + 'pg' + str(i) + '/' for i in range(1, page_num + 1)]
        return l_url

    # get the total number of pages from the embedded page data
    def get_page_num(self, url):
        content = self.get_content(url)
        pattern = re.compile('{"totalPage":(\d+),"curPage":1}', re.S)
        result = re.search(pattern, content)
        if result:
            return int(result.group(1).strip())
        else:
            return None

    # get the price information of each listing on one page
    def get_house_info(self, url, region):
        content = self.get_content(url)
        # NOTE: the original pattern was garbled in the source (its HTML was
        # stripped); reconstructed against Lianjia's old listing markup:
        # title, houseInfo block, total price and price unit
        pattern = re.compile('<div class="title"><a.*?>(.*?)</a>'
                             + '.*?<div class="houseInfo">(.*?)</div>'
                             + '.*?<div class="totalPrice"><span>(\d+)</span>(\S+)</div>', re.S)
        result = re.findall(pattern, content)
        if result:
            for x in result:
                # houseInfo looks like "community | rooms | area | direction | ..."
                l = x[1].split('|')
                rooms, area, direct, other = l[1], l[2], l[3], l[4]
                s_str = '|'.join([region, x[0], rooms, area, direct, other, x[2], x[3]])
                self.writeStr2File(self.out_put_file, s_str)
        else:
            return None

    # start grabbing Lianjia's house price data, district by district
    def start_scrapy(self):
        d_region_url = self.get_region_url()
        for region in d_region_url:
            region_init_url = 'http://bj.lianjia.com' + d_region_url[region]
            l_region_url = self.get_region_url_list(region_init_url)
            for url in l_region_url:
                time.sleep(1)  # pause between requests to go easy on the server
                url = url.strip()
                self.get_house_info(url, region)

    # append one line to the output file
    def writeStr2File(self, out_put_file, str1, append='a'):
        # strip the file name and keep the directory part of the path
        subPath = out_put_file[:out_put_file.rfind('/')]
        # if the directory does not exist yet, create it
        if not os.path.exists(subPath):
            os.makedirs(subPath)
        # open the file and append the given string as one line
        with open(out_put_file, append) as f:
            f.write(str1.strip() + '\n')

url = 'http://bj.lianjia.com/ershoufang/'
home = HomeLink(url)
home.start_scrapy()
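
The script above targets Python 2 (urllib2, print statements, the reload(sys) encoding hack), which has long been end-of-life. As a rough guide to running the same flow today, here is a minimal Python 3 sketch using only the standard library's urllib.request; the totalPage extraction is carried over from the script above, and the listing pattern is the same hedged reconstruction, so check both against the live page markup before relying on them.

import re
import time
import urllib.request

BASE_URL = 'http://bj.lianjia.com/ershoufang/'
HEADERS = {'User-Agent': 'Mozilla/5.0'}

# the listing pattern below is the same reconstruction used in the
# Python 2 script above and may not match Lianjia's current markup
LISTING_RE = re.compile(r'<div class="title"><a.*?>(.*?)</a>'
                        r'.*?<div class="houseInfo">(.*?)</div>'
                        r'.*?<div class="totalPrice"><span>(\d+)</span>(\S+)</div>', re.S)

def get_content(url):
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        # treat a redirect (e.g. to a verification page) as a failed fetch
        if resp.geturl() != url:
            return None
        return resp.read().decode('utf-8')

def get_page_num(content):
    m = re.search(r'{"totalPage":(\d+),"curPage":1}', content)
    return int(m.group(1)) if m else None

content = get_content(BASE_URL)
if content:
    total = get_page_num(content) or 1
    for i in range(1, total + 1):
        page = get_content(BASE_URL + 'pg%d/' % i)
        if page:
            for title, info, price, unit in LISTING_RE.findall(page):
                print('|'.join([title, info.strip(), price + unit]))
        time.sleep(1)  # pause between requests, as in the script above

The same flow also works with the third-party requests library, but sticking to the standard library keeps the sketch dependency-free.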

"how to use python to grab Lianjia net second-hand room data" content is introduced here, thank you for reading. If you want to know more about the industry, you can follow the website, the editor will output more high-quality practical articles for you!

