This article explains how to crawl the images in a user's Weibo posts with Python. The method is simple, fast, and practical, so interested readers may want to follow along.
The API used here was found by debugging Weibo in the browser's mobile (device emulation) mode. How do we find it?
1. Simulate a user search
Searching for a user hits the following API:
https://m.weibo.cn/api/container/getIndex?containerid=100103type=1&q=半半子_&page_type=searchall
1.1 Processing the parameters in the API
containerid=100103type=1&q=半半子_ --> containerid=100103type%3D1%26q%3D%E5%8D%8A%E5%8D%8A%E5%AD%90_
This parameter needs to be URL-encoded in advance, otherwise no data is returned.
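For instance, the encoding can be done with urllib.parse.quote, exactly as the full script later in the article does (a minimal standalone sketch):

from urllib.parse import quote

name = '半半子_'
con_id = f'100103type=1&q={name}'
# Percent-encode '=', '&' and the Chinese characters so the API accepts the value
con_id = quote(con_id, 'utf-8')
print(con_id)  # 100103type%3D1%26q%3D%E5%8D%8A%E5%8D%8A%E5%AD%90_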
1.2 Check the user name and, once it matches, extract the uid
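In code, this means walking through data.cards in the search response, comparing each card's screen_name with the target name, and keeping the matching user's id. A minimal sketch of this step, mirroring the full script below (the bare-bones user-agent string is just a placeholder):

import requests
from urllib.parse import quote

headers = {'user-agent': 'Mozilla/5.0'}  # placeholder UA string (assumption)
name = '半半子_'
con_id = quote(f'100103type=1&q={name}', 'utf-8')
user_url = f'https://m.weibo.cn/api/container/getIndex?containerid={con_id}&page_type=searchall'
user_json = requests.get(url=user_url, headers=headers).json()

user_id = None
for card in user_json['data']['cards']:
    # Only cards that carry an 'mblog' entry describe a concrete post/user
    if 'mblog' in card and card['mblog']['user']['screen_name'] == name:
        user_id = card['mblog']['user']['id']
        break
print(user_id)  # e.g. 2830125342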
2. Get the 'more' parameter
GET
api : https://m.weibo.cn/profile/info?uid=2830125342
2.1 Extract and process the 'more' parameter
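The data.more field of this response is a path whose last segment is the containerid needed for the timeline API in step 3, so the script simply splits on '/'. A short sketch of this step (the uid is hard-coded here for illustration):

import requests

headers = {'user-agent': 'Mozilla/5.0'}  # placeholder UA string (assumption)
user_id = 2830125342  # uid obtained in step 1

info_url = f'https://m.weibo.cn/profile/info?uid={user_id}'
info_json = requests.get(url=info_url, headers=headers).json()
# The last path segment of data.more is the containerid for the timeline API
more_card = info_json['data']['more'].split('/')[-1]
print(more_card)  # e.g. 2304132830125342_-_WEIBO_SECOND_PROFILE_WEIBO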
3. Loop to extract the image IDs
GET
api : https://m.weibo.cn/api/container/getIndex?containerid=2304132830125342_-_WEIBO_SECOND_PROFILE_WEIBO&page_type=03&page=1
3.1 Extract the image ID --> pic_id
3.2 Get the user who posted the images
3.3 Generate a unique identifier from the post's creation time
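Steps 3.1 to 3.3 boil down to a few dictionary lookups plus a time conversion. The sketch below uses a hand-written card dict (its values are made up for illustration) in place of a real element of data.cards:

import time

def time_to_str(c_at):
    # Weibo returns e.g. 'Sat Feb 12 12:01:46 +0800 2022'; compress it to YYYYMMDDHHMMSS
    ti = time.strptime(c_at, '%a %b %d %H:%M:%S +0800 %Y')
    return time.strftime('%Y%m%d%H%M%S', ti)

# A stand-in for one element of data.cards from the timeline API (illustrative values)
card = {'card_type': 9,
        'mblog': {'pic_ids': ['abc123'],
                  'user': {'screen_name': '半半子_'},
                  'created_at': 'Sat Feb 12 12:01:46 +0800 2022'}}

if card['card_type'] == 9:                         # only type-9 cards are normal posts
    mb_log = card['mblog']
    pic_ids = mb_log['pic_ids']                    # 3.1 image ids
    user_name = mb_log['user']['screen_name']      # 3.2 posting user
    time_name = time_to_str(mb_log['created_at'])  # 3.3 identifier from creation time
    print(pic_ids, user_name, time_name)           # ['abc123'] 半半子_ 20220212120146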
4. Download the images
From the browser's packet capture we can see the image links that the backend server sends to the browser:
https://wx2.sinaimg.cn/large/pic_id.jpg
Opening this link in the browser downloads the image directly.
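In the script this is just a requests.get on that URL, writing the response body to disk in binary mode. A minimal sketch (pic_id and the user-agent are placeholders):

import requests

headers = {'user-agent': 'Mozilla/5.0'}  # placeholder UA string (assumption)
pic_id = 'abc123'                        # a real pic_id from step 3 goes here

pic_url = f'https://wx2.sinaimg.cn/large/{pic_id}.jpg'
resp = requests.get(pic_url, headers=headers)
with open(f'{pic_id}.jpg', mode='wb') as f:
    f.write(resp.content)  # the response body is the raw JPEG bytes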
The complete crawler code:
import os
import sys
import time
from urllib.parse import quote

import requests

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36'
}


def time_to_str(c_at):
    # Convert Weibo's created_at string into a compact YYYYMMDDHHMMSS identifier
    ti = time.strptime(c_at, '%a %b %d %H:%M:%S +0800 %Y')
    time_str = time.strftime('%Y%m%d%H%M%S', ti)
    return time_str


# 1. Search for the user and get the uid
# 2. Use the uid to get the key parameter of the user's timeline
# 3. Fetch the timeline content
# 4. Extract the image parameters
# 5. Download the images

# 1. Search for the user and get the uid
# ========= user name =========
# Change the name below to download another user's images
# The user name must match exactly
name = '半半子_'
# =============================
con_id = f'100103type=1&q={name}'
# This parameter must be URL-encoded
con_id = quote(con_id, 'utf-8')
user_url = f'https://m.weibo.cn/api/container/getIndex?containerid={con_id}&page_type=searchall'
user_json = requests.get(url=user_url, headers=headers).json()
user_cards = user_json['data']['cards']
more_card = ''  # stays empty if the user is not found
for card_num in range(len(user_cards)):
    if 'mblog' in user_cards[card_num]:
        if user_cards[card_num]['mblog']['user']['screen_name'] == name:
            print(f'Fetching the timeline of {name}')
            # 2. Use the uid to get the key parameter of the user's timeline
            user_id = user_cards[card_num]['mblog']['user']['id']
            info_url = f'https://m.weibo.cn/profile/info?uid={user_id}'
            info_json = requests.get(url=info_url, headers=headers).json()
            more_card = info_json['data']['more'].split("/")[-1]
            break

file_name = 'weibo'
if not os.path.exists(file_name):
    os.mkdir(file_name)

if len(more_card) == 0:
    sys.exit()

page_type = '03'
page = 0
while True:
    # 3. Fetch the timeline content
    page += 1
    url = f'https://m.weibo.cn/api/container/getIndex?containerid={more_card}&page_type={page_type}&page={page}'
    param = requests.get(url=url, headers=headers).json()
    cards = param['data']['cards']
    print(f'Page {page}')
    for i in range(len(cards)):
        card = cards[i]
        if card['card_type'] != 9:
            continue
        mb_log = card['mblog']
        # 4. Extract the image parameters
        # Images posted by the user themselves
        pic_ids = mb_log['pic_ids']
        user_name = mb_log['user']['screen_name']
        created_at = mb_log['created_at']
        if len(pic_ids) == 0:
            # Fall back to images in the reposted post
            if 'retweeted_status' not in mb_log:
                continue
            if 'pic_ids' not in mb_log['retweeted_status']:
                continue
            pic_ids = mb_log['retweeted_status']['pic_ids']
            user_name = mb_log['retweeted_status']['user']['screen_name']
            created_at = mb_log['retweeted_status']['created_at']
        time_name = time_to_str(created_at)
        pic_num = 1
        print(f'======== {user_name} ========')
        # 5. Download the images
        for pic_id in pic_ids:
            pic_url = f'https://wx2.sinaimg.cn/large/{pic_id}.jpg'
            pic_data = requests.get(pic_url, headers=headers)
            # File name: username_date(YYYYMMDDHHMMSS)_index.jpg
            # e.g. 半半子__20220212120146_1.jpg
            with open(f'{file_name}/{user_name}_{time_name}_{pic_num}.jpg', mode='wb') as f:
                f.write(pic_data.content)
            print(f'  Downloading: {pic_id}.jpg')
            pic_num += 1
        time.sleep(2)

With that, you should have a better understanding of how to crawl Weibo post images with Python. Why not give it a try yourself!