2025-03-31 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
This article explains how to use Python to turn a dance video into a word cloud video. Each step of the process is walked through below with simple, easy-to-follow code.
Catalogue

The production process is divided into the following parts:

1. Video download
2. Download bilibili on-screen comments
3. Video frame cutting and portrait segmentation
4. Make word cloud images for the segmented frames
5. Picture stitching and video synthesis

Conclusion
With Python, we can watch the dance from a completely different angle by turning it into a word cloud video.
1. Video download
First of all, you need to download a video of the dancer. Here I use the you-get tool, which can be installed with Python's pip command:
pip install you-get
Platforms supported by you-get include YouTube, bilibili, TED, Tencent, Youku, and iQIYI, among many others.
Taking a YouTube video as an example, the you-get download command looks like this:

you-get -o ~/Videos -O zoo.webm 'https://www.youtube.com/watch?v=jNQXAC9IVRw'

Here -o sets the directory the video is stored in and -O sets the file name.
In the script, the you-get command is invoked through the os module. The download helper takes three parameters:

1. The video link
2. The file path where the video should be stored
3. The video file name
import os

def download(video_url, save_path, video_name):
    '''Download a video with you-get
    :param video_url: video link
    :param save_path: save path
    :param video_name: video name
    :return:
    '''
    cmd = 'you-get -o {} -O {} {}'.format(save_path, video_name, video_url)
    res = os.popen(cmd)
    res.encoding = 'utf-8'
    print(res.read())  # print the command output
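To see the argument order without actually launching a download, here is a minimal sketch that only builds the command string (build_youget_cmd is a hypothetical name, not part of the original script):

```python
def build_youget_cmd(video_url, save_path, video_name):
    # builds the same shell command the download() wrapper above runs
    return 'you-get -o {} -O {} {}'.format(save_path, video_name, video_url)

print(build_youget_cmd('https://www.youtube.com/watch?v=jNQXAC9IVRw',
                       '~/Videos', 'zoo.webm'))
# you-get -o ~/Videos -O zoo.webm https://www.youtube.com/watch?v=jNQXAC9IVRw
```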
For more information about the use of you-get, please refer to the official website, which describes the usage in great detail:

https://you-get.org/#getting-started
2. Download bilibili on-screen comment
Word cloud images need text data. Here, bilibili's on-screen comments are chosen as the material. The quickest way to download them is to request the API interface of the specified video with requests, which returns all of the on-screen comments under that video.
http://comment.bilibili.com/{cid}.xml  # cid is the cid number of the bilibili video
But to construct the API interface, you need to know the cid number of the video.
How to obtain the cid number of bilibili video:
Press F12 to open developer mode, then go to Network -> XHR and look for a request whose URL contains "cid=". The string of consecutive digits after the equals sign is the cid number of the video.
Take the video above as an example: 291424805 is its cid number.
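Constructing the comment API address from a cid is just string formatting; a tiny sketch (danmu_api_url is a hypothetical helper name):

```python
def danmu_api_url(cid):
    # build the on-screen comment API address for a given bilibili video cid
    return 'http://comment.bilibili.com/{}.xml'.format(cid)

print(danmu_api_url(291424805))  # http://comment.bilibili.com/291424805.xml
```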
After you have cid, you can get the on-screen comment data by requesting the API interface through requests.
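The XML returned by the interface wraps each comment in a `<d>` tag. As a dependency-free sketch of the extraction (using the stdlib xml.etree instead of the BeautifulSoup used below, and a made-up two-comment sample payload):

```python
import xml.etree.ElementTree as ET

# made-up sample in the same shape as the API's XML; not real API output
sample_xml = '<i><d p="0,1,25">first comment</d><d p="1,1,25">second comment</d></i>'
root = ET.fromstring(sample_xml)
texts = [d.text for d in root.iter('d')]  # pull the text out of every d tag
print(texts)  # ['first comment', 'second comment']
```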
For this video the interface is therefore http://comment.bilibili.com/291424805.xml.

import requests
import jieba
from bs4 import BeautifulSoup

def download_danmu():
    '''Download the on-screen comments and store them'''
    cid = '141367679'  # the video's cid number
    url = 'http://comment.bilibili.com/{}.xml'.format(cid)
    f = open('danmu.txt', 'w+', encoding='utf-8')  # open the txt file
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'lxml')
    items = soup.find_all('d')  # the comments live in the d tags
    for item in items:
        text = item.text
        print('-' * 10)
        print(text)
        seg_list = jieba.cut(text, cut_all=True)  # segment the string so the word cloud is easier to build
        for j in seg_list:
            print(j)
            f.write(j)
            f.write('\n')
    f.close()

3. Video frame cutting and portrait segmentation
After downloading the video, split it into individual frames:
import os
import cv2

vc = cv2.VideoCapture(video_path)
c = 0
if vc.isOpened():
    rval, frame = vc.read()  # read the first video frame
else:
    rval = False
while rval:
    # save the current frame as a picture, then read the next one
    cv2.imwrite(os.path.join(pic_path, '{}.jpg'.format(c)), frame)
    c += 1
    print('Picture {} stored successfully!'.format(c))
    rval, frame = vc.read()
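One detail worth noting: the frames are named 0.jpg, 1.jpg, 2.jpg, ... so plain string sorting would put 10.jpg before 2.jpg; the synthesis step later sorts the numbers as integers instead. A small sketch of the difference:

```python
names = ['10.jpg', '2.jpg', '1.jpg']
print(sorted(names))  # lexicographic: ['1.jpg', '10.jpg', '2.jpg']
# sorting by the numeric part gives the true frame order
print(sorted(names, key=lambda n: int(n.split('.')[0])))  # ['1.jpg', '2.jpg', '10.jpg']
```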
Next, identify and extract the dancer in each frame, that is, portrait segmentation. Here we use Baidu's body analysis API.
import os
import base64

import cv2
import numpy as np
from aip import AipBodyAnalysis

def get_file_content(file_path):
    # read an image file as raw bytes for the API
    with open(file_path, 'rb') as fp:
        return fp.read()

APP_ID = "23633750"
API_KEY = 'uqnHjMZfChbDHvPqWgjeZHCR'
SECRET_KEY = '**'
client = AipBodyAnalysis(APP_ID, API_KEY, SECRET_KEY)

jpg_file = os.listdir(jpg_path)  # folder of frame pictures
for i in jpg_file:
    open_file = os.path.join(jpg_path, i)
    save_file = os.path.join(save_path, i)
    if not os.path.exists(save_file):  # skip frames that were already processed
        img = cv2.imread(open_file)
        height, width, _ = img.shape  # get the image size
        if crop_path:  # if crop_path is not None, crop the picture first
            crop_file = os.path.join(crop_path, i)
            img = img[100:-1, 300:-400]  # the picture is too large; adjust the crop to your own video
            cv2.imwrite(crop_file, img)
            image = get_file_content(crop_file)
        else:
            image = get_file_content(open_file)
        res = client.bodySeg(image)  # call the Baidu API to segment the portrait
        labelmap = base64.b64decode(res['labelmap'])
        labelimg = np.frombuffer(labelmap, np.uint8)  # convert to an np array
        labelimg = cv2.imdecode(labelimg, 1)
        labelimg = cv2.resize(labelimg, (width, height), interpolation=cv2.INTER_NEAREST)
        img_new = np.where(labelimg == 1, 255, labelimg)  # turn the label-1 pixels into 255
        cv2.imwrite(save_file, img_new)
        print(save_file, 'saved successfully')
This converts each image containing a portrait into a binary image, with the dancer as the foreground and everything else as the background.
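The mask conversion can be pictured with a tiny pure-Python stand-in for the np.where call (binarize is a hypothetical name, and images are lists of pixel rows here; since the label map only contains 0 and 1, forcing non-foreground pixels to 0 gives the same result):

```python
def binarize(labelmap, fg_label=1):
    # pixels carrying the foreground label become white (255), everything else black (0)
    return [[255 if px == fg_label else 0 for px in row] for row in labelmap]

print(binarize([[0, 1], [1, 0]]))  # [[0, 255], [255, 0]]
```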
Before using the API, you need to create a body analysis application on the Baidu AI Cloud platform with your own account; this gives you the three required parameters: the APP_ID, the API key (AK), and the secret key (SK).
For the usage of Baidu API, please refer to the official documentation.
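The API returns the segmentation mask as a base64 string under res['labelmap']; decoding it yields the raw label bytes that cv2.imdecode then turns into an image. A round-trip sketch with made-up bytes (not real API output):

```python
import base64

fake_labelmap = bytes([0, 1, 1, 0])  # made-up label bytes standing in for the API payload
encoded = base64.b64encode(fake_labelmap).decode()  # what the API ships over the wire
decoded = base64.b64decode(encoded)                 # what the script feeds to cv2.imdecode
print(list(decoded))  # [0, 1, 1, 0]
```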
4. Make word cloud images for the segmented frames
Step 3 produced a portrait mask of the dancer for every frame.

Now draw a word cloud over each binary image with the help of the wordcloud library and the downloaded on-screen comments (before generating, make sure every image really is a binary mask; all-black frames need to be removed).
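Two building blocks of the script below can be tried in isolation: the regex that keeps only tokens containing Chinese characters, and Counter.most_common, whose [start:] slice drops the most frequent words (the sample words here are made up):

```python
import re
import collections

words = ['好看', 'lol', '好看', '跳舞', '233', '好看', '跳舞']
# keep only entries that contain at least one Chinese character
chinese = [w for w in words if re.findall('[\u4e00-\u9fa5]+', w)]
counts = collections.Counter(chinese)
print(counts.most_common())            # [('好看', 3), ('跳舞', 2)]
print(dict(counts.most_common()[1:]))  # {'跳舞': 2} after dropping the top word
```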
import os
import re
import random
import collections

import numpy as np
from PIL import Image
from wordcloud import WordCloud

word_list = []
with open('danmu.txt', encoding='utf-8') as f:
    con = f.read().split('\n')  # read the word cloud text from the txt file
    for i in con:
        if re.findall('[\u4e00-\u9fa5]+', str(i), re.S):  # drop entries without Chinese characters
            word_list.append(i)

for i in os.listdir(mask_path):
    open_file = os.path.join(mask_path, i)
    save_file = os.path.join(cloud_path, i)
    if not os.path.exists(save_file):
        start = random.randint(0, 15)  # randomly drop up to 15 of the most frequent words
        word_counts = collections.Counter(word_list)
        word_counts = dict(word_counts.most_common()[start:])
        background = 255 - np.array(Image.open(open_file))
        wc = WordCloud(
            background_color='black',
            max_words=500,
            mask=background,
            mode='RGB',
            font_path="D:/Data/fonts/HGXK_CNKI.ttf",  # set a font path that supports Chinese
        ).generate_from_frequencies(word_counts)
        wc.to_file(save_file)
        print(save_file, 'saved successfully')

5. Picture stitching and video synthesis
After all the word cloud images are generated, looking at them one by one would be boring; it is much cooler to synthesize them into a video!

To be able to compare before and after, I added one more step and stitched each original frame side by side with its word cloud image before merging. The synthesis code is as follows:
import os

import cv2
import numpy as np

num_list = [int(str(i).split('.')[0]) for i in os.listdir(origin_path)]
fps = 24  # video frame rate; larger means smoother
height, width, _ = cv2.imread(os.path.join(origin_path, '{}.jpg'.format(num_list[0]))).shape  # video height and width
width = width * 2  # the original and word cloud frames sit side by side

# create a video writer
video_writer = cv2.VideoWriter(video_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))
for i in sorted(num_list):
    i = '{}.jpg'.format(i)
    ori_jpg = os.path.join(origin_path, str(i))
    word_jpg = os.path.join(wordart_path, str(i))
    # com_jpg = os.path.join(composite_path, str(i))
    ori_arr = cv2.imread(ori_jpg)
    word_arr = cv2.imread(word_jpg)
    com_arr = np.hstack((ori_arr, word_arr))  # splice the two frames with NumPy
    # cv2.imwrite(com_jpg, com_arr)  # optionally save the composite image
    video_writer.write(com_arr)  # write each composite frame into the video stream
    print("{} saved successfully".format(ori_jpg))
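np.hstack simply concatenates the two frames row by row; a hypothetical pure-Python equivalent for images stored as lists of rows makes that concrete:

```python
def hstack_rows(left, right):
    # both images must have the same height (number of rows)
    assert len(left) == len(right)
    return [lr + rr for lr, rr in zip(left, right)]

print(hstack_rows([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[1, 2, 5, 6], [3, 4, 7, 8]]
```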
Add some background music, and the video goes up another level.
At this point, the walkthrough of using Python to turn a dance video into a word cloud video is complete. Pairing the theory with practice is the best way to learn, so go and try it yourself!