This article walks through a worked example of executing Python scripts concurrently from Jmeter. The approach is quite practical, so it is shared here for reference; hopefully you will take something away from it.
Implementing chunked file upload in Python
Uploading a large file involves three steps, plus a parameterization step needed for concurrent execution:
Get the file information and the number of chunks
Chunk the file and upload each chunk (API)
Merge the chunks (API)
Parameterize the file path
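The snippets in the sections below are written as methods of a wrapper class that the article itself never shows. As a point of reference, here is a minimal sketch of what that class might look like; the name FileApi comes from the __main__ block later on, while everything else in the skeleton is an assumption:

class FileApi:
    def __init__(self, chunk_size):
        # Size of each chunk in bytes (e.g. 2 * 1024 * 1024 for 2 MB)
        self.chunk_size = chunk_size

    # get_file_md5, get_filename, get_chunk_info, do_chunk_and_upload,
    # __upload and merge_file (sections 2-1 to 2-3 below) are defined here.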
2-1 Get file information and the number of chunks
First, get the size of the file.
Then, use the preset chunk size to compute the total number of chunks; for example, a 5 MB file with a 2 MB chunk size needs math.ceil(5 / 2) = 3 chunks.
Finally, get the file name and the file's MD5 value.
import os
import math
import hashlib

def get_file_md5(self, file_path):
    """Get the MD5 value of the file"""
    with open(file_path, 'rb') as f:
        data = f.read()
    return hashlib.md5(data).hexdigest()

def get_filename(self, filepath):
    """Get the original name of the file"""
    # File name with suffix
    filename_with_suffix = os.path.basename(filepath)
    # File name without the suffix
    filename = filename_with_suffix.split('.')[0]
    # Suffix
    suffix = filename_with_suffix.split('.')[-1]
    return filename_with_suffix, filename, suffix

def get_chunk_info(self, file_path):
    """Get the chunking information"""
    # Total file size (in bytes)
    file_total_size = os.path.getsize(file_path)
    print(file_total_size)
    # Total number of chunks
    total_chunks_num = math.ceil(file_total_size / self.chunk_size)
    # File name (with suffix)
    filename = self.get_filename(file_path)[0]
    # MD5 value of the file
    file_md5 = self.get_file_md5(file_path)
    return file_total_size, total_chunks_num, filename, file_md5

2-2 Chunk the file and upload each chunk
Using the total number of chunks and the chunk size, slice the file and call the chunked-upload API for each chunk.
import requests

def do_chunk_and_upload(self, file_path):
    """Chunk the file and upload each chunk"""
    file_total_size, total_chunks_num, filename, file_md5 = self.get_chunk_info(file_path)
    # Iterate over the chunks
    for index in range(total_chunks_num):
        print('Uploading chunk {}'.format(index + 1))
        if index + 1 == total_chunks_num:
            # Last chunk: the remaining bytes (also covers the case where
            # the file size is an exact multiple of the chunk size)
            partSize = file_total_size - index * self.chunk_size
        else:
            partSize = self.chunk_size
        # File offset
        offset = index * self.chunk_size
        # Chunk ids start from 1
        chunk_id = index + 1
        print('Preparing to upload the file')
        print("Chunk id:", chunk_id, "file offset:", offset, ", current chunk size:", partSize)
        # Upload this chunk
        self.__upload(offset, chunk_id, file_path, file_md5, filename, partSize, total_chunks_num)

def __upload(self, offset, chunk_id, file_path, file_md5, filename, partSize, total):
    """Upload a single chunk"""
    url = 'http://**/file/brust/upload'
    params = {
        'chunk': chunk_id,
        'fileMD5': file_md5,
        'fileName': filename,
        'partSize': partSize,
        'total': total
    }
    # Read the chunk's binary data from the file path and offset
    with open(file_path, 'rb') as current_file:
        current_file.seek(offset)
        files = {'file': current_file.read(partSize)}
    resp = requests.post(url, params=params, files=files).text
    print(resp)

2-3 Merge the files
Finally, call the merge API to combine the uploaded chunks back into the complete file.
import json
import requests

def merge_file(self, filepath):
    """Merge the uploaded chunks"""
    url = 'http://**/file/brust/merge'
    file_total_size, total_chunks_num, filename, file_md5 = self.get_chunk_info(filepath)
    payload = json.dumps({
        "fileMD5": file_md5,
        "chunkTotal": total_chunks_num,
        "fileName": filename
    })
    print(payload)
    headers = {"Content-Type": "application/json"}
    resp = requests.post(url, headers=headers, data=payload).text
    print(resp)

2-4 Parameterize the file path
To allow concurrent execution, the script takes the upload file path as a command-line argument.
# fileupload.py
import sys
...

if __name__ == '__main__':
    filepath = sys.argv[1]
    # Size of each chunk (2 MB)
    chunk_size = 2 * 1024 * 1024
    fileApi = FileApi(chunk_size)
    # Chunked upload
    fileApi.do_chunk_and_upload(filepath)
    # Merge the chunks
    fileApi.merge_file(filepath)

Jmeter concurrent execution
Before building the concurrent test in Jmeter, we need to write a batch script that Jmeter can call.
The batch script takes the file path as a command-line argument.
:: cmd.bat
@echo off
set filepath=%1
:: Forward all arguments to the Python script
python C:\Users\xingag\Desktop\rpc_demo\fileupload.py %*
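For a quick manual check, the batch script can be invoked the same way the OS Process Sampler will invoke it later (the path here is just one of the example paths from the CSV below):

cmd.bat C:\Users\xingag\Desktop\HBuilder1.zip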
Then, create a CSV file locally with one upload file path per line:
# prepare multiple file paths (csv)
C:\Users\xingag\Desktop\charles-proxy-4.6.1-win64.msi
C:\Users\xingag\Desktop\V2.0.pdf
C:\Users\xingag\Desktop\HBuilder1.zip
C:\Users\xingag\Desktop\HBuilder2.zip
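Before wiring the CSV into Jmeter, it can be worth running the upload script once per path as a sequential smoke test. A minimal sketch, assuming the CSV above is saved as file_paths.csv next to fileupload.py (both file names are assumptions):

import csv
import subprocess
import sys

# Run fileupload.py once for every path listed in the CSV
with open('file_paths.csv', newline='') as f:
    for row in csv.reader(f):
        # Skip blank lines and the "#" comment line
        if not row or row[0].startswith('#'):
            continue
        path = row[0]
        print('Uploading', path)
        # Same invocation the batch script performs, minus the concurrency
        subprocess.run([sys.executable, 'fileupload.py', path], check=True)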
Now we can build the concurrent test in Jmeter.
The complete steps are as follows:
Create a test plan and add a thread group under it
Here, the number of threads is the same as the number of file paths above
Under the thread group, add a Synchronizing Timer
Set the timer's "Number of Simulated Users to Group by" to match the number of file paths above
Add a CSV Data Set Config
Point it at the CSV file prepared above, set the file encoding to UTF-8, set the variable name to file_path, and set the sharing mode to "Current thread group"
Add a Debug Sampler to make debugging easier
Add an OS Process Sampler
Select the batch file created above and set the command-line parameter to "${file_path}"
Add a View Results Tree listener
Finally
Run the Jmeter test plan created above, and you can view the results of the concurrent file uploads in the View Results Tree.
Of course, you can increase the concurrency to simulate a more realistic scenario; just add more paths to the CSV data source and adjust the Jmeter thread and timer settings to match.
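Once the plan is saved as a .jmx file, it can also be run in Jmeter's standard non-GUI mode, which is the usual choice for load testing (the file names here are assumptions):

jmeter -n -t upload_test.jmx -l results.jtl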
This is the end of this article on executing Python scripts concurrently with Jmeter. Hopefully the content above was helpful; if you think the article is good, please share it so more people can see it.