2025-01-18 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report --
How do you run a simple test of Python multithreaded concurrency? This article walks through the problem in detail and offers a corresponding analysis and solution, in the hope of giving readers a simple, workable approach.
I had written some simple Python programs before, but had never touched multithreaded concurrency. Today I decided to tackle it head-on: first get a quick working grasp of the topic, then refine and improve from there.
My good friend Glacier is strong in Python, so I borrowed liberally from his article:
Python Foundation 16-concurrent programming (1)
Python's performance is often criticized, but it still has great advantages in functionality and extensibility. Concurrency and multithreading are among its core concepts, so mastering them is a big step forward.
When it comes to Python performance, you need to understand the GIL, the Global Interpreter Lock. It guarantees that only one thread executes Python bytecode at a time; that ensures thread safety, but it costs some performance. Let's do a simple case study: the test below is adapted, with small changes, from the article above.
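To see why threads can still help despite the GIL, here is a minimal sketch of my own (not from the original article), using time.sleep() as a stand-in for blocking I/O: the GIL is released while a thread waits on I/O, so the waits overlap.

```python
import threading
import time

def io_task():
    # Simulate a blocking I/O call; the GIL is released while
    # sleeping, so the other threads can run in the meantime.
    time.sleep(0.2)

start = time.time()
threads = [threading.Thread(target=io_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# Four 0.2s "I/O waits" overlap, so the total is close to 0.2s,
# not the 0.8s a sequential run would take.
print('elapsed: %.2fs' % elapsed)
```

This is exactly the situation in the URL-checking test below: each request spends most of its time waiting on the network, not executing Python bytecode.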
First, prepare a file named urls.txt.
For example, from my technical blog I picked the IDs of two articles at random, then looped over the range between them to generate the contents of urls.txt:
for i in {2101076..2148323}
do
  echo "http://blog.itpub.net/23718752/viewspace-"$i
done
To test each URL we need the requests module: send a request and examine the response. If the status code falls in the 200-300 range, the URL is accessible; otherwise it is not.
One small trick worth noting: use strip() to clean up each URL string.
>>> "http://www.jeanron100.com".strip()
'http://www.jeanron100.com'
Then use the requests.get() method to fetch the response:
>>> requests.get('http://www.baidu.com')
The status code is available through the status_code attribute:
>>> requests.get('http://www.baidu.com').status_code
200
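The accessibility rule above can be captured in a small helper. This is my own sketch (the name is_accessible is not from the article); it simply encodes the stated 200-300 rule as a pure function, with the actual network call left as a comment since it needs connectivity.

```python
def is_accessible(status_code):
    # The article treats status codes in the 200-300 range as
    # "accessible"; anything else (4xx, 5xx, ...) as not.
    return 200 <= status_code < 300

# With network access, you would combine it with requests:
#   import requests
#   status = requests.get('http://www.baidu.com').status_code
#   print(status, is_accessible(status))
```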
With these pieces in place, the Python program is straightforward.
Here is the single-threaded source program:
#!/usr/bin/env python
import requests
import time

def get_site_code(url):
    r = requests.get(url)
    status = r.status_code
    line = url + ' ' + str(status)
    with open('/tmp/site_status.txt', 'a+') as f:
        f.writelines(line + '\n')

if __name__ == '__main__':
    print 'starting at:', time.ctime()
    for url in open('urls.txt'):
        url = url.strip()
        get_site_code(url)
    print 'Done at:', time.ctime()
The whole run takes about 37 seconds for roughly 30 URLs.
# python a.pl
starting at: Wed Dec  6 07:00:34 2017
Done at: Wed Dec  6 07:01:11 2017
Now let's look at the multithreaded version. We obviously need a thread-related module, which in this case is threading.
For now we start one thread per request without limiting thread granularity: 30 requests means 30 threads, with no thread pool yet. A thread is created like this:
threading.Thread(target=get_site_code, args=(url,))
Start a thread with the start() method:
threads[i].start()
If one thread must wait for another to finish before continuing, use the join() method: the calling thread blocks until the joined thread completes.
threads[i].join()
The source program is as follows:
#!/usr/bin/env python
import requests
import time
import threading

def get_site_code(url):
    r = requests.get(url)
    status = r.status_code
    line = url + ' ' + str(status)
    with open('/tmp/site_status.txt', 'a+') as f:
        f.writelines(line + '\n')

if __name__ == '__main__':
    print 'starting at:', time.ctime()
    threads = []
    for url in open('urls.txt'):
        url = url.strip()
        t = threading.Thread(target=get_site_code, args=(url,))
        threads.append(t)
    for i in range(len(threads)):
        threads[i].start()
    for i in range(len(threads)):
        threads[i].join()
    print 'Done at:', time.ctime()
With multithreading, the same run takes about 3 seconds, a speedup of more than 10x; the benefit is substantial.
# python b.pl
starting at: Wed Dec  6 07:24:36 2017
Done at: Wed Dec  6 07:24:39 2017
Further improvements can then be considered from other angles; there is still plenty of room to refine this.
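One natural refinement, not covered in the article, is to cap the number of worker threads with a pool instead of starting one thread per URL. Here is a sketch using the standard library's concurrent.futures; check_url is a hypothetical stand-in for the article's get_site_code() that skips the real network call so the structure is visible.

```python
from concurrent.futures import ThreadPoolExecutor

def check_url(url):
    # Stand-in for get_site_code(); a real version would return
    # requests.get(url).status_code instead of a fixed 200.
    return url, 200

urls = ['http://blog.itpub.net/23718752/viewspace-%d' % i
        for i in (2101076, 2101077)]

# At most 10 worker threads run at once, no matter how many URLs
# are queued; map() preserves the input order of the results.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(check_url, urls))

print(results)
```

Bounding the pool size keeps memory and connection counts predictable when urls.txt grows from 30 lines to tens of thousands.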
That is the answer to how to run a simple test of Python multithreaded concurrency. I hope the content above is of some help; if you still have questions, you can follow the industry information channel to learn more.