Many readers are not familiar with the topics covered in this article, "Python process, thread and coroutine instance analysis", so the editor has summarized the following content. It is detailed, the steps are clear, and it has practical reference value. I hope you get something out of it; let's take a look.
Introduction
Python is a cross-platform computer programming language: a high-level, interpreted, interactive, object-oriented scripting language. It was originally designed for writing automation scripts (shell scripting), but as new versions added more language features, it has increasingly been used for independent, large-scale project development.
Experimental environment
Python 3.x (object-oriented high-level language)
multiprocessing (Python standard library)
threading (Python standard library)
asyncio (Python standard library)
time (Python standard library)
random (Python standard library)
Process
Process: an instance of a program running on the operating system. A process needs its own system resources: memory, time slices, and a pid (process ID). A running program (code) is a process; code that is not running is just a program. The process is the smallest unit of system resource allocation, and each process has its own independent memory space, so data is not shared between processes and the overhead of a process is high.
Steps to create a process:
1. Import Process from multiprocessing.
2. Create a Process object.
3. Parameters can be passed when creating the Process object.
4. Start the process with start().
5. End the process.
import os
import time
from multiprocessing import Process

def pro_func(name, age, **kwargs):
    print("process is running, name=%s, age=%d, pid=%d" % (name, age, os.getpid()))
    print("kwargs parameter values:", kwargs)
    time.sleep(0.1)

if __name__ == "__main__":
    p = Process(target=pro_func, args=('Friendship', 18), kwargs={'hobby': 'Python'})
    print('start the process')
    p.start()
    print('is the process still alive:', p.is_alive())  # check whether the process is still alive
    time.sleep(0.5)  # wait a moment, then terminate the process
    print('end the process')
    p.terminate()    # terminate the process immediately
    p.join()         # wait for the child process to finish
    print('is the process still alive:', p.is_alive())  # check whether the process is still alive
Note: global variables are not shared between processes.
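A minimal sketch of this point (the counter variable and the increment function are illustrative, not from the original article): the child process works on its own copy of the global variable, so the change it makes is invisible to the main process.

from multiprocessing import Process

counter = 0  # global variable; every process works on its own copy

def increment():
    global counter
    counter += 1
    print("in the child process, counter =", counter)  # prints 1

if __name__ == "__main__":
    p = Process(target=increment)
    p.start()
    p.join()
    print("in the main process, counter =", counter)  # still 0: the change was not shared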
Multiple processes
Take a read-write program as an example: the main function runs in the main process, the write function runs in one child process, the read function runs in another child process, and the two child processes exchange data through a queue.
import os
import time
import random
from multiprocessing import Process, Queue

# write data function
def write(q):
    for value in ['A', 'B', 'Python']:  # sample values to write into the queue
        print('write %s' % value)
        q.put(value)
        time.sleep(random.random())

# read data function
def read(q):
    while True:
        if not q.empty():
            value = q.get(True)
            print('read %s from the queue' % value)
            time.sleep(random.random())
        else:
            break

if __name__ == "__main__":
    # the main process creates the Queue and passes it to each child process
    q = Queue()
    # create two child processes
    pw = Process(target=write, args=(q,))
    pr = Process(target=read, args=(q,))
    # start child process pw and wait for it to finish
    pw.start()
    pw.join()
    # start child process pr and wait for it to finish
    pr.start()
    pr.join()
    print('end')
Use a process pool to operate on multiple processes

from multiprocessing import Manager, Pool
import os, time, random

def read(q):
    print("read process started (%s), parent process (%s)" % (os.getpid(), os.getppid()))
    for i in range(q.qsize()):
        print("read process got message from Queue: %s" % q.get(True))

def write(q):
    print("write process started (%s), parent process (%s)" % (os.getpid(), os.getppid()))
    for i in "Python":
        q.put(i)

if __name__ == "__main__":
    print("main process (%s) start" % os.getpid())
    q = Manager().Queue()  # use the Queue from a Manager so it can be shared with pool workers
    po = Pool()            # create a process pool
    # Pool().apply_async(target to call, (tuple of arguments passed to the target))
    po.apply_async(write, (q,))
    time.sleep(1)          # let the write task put data into the Queue first, then start reading
    po.apply_async(read, (q,))
    po.close()             # close the pool; after closing, po accepts no new tasks
    po.join()              # wait for all child processes in po to finish; must come after close()
    print("(%s) End!" % os.getpid())
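A side note on the Pool API used above: apply_async also returns an AsyncResult object, so return values can be collected from pool workers with get(). A minimal sketch, using an illustrative square function:

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(4) as po:
        results = [po.apply_async(square, (i,)) for i in range(5)]  # submit five tasks
        print([r.get() for r in results])  # get() waits for and returns each result: [0, 1, 4, 9, 16]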
Thread
Thread: the smallest unit of scheduled execution, also called an execution path. A thread cannot exist independently; it depends on a process, and every process has at least one thread, called the main thread. Threads in the same process share memory (data is shared, global variables are shared), which greatly improves the running efficiency of the program (a sketch of shared global state follows the threading example below).
(Screenshot omitted: in the task manager view, the process highlighted by the red box has process ID 1624 and contains 118 threads.)
Use the _thread module

import _thread
import time
import random

# function run by each thread
def print_time(threadName):
    count = 0
    while count < 5:
        time.sleep(random.random())
        count += 1
        print("%s: %s" % (threadName, time.ctime(time.time())))

# create two threads
try:
    _thread.start_new_thread(print_time, ("Thread-1",))
    _thread.start_new_thread(print_time, ("Thread-2",))
except:
    print("Error: unable to start thread")

# keep the main thread alive so the worker threads can finish
while True:
    pass
Use the threading module

# create threads with the threading module
import threading
import time
import random

class myThread(threading.Thread):
    def __init__(self, threadID, name):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.delay = random.random()

    def run(self):
        print("start thread: " + self.name)
        print_time(self.name, 5)
        print("exit thread: " + self.name)

def print_time(threadName, count):
    while count:
        time.sleep(random.random())
        print("%s: %s" % (threadName, time.ctime(time.time())))
        count -= 1

# create two threads
thread1 = myThread(1, "Thread-1")
thread2 = myThread(2, "Thread-2")
# start the new threads
thread1.start()
thread2.start()
thread1.join()
thread2.join()
print("exit main thread")
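To make the shared-memory point above concrete, here is a minimal sketch (the counter variable and the worker function are illustrative): both threads read and update the same global variable, and a threading.Lock keeps the concurrent updates consistent.

import threading

counter = 0  # global variable shared by all threads in the process
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100000):
        with lock:  # without the lock, concurrent updates could be lost
            counter += 1

t1 = threading.Thread(target=worker)
t2 = threading.Thread(target=worker)
t1.start()
t2.start()
t1.join()
t2.join()
print("counter =", counter)  # 200000: both threads updated the same variable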
Coroutine
Coroutine: a lightweight, user-mode thread whose scheduling is completely controlled by the user program. A coroutine has its own register context and stack. When it is scheduled out, the register context and stack are saved; when it is switched back in, the previously saved register context and stack are restored. Because the switch only manipulates the stack directly, there is almost no kernel switching overhead, and global variables can be accessed without locking, so context switching is very fast.
When an IO block occurs, the coroutine scheduler immediately yields control (the coroutine gives it up voluntarily) and records the state on the current stack. Once the blocking operation finishes, the saved stack is restored on the thread and the blocked coroutine resumes running with the result. To the programmer this looks no different from writing synchronous code; this whole mechanism is what is called a coroutine.
Because a coroutine's suspension is completely controlled by the program and happens in user mode, while a thread's blocking is switched by the operating system kernel and happens in kernel mode, the overhead of a coroutine is much smaller than that of a thread.
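Since the description above mentions yield, here is a minimal generator-based sketch of the idea (the consumer and producer functions are illustrative): the consumer pauses at yield and hands control back to the producer, entirely in user space, with no thread switching.

def consumer():
    result = ''
    while True:
        item = yield result  # pause here and hand control (and result) back to the caller
        if item is None:
            return
        print('consuming %s' % item)
        result = 'ok'

def producer(c):
    c.send(None)  # prime the consumer so it runs up to its first yield
    for i in range(1, 4):
        print('producing %s' % i)
        reply = c.send(i)  # resume the consumer with a value and wait for its reply
        print('consumer replied: %s' % reply)
    c.close()

producer(consumer())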
Use the asyncio module

import asyncio
import time
import random

async def work(msg):
    print("received message: '{}'".format(msg))
    print("{} 1 {}".format("*" * 10, "*" * 10))  # marker printed to make the execution order visible
    await asyncio.sleep(random.random())
    print("{} 2 {}".format("*" * 10, "*" * 10))  # marker printed to make the execution order visible
    print(msg)

async def main():
    # create two task objects (coroutines) and schedule them on the event loop
    coroutines1 = asyncio.create_task(work("hello"))
    coroutines2 = asyncio.create_task(work("Python"))
    print("start time: {}".format(time.asctime(time.localtime(time.time()))))
    await coroutines1  # at this point coroutines1 and coroutines2 are both running
    print("{} 3 {}".format("*" * 10, "*" * 10))  # marker printed to make the execution order visible
    await coroutines2  # await suspends the current task so other tasks can run while it is blocked
    print("{} 4 {}".format("*" * 10, "*" * 10))  # marker printed to make the execution order visible
    print("end time: {}".format(time.asctime(time.localtime(time.time()))))

asyncio.run(main())  # asyncio.run(main()) creates an event loop with main() as the entry point
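As a usage note, the two awaits in main() above can also be replaced by asyncio.gather, which runs the coroutines concurrently and waits for all of them to finish. A minimal self-contained sketch (the work coroutine here is a simplified stand-in for the one above):

import asyncio

async def work(msg):
    await asyncio.sleep(0.1)
    print(msg)

async def main_with_gather():
    # run both coroutines concurrently and wait until both are done
    await asyncio.gather(work("hello"), work("Python"))

asyncio.run(main_with_gather())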
That concludes this article on "Python process, thread and coroutine instance analysis". I hope you now have a basic understanding of these topics and that the content shared here is helpful. Thank you for reading.