What is the process and process pool of Python

2025-01-31 Update From: SLTechnology News&Howtos


This article mainly introduces processes and the process pool in Python. It should have some reference value for interested readers, and I hope you learn a lot from it. Let's go through it together.

Process

A process is the operating system's basic unit of resource allocation and the boundary of program isolation.

Processes and programs

A program is just a set of instructions; until it runs it does nothing. It is static.

A process is an execution instance of a program. It is dynamic, has its own life cycle, is created and destroyed, and its existence is temporary.

Processes and programs do not correspond one to one: a program can correspond to multiple processes, and a process can execute one or more programs.

We can understand it this way: code that has been written but is not running is called a program; running the code starts one (or more) processes.

Status of the process

When the operating system is working, the number of tasks is often greater than the number of CPU cores, which means some tasks must wait for a CPU; this gives rise to the different states of a process.

Ready state: all conditions for running are met; the process is waiting for a CPU to execute it.

Running state: the process currently occupies a CPU.

Waiting (blocked) state: the process is waiting for some condition to be met, for example during a sleep, before it can run again.

Processes in Python

In Python, processes are created through the multiprocessing module, which provides the Process class for creating process objects.

Creating a child process

Process syntax structure:

Process(group, target, name, args, kwargs)

group: specifies the process group; in most cases this is not used and should remain None

target: the callable object, i.e. the task for the child process to execute

name: the name of the child process; may be left unset

args: positional arguments passed to the function specified by target, as a tuple

kwargs: keyword arguments passed to the function specified by target, as a dictionary
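As a quick sketch of how these parameters fit together (the greet function, the "greeter" alias, and the argument values are hypothetical, chosen only for illustration):

```python
import multiprocessing

def greet(greeting, name="world"):
    # greeting arrives via args, name via kwargs
    print("{}, {}!".format(greeting, name))

if __name__ == "__main__":
    # name= gives the child an alias instead of the default "Process-1"
    p = multiprocessing.Process(target=greet, name="greeter",
                                args=("Hello",), kwargs={"name": "tigeriaf"})
    p.start()
    p.join()
```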

Common methods of Process

p.start(): starts the process and calls the p.run() method in the child process

p.join(timeout): the main process waits for the child process to finish; timeout is an optional timeout.

p.is_alive(): determines whether the process is still running

p.run(): the method that runs when the process starts; it calls the function specified by target

p.terminate(): terminates the process

Common attributes of the instance object created by Process

name: the alias of the current process; by default Process-N, where N is an integer incremented from 1.

pid: the pid (process number) of the current process

import multiprocessing
import os
import time

def work(name):
    print("Child process work is running.")
    time.sleep(0.5)
    print(name)
    # get the name of the process
    print("child process name", multiprocessing.current_process())
    # get the process's pid
    print("child process pid", multiprocessing.current_process().pid, os.getpid())
    # get the pid of the parent process
    print("parent process pid", os.getppid())
    print("child process ends.")

if __name__ == "__main__":
    print("main process startup")
    # get the name of the process
    print("main process name", multiprocessing.current_process())
    # get the process's pid
    print("main process pid", multiprocessing.current_process().pid, os.getpid())
    # create process
    p = multiprocessing.Process(group=None, target=work, args=("tigeriaf",))
    # start process
    p.start()
    print("main process ends")

Through the above code we find that multiprocessing.Process creates a child process for us and runs it successfully, but the main process finishes before the child process does, leaving the child as an orphan process. Can we make the main process wait until the child process ends? Yes: through p.join(), whose role is to make the main process wait for the child process to finish executing before exiting.

import multiprocessing
import os
import time

def work(name):
    print("Child process work is running.")
    time.sleep(0.5)
    print(name)
    # get the name of the process
    print("child process name", multiprocessing.current_process())
    # get the process's pid
    print("child process pid", multiprocessing.current_process().pid, os.getpid())
    # get the pid of the parent process
    print("parent process pid", os.getppid())
    print("child process ends.")

if __name__ == "__main__":
    print("main process startup")
    # get the name of the process
    print("main process name", multiprocessing.current_process())
    # get the process's pid
    print("main process pid", multiprocessing.current_process().pid, os.getpid())
    # create process
    p = multiprocessing.Process(group=None, target=work, args=("tigeriaf",))
    # start process
    p.start()
    p.join()
    print("main process ends")

Running result:

As you can see, the main process now ends only after the child process ends.
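The is_alive() and terminate() methods from the list above can be sketched like this (the work function and its sleep duration are made up for illustration):

```python
import multiprocessing
import time

def work():
    time.sleep(5)   # keep the child alive long enough to observe it

if __name__ == "__main__":
    p = multiprocessing.Process(target=work)
    p.start()
    print(p.is_alive())   # True: the child is still sleeping
    p.terminate()         # forcibly end the child
    p.join()              # wait for the termination to take effect
    print(p.is_alive())   # False
```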

Global variable problem

By default, global variables are not shared among multiple processes; the data of each process is independent, and they do not affect each other.

import multiprocessing

# define the global variable
num = 99

def work1():
    print("work1 is running.")
    global num   # declare inside the function in order to modify the global variable
    num = num + 1
    print("work1 num = {}".format(num))

def work2():
    print("work2 is running.")
    print("work2 num = {}".format(num))

if __name__ == "__main__":
    # create process p1
    p1 = multiprocessing.Process(group=None, target=work1)
    # start process p1
    p1.start()
    # create process p2
    p2 = multiprocessing.Process(group=None, target=work2)
    # start process p2
    p2.start()

Running result:

From the output we can see that work2() does not observe the modification that work1() made to the global variable num; for it the global variable is still 99. Therefore, variables are not shared between processes.
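When data does need to cross the process boundary, the multiprocessing module provides its own sharing primitives. A minimal sketch using multiprocessing.Value, reusing the num-increment idea from above (passing the shared value as an argument is the point of the example):

```python
import multiprocessing

def work1(num):
    with num.get_lock():   # Value carries a lock for safe concurrent updates
        num.value += 1

if __name__ == "__main__":
    num = multiprocessing.Value("i", 99)   # "i": a C int, initial value 99
    p1 = multiprocessing.Process(target=work1, args=(num,))
    p1.start()
    p1.join()
    print(num.value)   # 100: the child's change is visible in the parent
```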

Daemon process

As mentioned above, p.join() lets the main process wait until the child process ends. Can we do the opposite and make the child process end when the main process ends? Yes: by setting p.daemon = True before starting it, or by calling p.terminate():

import multiprocessing
import time

def work1():
    print("work1 is running.")
    time.sleep(4)
    print("work1 run complete")

def work2():
    print("work2 is running.")
    time.sleep(10)
    print("work2 run complete")

if __name__ == "__main__":
    # create process p1
    p1 = multiprocessing.Process(group=None, target=work1)
    # start process p1
    p1.start()
    # create process p2
    p2 = multiprocessing.Process(group=None, target=work2)
    # make p2 a daemon of the main process; this must be set before start(),
    # otherwise an exception is raised
    p2.daemon = True
    # start process p2
    p2.start()
    time.sleep(2)
    print("main process finished running!")
    # the alternative approach: p2.terminate()

The implementation results are as follows:

Because p2 is set as a daemon of the main process, when the main process finishes, the p2 child process ends as well and the work2 task stops, while work1 continues to run to the end.

Process pool

When only a few processes are needed, you can use multiprocessing.Process directly to generate them dynamically, but if you need to create many processes, the manual work of creating them becomes very large, so you can instead use the Pool class provided by the multiprocessing module to create a process pool.

Common methods of multiprocessing.Pool:

apply_async(func, args, kwds): calls func in a non-blocking (asynchronous) way; the task is submitted and execution continues immediately. args is the tuple of positional arguments passed to func, and kwds is the dictionary of keyword arguments passed to func.

apply(func, args, kwds): calls func in a blocking (synchronous) way; each task must finish before the next one can execute. It exists, but you will hardly ever need it.

close(): closes the Pool so that it no longer accepts new tasks

terminate(): ends the worker processes immediately, whether or not their tasks are complete

join(): the main process blocks, waiting for the child processes to exit; must be called after close() or terminate()

When initializing a Pool, you can specify the maximum number of processes. When a new task is submitted to the Pool, if the pool is not yet full, a new process is created to execute the task; but if the pool is full (the number of processes has reached the specified maximum), the task waits until a process in the pool becomes free, and that process then executes it.

import multiprocessing
from multiprocessing import Pool
import time

def work(i):
    print("work '{}' is executing.".format(i),
          multiprocessing.current_process().name,
          multiprocessing.current_process().pid)
    time.sleep(2)
    print("work '{}' execution completed.".format(i))

if __name__ == "__main__":
    # Pool(3) creates a process pool with a capacity of 3 processes
    pool = Pool(3)
    for i in range(10):
        # synchronous execution: each process in the pool waits for the
        # previous task to finish before executing the next
        # pool.apply(work, (i,))
        # asynchronous execution of the work task
        pool.apply_async(work, (i,))
    # the pool accepts no new tasks after close()
    pool.close()
    # wait for all child processes in the pool to finish; must come after
    # close(). With apply_async, the main process would otherwise exit
    # without waiting for the pool's tasks to complete.
    pool.join()

The execution result is:

From the results we can see that only three child processes execute tasks at any one time. Here we used asynchronous execution (pool.apply_async(work, (i,))). If the tasks were executed synchronously (pool.apply(work, (i,))), each process in the pool would wait for the previous task to finish before executing the next.
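apply_async also returns an AsyncResult object, which is how you collect return values from pool tasks; its .get() method blocks until that particular task has finished. A small sketch (the square function is made up for illustration):

```python
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(3) as pool:
        # apply_async returns an AsyncResult immediately (non-blocking);
        # .get() then blocks until that task's result is ready
        results = [pool.apply_async(square, (i,)) for i in range(5)]
        print([r.get() for r in results])   # [0, 1, 4, 9, 16]
```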

Thank you for reading this article carefully. I hope this introduction to Python processes and the process pool is helpful to you.
