2025-04-01 Update — SLTechnology News & Howtos > Development
Shulou (Shulou.com) 06/02 Report
This article walks through processes and daemon processes in the Python stack, from the underlying concepts to working code. I hope you get something out of it.
1. Understanding processes
The concept of a process:
A process is a running program, and it is the smallest unit of resource allocation in the operating system. Resource allocation means assigning physical resources such as CPU time and memory. The process number (PID) is the unique identifier of a process. Data between two processes is isolated by default; they communicate through mechanisms such as sockets.
Concurrency and parallelism:
Concurrency: one CPU executes multiple programs by switching rapidly between them, so they appear to run at the same time. Parallelism: multiple CPUs execute multiple programs at literally the same time.
CPU process-scheduling algorithms:
# First come, first served (FCFS): processes run in the order they arrive.
# Shortest job first: short jobs are scheduled ahead of long ones, so short tasks finish quickly.
# Time-slice rotation (round robin): each task runs for one time slice, then the CPU moves on to the next task.
# Multilevel feedback queue: the longer a job has already run, the lower its priority and the less CPU it gets; short new jobs get higher priority and more CPU.
For example: jobs 1-4 each get a 0.4 s slice in the first-level queue. Job 1 finishes; jobs 2, 3 and 4 do not, so they drop to the second-level queue, while newly arriving short jobs enter the first-level queue. The second-level queue grants 0.3 s: job 2 finishes, and jobs 3 and 4 drop to the third-level queue with 0.2 s slices, where job 3 finishes. Job 4, still unfinished, drops to the fourth-level queue and is given 0.1 s slices. Such a long-running job might be something like a large download.
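The time-slice rotation idea above can be sketched with a tiny simulation (illustrative only, not from the original article; real schedulers are far more involved):

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate time-slice rotation: each job runs for at most `quantum`
    units, then goes to the back of the queue until it is finished."""
    queue = deque(jobs)            # items are (name, remaining_time)
    schedule = []                  # the order in which slices execute
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        schedule.append((name, run))
        if remaining > run:        # not finished: requeue with less time left
            queue.append((name, remaining - run))
    return schedule

print(round_robin([("A", 3), ("B", 1), ("C", 2)], 2))
# [('A', 2), ('B', 1), ('C', 2), ('A', 1)]
```

Job A needs a second turn because its 3 units do not fit in one 2-unit slice; short jobs B and C finish in a single turn, which is exactly why short jobs feel responsive under round robin.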
Process three-state diagram:
(1) Ready state: the process has been allocated every resource except the CPU and is waiting to execute.
(2) Running state: the CPU is currently executing the process.
(3) Blocked state: the process cannot proceed because it is waiting for some event, so the CPU executes other processes instead. Examples: waiting for I/O input to complete, or waiting on a buffer request that cannot yet be satisfied.
Synchronous/asynchronous and blocking/non-blocking:
Synchronous: in multitasking, one task must wait for another to finish before proceeding; there is a single main line of execution. Asynchronous: tasks proceed without waiting for each other; there are two (or more) main lines of execution. Blocking: for example, input() in code blocks; you must enter a string or the code will not continue. Non-blocking: no waiting; ordinary code simply runs on.
# Synchronous blocking: inefficient; the CPU is underutilized.
# Asynchronous blocking: e.g. socketserver, which can hold multiple connections at once, but each connection still blocks on recv.
# Synchronous non-blocking: ordinary code with no input-like calls, executed from top to bottom; the default.
# Asynchronous non-blocking: the most efficient; it keeps the CPU fully busy.
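The blocking vs. non-blocking distinction can be seen directly with sockets. Here is a minimal sketch of my own (using socket.socketpair, which gives two connected sockets inside one process): a non-blocking recv raises BlockingIOError and returns control immediately, where a blocking recv would wait.

```python
import socket
import time

def try_recv(sock):
    # Non-blocking receive: return data if any is ready, otherwise
    # return None immediately instead of waiting.
    try:
        return sock.recv(1024)
    except BlockingIOError:
        return None

a, b = socket.socketpair()   # two connected sockets in one process
a.setblocking(False)

print(try_recv(a))           # nothing sent yet: None (a blocking recv would hang here)
b.send(b"hello")
time.sleep(0.1)              # give the data a moment to arrive
print(try_recv(a))           # b'hello'
a.close()
b.close()
```

The first call is where a synchronous-blocking program would stall; the non-blocking version can go do other work and poll again later.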
Daemon process:
# A child process can be marked as a daemon; it is then tied to the main process and ends when the main process's code finishes executing.
(1) A daemon process is terminated as soon as the main process's code finishes executing.
(2) A daemon process cannot open child processes of its own; attempting to do so raises an exception.
Lock (Lock):
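A lock lets only one process modify shared data at a time. As a concrete illustration (my own sketch, not from the original article), four processes each increment a shared counter 1000 times; the lock serializes the read-modify-write so no increment is lost:

```python
from multiprocessing import Process, Lock, Value

def add(counter, lock):
    # Serialize the read-modify-write: without the lock, two processes
    # can read the same value and one increment gets lost.
    for _ in range(1000):
        with lock:                 # same as lock.acquire() ... lock.release()
            counter.value += 1

if __name__ == "__main__":
    lock = Lock()
    counter = Value("i", 0, lock=False)   # a raw shared int with no built-in lock
    procs = [Process(target=add, args=(counter, lock)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)   # 4000: every increment survived
```

The `with lock:` form is preferred over manual acquire/release because it releases the lock even if the body raises, which avoids one common source of deadlock.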
lock.acquire() locks and lock.release() unlocks. A Lock allows only one acquisition at a time: when multiple processes modify the same piece of data, locking guarantees that only one task modifies it at a time, i.e. the modifications become serial. Yes, this is slower, but the loss of speed buys data safety. Allowing several processes to hold the lock at once is a semaphore (Semaphore), a variant of a lock implemented as a counter plus a lock. A Lock is a mutex: whichever process grabs the resource first locks it and changes its content, which keeps the data consistent. Note: acquiring several locks without releasing them causes deadlock; acquire and release always come in pairs.

2. Process syntax

```python
import os

# Shell commands for inspecting processes:
# ps -aux                 view running processes
# ps -aux | grep 2784     filter for process 2784
# kill -9 <pid>           forcibly kill a process

res = os.getpid()    # get the current process number
print(res)
res = os.getppid()   # get the parent process number
print(res)
```

(1) Creating a process

```python
from multiprocessing import Process
import os

def func():
    # e.g. 1. Child process id: 3561, 2. Parent process id: 3560
    print("1. Child process id: {}, 2. Parent process id: {}".format(os.getpid(), os.getppid()))

if __name__ == "__main__":
    # create a child process and get back a process object
    p = Process(target=func)
    # start the child process
    p.start()
    # e.g. 3. Main process id: 3560, 4. Parent process id: 3327
    print("3. Main process id: {}, 4. Parent process id: {}".format(os.getpid(), os.getppid()))
```

(2) Creating a process with parameters

```python
from multiprocessing import Process
import os
import time

def func(n):
    time.sleep(1)
    for i in range(1, n + 1):   # 1 .. n
        print(i)
    print("1. Child process id: {}, 2. Parent process id: {}".format(os.getpid(), os.getppid()))

if __name__ == "__main__":
    n = 6
    # target= specifies the task, args= is the argument tuple
    p = Process(target=func, args=(n,))
    p.start()
    for i in range(1, n + 1):
        print("*" * i)
```

(3) Data between processes is isolated

```python
from multiprocessing import Process
import time

total = 100

def func():
    global total
    total += 1
    print(total)    # 101 inside the child process

if __name__ == "__main__":
    p = Process(target=func)
    p.start()
    time.sleep(1)
    print(total)    # still 100 in the main process
```

(4) Processes run asynchronously

1. Multiple processes run concurrently and asynchronously; because of the CPU's scheduling strategy, there is no guarantee which task executes first.
2. By default the main process runs slightly ahead of the child process, because creating the child process allocates space and resources, which may block; the CPU immediately switches tasks to maximize the program's overall efficiency.
3. By default the main process waits for all its child processes to finish before the program shuts down. Otherwise, a closed program could leave child processes occupying CPU and memory in the background as zombie processes. To make processes easier to manage, the main process waits for its children by default and the program shuts down as a whole.

```python
from multiprocessing import Process
import os

def func(n):
    print("1. Child process id: {}, 2. Parent process id: {}, n = {}".format(os.getpid(), os.getppid(), n))

if __name__ == "__main__":
    for i in range(1, 11):
        p = Process(target=func, args=(i,))
        p.start()
    print("main process execution ends...", os.getpid())
```

3. join and custom process classes
With join, all the child processes finish executing before the main process continues.
# 1. Synchronizing the main process and a child process: join
# join waits for the current child process to finish before executing the code after it; it is used to synchronize the child and parent processes.

(1) Basic use of join

```python
from multiprocessing import Process

def func():
    print("send the first email: my dear leader, are you there?")

if __name__ == "__main__":
    p = Process(target=func)
    p.start()
    # time.sleep(0.1)
    p.join()   # wait here until the child process finishes
    print("send the second email: I want my salary raised to 60,000 a month")
```

(2) join with multiple child processes

```python
from multiprocessing import Process
import time

def func(i):
    time.sleep(1)
    print("send the first email {}: my dear leader, are you there?".format(i))

if __name__ == "__main__":
    lst = []
    for i in range(1, 11):
        p = Process(target=func, args=(i,))
        p.start()
        # calling join inside this loop would make the program synchronous;
        # collect the process objects and join them all afterwards instead
        lst.append(p)
    for p in lst:
        p.join()
    print("send the second email: I want my salary raised to 60,000 a month")
```

# 2. Creating a process with a custom class
(1) Basic syntax

```python
from multiprocessing import Process
import os

class MyProcess(Process):
    def run(self):
        print("1. Child process id: {}, 2. Parent process id: {}".format(os.getpid(), os.getppid()))

if __name__ == "__main__":
    p = MyProcess()
    p.start()
```

(2) A custom process class with parameters

```python
from multiprocessing import Process
import os

class MyProcess(Process):
    def __init__(self, name):
        # manually call the parent-class constructor to finish
        # initializing the inherited members
        super().__init__()
        self.name = name

    def run(self):
        print("1. Child process id: {}, 2. Parent process id: {}".format(os.getpid(), os.getppid()))
        print(self.name)

if __name__ == "__main__":
    p = MyProcess("I am a parameter")
    p.start()
```

4. Daemon processes

A daemon process guards the main process: as soon as all the code of the main process has finished executing, the daemon process is killed immediately.

(1) Basic syntax

```python
from multiprocessing import Process

def func():
    # time.sleep(1)
    print("start... current child process")
    print("end... current child process")

if __name__ == "__main__":
    p = Process(target=func)
    p.daemon = True   # the daemon flag must be set before start()
    p.start()
    print("main process execution ends...")
```

(2) A daemon among multiple child processes

By default the main process waits for all non-daemon child processes to finish before the program closes and releases its resources; a daemon process is shut down as soon as the main process's code finishes.

```python
from multiprocessing import Process
import time

def func1():
    print("start... func1 executes in the current child process")
    print("end... func1 finishes in the current child process")

def func2():
    count = 1
    while True:
        print("*" * count)
        time.sleep(1)
        count += 1

if __name__ == "__main__":
    p1 = Process(target=func1)
    p2 = Process(target=func2)
    p2.daemon = True   # make p2 a daemon; it dies with the main process
    p1.start()
    p2.start()
    print("main process execution ends...")
```

(3) Using a daemon to report liveness

```python
from multiprocessing import Process
import time

def alive():
    while True:
        print("Server 3 sends a heartbeat to the master monitor: I am ok~")
        time.sleep(1)

def func():
    while True:
        try:
            print("Server 3 is handling concurrent access from 30,000 users")
            time.sleep(3)
            # deliberately raise an exception to trigger the except branch
            raise RuntimeError
        except RuntimeError:
            print("Server 3 can't handle it... come and fix me")
            break

if __name__ == "__main__":
    p1 = Process(target=alive)
    p2 = Process(target=func)
    p1.daemon = True
    p1.start()
    p2.start()
    # Wait for p2 before running the rest of the main process; otherwise the
    # main process code ends at once, the daemon is killed immediately, and
    # the liveness reporting is lost.
    p2.join()
    print("main process execution ends...")
```

Job: implement a concurrent TCP server using multiple processes.
Tips:
# Containers cannot be turned into a byte stream directly when sending packets; JSON is what machines use to exchange data.
# A file object is an iterator, and an iterator returns its data line by line.
# When a newly created child process blocks while resources are allocated to it, the code after it keeps executing: that is asynchronous execution, with two main lines. Ajax, the technique of posting data without refreshing the page, is likewise an asynchronous program, and a process is a typical asynchronous program.
# Take another look at super().
# By default a daemon process is killed as soon as the main process's code ends, but the main process still waits for its non-daemon child processes to finish.
That is what processes and daemon processes in the Python stack look like. If you happen to have similar doubts, the analysis above should help you understand them.