How to use Asyncio to build highly concurrent applications

2025-01-16 Update From: SLTechnology News&Howtos

This article introduces how to use Asyncio to build highly concurrent applications. Through a few simple examples, it shows what coroutines are good at and points out common misunderstandings in the design of high-concurrency applications.

C10K problem

In the early days, before the Internet became widespread, a server with 100 users online was already a very large application, and posed no real engineering challenge.

With the advent of the Web 2.0 era, the number of users grew exponentially, and servers needed much stronger concurrent processing power to carry them. This gave birth to the famous C10K problem: how can a single server support 10,000 simultaneous client connections?

The original server programming model was based on processes / threads: when a new client connection arrived, the server assigned a process or thread to handle it. This means that to solve the C10K problem, the operating system would need to run 10,000 processes or threads at the same time.

Processes and threads are among the most expensive resources in an operating system. Spawning a new process / thread for every connection wastes a great deal of resources. Moreover, constrained by hardware, there is an upper limit on how many processes / threads a system can run at once.

In other words, under the process / thread model, the number of client connections each server can handle is very limited. To support a large amount of traffic, the only option is the crude one of piling on more servers. Such human-wave tactics are neither stable nor economical.

To handle multiple network connections in a single process / thread, IO multiplexing technologies such as select, poll and epoll emerged. In the IO multiplexing model, a process / thread no longer blocks on one connection, but monitors many connections at once, dealing only with the active connections that have new data.
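As a minimal sketch of the idea, Python's standard selectors module wraps select / poll / epoll behind one interface: a single thread registers several sockets and is told which ones actually have data, without blocking on any individual connection. The socket pairs here stand in for client connections.

```python
import selectors
import socket

# One selector watches many sockets; select() reports only the
# connections that are actually ready, so a single thread never
# blocks waiting on any individual one.
sel = selectors.DefaultSelector()

a1, b1 = socket.socketpair()   # two independent "client connections"
a2, b2 = socket.socketpair()
for s in (b1, b2):
    s.setblocking(False)       # non-blocking, as multiplexing requires
    sel.register(s, selectors.EVENT_READ)

a1.sendall(b'ping')            # only the first connection has data

# select() returns just the ready sockets: b1 but not b2
ready = [key.fileobj for key, _ in sel.select(timeout=0)]
print(b1 in ready, b2 in ready)   # True False

sel.close()
for s in (a1, b1, a2, b2):
    s.close()
```

A real server would register a listening socket the same way and dispatch each ready connection to a handler, looping over sel.select() as its event loop.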

Why do we need coroutines?

The raw IO multiplexing programming model is far less intuitive than the blocking model, which causes a lot of inconvenience in engineering practice. The most typical example is the callback style in JavaScript, where callback functions fly everywhere — not an intuitive way of thinking.

To achieve a programming model as intuitive as blocking code, the concept of the coroutine (a user-mode thread) was proposed. Coroutines implement multiple execution contexts on top of a single process / thread, while an event loop built on IO multiplexing technologies such as epoll drives their scheduling and execution.

Coroutines can be regarded as a higher-level encapsulation of IO multiplexing. Although they carry some performance overhead compared with raw IO multiplexing, they stand out clearly against the process / thread model: a coroutine occupies far fewer resources than a process / thread, and switching between coroutines is much cheaper. Coroutine-based programming therefore has great potential in the field of high-concurrency applications.
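A tiny asyncio sketch shows this scheduling in action: each await hands control back to the event loop, which resumes the other coroutine, so two coroutines interleave their steps on a single thread.

```python
import asyncio

order = []

async def worker(name):
    for step in range(2):
        order.append((name, step))
        await asyncio.sleep(0)   # yield control back to the event loop

async def main():
    # gather() schedules both coroutines on the same event loop
    await asyncio.gather(worker('a'), worker('b'))

asyncio.run(main())
print(order)   # [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

No threads are involved: the alternation comes purely from each coroutine voluntarily yielding at its await point.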

However, the unique execution model of coroutines trips up many beginners and leads to plenty of mistakes.

Next, through some simple examples, we explore how coroutines are applied, what they are good at, and the common misunderstandings in designing and deploying highly concurrent applications. Since asyncio is the mainstream direction of Python development, the examples use asyncio.

The first coroutine application

A coroutine application is driven by an event loop, and its sockets must be in non-blocking mode, otherwise they will block the event loop. Therefore, once you adopt coroutines, you must say goodbye to many blocking class libraries. Take MySQL database operations as an example: with asyncio, we have to use the aiomysql package to connect to the database.
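To see why a blocking library is off-limits, here is a small comparison (timings are approximate): a blocking time.sleep freezes the whole event loop, forcing coroutines to run one after another, while the non-blocking asyncio.sleep yields, letting them overlap.

```python
import asyncio
import time

async def task(delay, blocking):
    if blocking:
        time.sleep(delay)           # blocks the entire event loop
    else:
        await asyncio.sleep(delay)  # yields, so other coroutines can run

async def timed(blocking):
    # Run two 0.2 s tasks "concurrently" and measure the total time
    start = time.monotonic()
    await asyncio.gather(task(0.2, blocking), task(0.2, blocking))
    return time.monotonic() - start

blocking_time = asyncio.run(timed(True))    # ~0.4 s: tasks run serially
async_time = asyncio.run(timed(False))      # ~0.2 s: tasks overlap
print(blocking_time > async_time)           # True
```

This is exactly the trap a blocking database driver creates, and why async replacements like aiomysql exist.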

If you want to develop Web applications, you can use the aiohttp package, which can be installed through the pip command:

```shell
$ pip install aiohttp
```

This example implements a complete Web server, even though all it does is return the current time:

```python
from aiohttp import web
from datetime import datetime

async def handle(request):
    return web.Response(text=datetime.now().strftime('%Y-%m-%d %H:%M:%S'))

app = web.Application()
app.add_routes([
    web.get('/', handle),
])

if __name__ == '__main__':
    web.run_app(app)
```

Line 4 implements the handler, which gets the current time and returns it.

Line 7 creates the application object, to which the handler is registered via a route.

Line 13 runs the Web application; the default port is 8080.

When a new request arrives, aiohttp creates a new coroutine to process it, and that coroutine is responsible for executing the corresponding handler. Therefore, the handler must be a proper coroutine function, defined with the async keyword.

After running the program, we can query the current time from it. On the command line, use curl to issue a request:

```shell
$ curl http://127.0.0.1:8080/
2020-08-06 15:50:34
```

Stress test

To develop highly concurrent applications, you need to evaluate an application's processing capacity. We can fire a large number of requests in a short period and measure the application's throughput. However, no matter how fast your hands are, you can only issue a few requests per second by hand. What to do?

We need a stress testing tool, such as ab from the Apache toolset. Installing and using ab is beyond the scope of this article; please refer to: Web stress testing (https://network.fasionchan.com/zh_CN/latest/performance/web-pressure-test.html).

Without further ado, let's fire 10,000 requests at the server and see the result.

```shell
$ ab -n 10000 -c 100 http://127.0.0.1:8080/
This is ApacheBench, Version 2.3
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

Server Software:        Python/3.8
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /
Document Length:        19 bytes

Concurrency Level:      100
Time taken for tests:   5.972 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      1700000 bytes
HTML transferred:       190000 bytes
Requests per second:    1674.43 [#/sec] (mean)
Time per request:       59.722 [ms] (mean)
Time per request:       0.597 [ms] (mean, across all concurrent requests)
Transfer rate:          277.98 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   1.5      1      15
Processing:    43   58   5.0     57      89
Waiting:       29   47   6.3     47      85
Total:         43   60   4.8     58      90

Percentage of the requests served within a certain time (ms)
  50%     58
  66%     59
  75%     60
  80%     61
  90%     65
  95%     69
  98%     72
  99%     85
 100%     90 (longest request)
```

The -n option specifies the total number of requests to send.

The -c option specifies the concurrency level, i.e. how many requests are in flight at the same time.

From the report output by ab, we learn that all 10,000 requests succeeded, taking 5.972 seconds in total, for a processing rate of 1674.43 requests per second.

Now, let's increase the concurrency level and see whether the processing rate improves:

```shell
$ ab -n 10000 -c 1000 http://127.0.0.1:8080/
```

At a concurrency of 1000, the 10,000 requests completed in 5.771 seconds, a rate of 1732.87 requests per second — a slight but insignificant improvement. This is not surprising: the handler logic in this example is almost entirely computation, so raising the concurrency gains little.

What are coroutines good at?

Coroutines excel at IO-bound application logic. For example, while one coroutine waits for a database response, the event loop wakes another ready coroutine to run, improving throughput. To keep things simple, we simulate waiting for a database by sleeping inside the coroutine.

```python
import asyncio
from aiohttp import web
from datetime import datetime

async def handle(request):
    # Sleep for one second to simulate waiting for a database
    await asyncio.sleep(1)
    return web.Response(text=datetime.now().strftime('%Y-%m-%d %H:%M:%S'))

app = web.Application()
app.add_routes([
    web.get('/', handle),
])

if __name__ == '__main__':
    web.run_app(app)
```

Stress test results at different concurrency levels:

Concurrency   Total requests   Time (seconds)   Rate (requests/second)
100           10000            102.310          97.74
500           10000            22.129           451.89
1000          10000            12.780           782.50

As the table shows, as concurrency increases, throughput improves markedly, and the trend is close to linear.
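In a real high-concurrency application you usually also want to bound how many coroutines run at once, so a burst of traffic cannot exhaust connections or memory. A sketch using asyncio.Semaphore — the 10 ms sleep stands in for real network IO, and the numbers are illustrative:

```python
import asyncio
import time

async def request(i, sem):
    async with sem:                  # at most `concurrency` in flight
        await asyncio.sleep(0.01)    # stands in for real network IO
        return i

async def main(total=1000, concurrency=100):
    sem = asyncio.Semaphore(concurrency)
    results = await asyncio.gather(*(request(i, sem) for i in range(total)))
    return len(results)

start = time.monotonic()
completed = asyncio.run(main())
elapsed = time.monotonic() - start
# 1000 requests, 100 at a time, 10 ms each: roughly 10 waves of work
print(completed, round(elapsed, 1))
```

The same pattern (a semaphore wrapped around the IO call) applies whether the coroutines are aiohttp client requests or aiomysql queries.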

This concludes our study of how to use Asyncio to build highly concurrent applications. Theory works best when paired with practice, so go and try it yourself!
