2025-04-04 Update From: SLTechnology News & Howtos > Servers
Shulou (Shulou.com) 06/03 Report
(1) What is the concurrent processing capability of a server?
The more requests a server can handle per unit time, the stronger its concurrent processing capability.
The server's essential job is to pull user request data out of the kernel buffers as quickly as possible, process it promptly, place the response data into a send buffer, and then move on to the next batch of requests.
(2) How is a server's concurrent processing capacity measured?

1. Throughput
Quantitative metric: throughput, the maximum number of requests the server can process per unit time, measured in req/s.
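The definition above can be sketched as a one-line calculation. This is an illustrative helper, not part of any specific benchmarking tool; the function name and parameters are assumptions for the example.

```python
def throughput(total_requests: int, elapsed_seconds: float) -> float:
    """Throughput (req/s): requests completed per unit of elapsed time."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return total_requests / elapsed_seconds

# e.g. a run that completes 10000 requests in 20 s has a throughput
# of 500 req/s
```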
To go a step further, HTTP requests usually target different resources, i.e. different URLs: some request images, some dynamic content, some static pages. These requests take different amounts of time to serve, and the mix of request types is not fixed, so actual throughput is a complex quantity.
Because these requests differ in nature, the key to a server's concurrency lies in designing the best concurrency strategy for each class of request. A server handling many requests of different natures cannot, to some extent, bring its full performance to bear. Concurrency strategy design means that, as the server handles more requests at once, it reasonably coordinates and fully utilizes CPU computation and I/O operations, so as to deliver higher throughput under a large number of concurrent users.
In addition, how many users send requests at the same time is not something the server decides; once the actual number of concurrent users grows too large, the quality of service inevitably suffers. The point of knowing the maximum number of concurrent users, then, is to understand the server's carrying capacity and to plan capacity expansion against the expected user base.

When modeling users, note that visitors usually arrive through a browser. Browsers use multiple threads to download URLs under the same domain concurrently, but within a fixed cap, so the maximum concurrency above need not map one-to-one onto real users. From the server's point of view, the actual number of concurrent users can be understood as the total number of file descriptors the server currently holds on behalf of distinct users, i.e. the number of concurrent connections. Servers generally limit the maximum number of users they will serve at the same time, e.g. Apache's MaxClients parameter.

Going a step further: the server wants high throughput, while each user wants minimal waiting time. The two cannot both be fully satisfied, so the balance point between their interests is the maximum number of concurrent users we are after.
2. Stress testing
One principle must be clarified first: if 100 users each make 10 requests to the server simultaneously, is the pressure on the server the same as 1 user making 1000 requests in a row? No. For a single user, sending requests "continuously" actually means sending one request, receiving its response, and only then sending the next. So when one user makes 1000 consecutive requests, the server's network card receive buffer holds at most one of that user's requests at any moment; when 100 users each make 10 requests at the same time, the receive buffer may hold up to 100 requests waiting to be processed. The server is clearly under more pressure in the latter case.
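The two load shapes above can be sketched as follows. `send_request` stands in for a real HTTP call and is injected as a parameter so the shapes can be compared without a network; the function names are assumptions for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def sequential_load(send_request, total: int) -> int:
    """One user: `total` requests back to back, one in flight at a time."""
    done = 0
    for _ in range(total):
        send_request()   # next request starts only after this one returns
        done += 1
    return done

def concurrent_load(send_request, users: int, per_user: int) -> int:
    """`users` simulated users, each making `per_user` requests;
    up to `users` requests may be in flight at once."""
    def user_session(_):
        for _ in range(per_user):
            send_request()
        return per_user

    with ThreadPoolExecutor(max_workers=users) as pool:
        return sum(pool.map(user_session, range(users)))
```

Both calls issue the same total number of requests (100 × 10 = 1 × 1000), but only the concurrent shape keeps many requests in flight simultaneously.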
Conditions to consider when designing a stress test: (1) the number of concurrent users; (2) the total number of requests; (3) a description of the requested resources.
The number of concurrent users refers to the total number of users who send requests to the server at the same time.
The times measured in a stress test fall into two categories: (1) the user's average request waiting time (leaving aside, for now, the data's transmission time on the network and the user PC's local computation time); (2) the server's average request processing time.
The user's average request waiting time mainly measures the server's quality of service under a given number of concurrent users, while the server's average request processing time is the reciprocal of throughput. Generally speaking: the user's average request waiting time = the server's average request processing time × the number of concurrent users.
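The relation above can be written out as arithmetic. This is a rough sketch of the article's rule of thumb (it holds as an approximation when requests queue up at a saturated server); the function names are assumptions.

```python
def avg_user_wait(server_time_per_req: float, concurrent_users: int) -> float:
    """User's average wait ~= server's per-request processing time
    multiplied by the number of concurrent users."""
    return server_time_per_req * concurrent_users

def throughput_of(server_time_per_req: float) -> float:
    """Throughput is the reciprocal of the average processing time."""
    return 1.0 / server_time_per_req

# e.g. at 0.5 s of processing per request with 8 concurrent users,
# each user waits about 4 s while the server sustains 2 req/s
```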
(3) How can a server's concurrent processing capability be improved?

1. Improve the CPU's concurrent computing capability
The reason a server can process multiple requests at once is that the operating system provides a multi-execution-flow architecture, under which multiple tasks take turns using system resources, including the CPU, memory, and I/O. Here I/O mainly means disk I/O and network I/O.
(1) Multi-process & multi-thread
The usual implementation of multiple execution flows is the process. The advantage of multiple processes lies not only in taking turns on the CPU, but also in overlapping CPU computation with I/O operations (again, mainly disk I/O and network I/O). In fact, a process spends much of its time waiting on DMA operations, and modern DMA hardware lets the CPU stay out of the transfer entirely: via a system call, the process has the CPU issue an instruction to the network card or disk, then the process is suspended and releases the CPU; when the I/O device finishes its work, an interrupt makes the process runnable again. For a single task the CPU is thus idle most of the time, and that is precisely when multiple processes matter most.
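The overlap described above can be illustrated with a small sketch. Threads plus `time.sleep` stand in for processes blocked on DMA-driven I/O (an assumption made purely for the demonstration): while one task waits, others wait too, so n waits overlap instead of adding up.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io_request() -> str:
    time.sleep(0.1)   # stand-in for an I/O wait the CPU takes no part in
    return "done"

def serve_overlapped(n: int) -> list:
    """Service n I/O-bound requests with overlapping waits: total time
    is roughly one wait, not n waits."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(lambda _: fake_io_request(), range(n)))
```

Serving the same four requests one after another would take about 0.4 s; overlapped, the wall time stays near 0.1 s.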
A further strength of processes lies in the stability and robustness that come from their mutual independence.
Disadvantages of processes: each process has its own independent address space and life cycle. When a parent creates a child, the child fully inherits the parent's context, with the parent's address space logically copied into the child's (modern kernels defer the actual copying via copy-on-write). Creating a process with the fork() system call still carries real overhead, and if processes are created too frequently this overhead can become a major factor limiting performance.
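The common mitigation for per-request fork() overhead is to fork a pool of workers once at startup and reuse them, the "prefork" model used by servers such as Apache's prefork MPM. A minimal sketch using Python's `multiprocessing.Pool` (the `handle` function and pool size are assumptions for the example):

```python
from multiprocessing import Pool

def handle(request: int) -> int:
    """Stand-in for real request processing."""
    return request * 2

def serve(requests):
    """Fork 4 workers once, then reuse them for every request,
    instead of paying fork() overhead per request."""
    with Pool(processes=4) as workers:
        return workers.map(handle, requests)
```

The cost of creating the four processes is paid once and amortized over all the requests the pool serves.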
© 2024 shulou.com SLNews company. All rights reserved.