2025-03-28 Update From: SLTechnology News&Howtos (shulou.com)
High-concurrency systems rely on three sharp tools: caching, degradation, and rate limiting.
Rate limiting protects the system by capping the speed of concurrent accesses/requests. Once the cap is reached, a request can be denied (sent straight to an error page), queued (as in flash-sale scenarios), or degraded (served fallback or default data).
Common rate limits in high-concurrency systems include: limiting total concurrency (e.g. a database connection pool), limiting instantaneous concurrency (e.g. nginx's limit_conn module, which caps concurrent connections), and limiting the average rate within a time window (e.g. nginx's limit_req module, which caps the average requests per second).
Traffic can also be limited by the number of network connections, network throughput, or CPU/memory load.
1. Rate-limiting algorithms
The simplest and crudest rate-limiting algorithm is the counter method; the more commonly used ones are the leaky bucket and token bucket algorithms.
1.1 Counter
The counter method is the simplest rate-limiting algorithm to implement. Suppose we stipulate that interface A may not be accessed more than 100 times per minute.
We set up a counter with a validity period of one minute (i.e. it resets to 0 every minute). Each incoming request increments the counter by 1; once the counter exceeds 100, further requests are rejected as excessive.
Although simple, this algorithm has a fatal flaw: the window-boundary problem.
Suppose 100 requests arrive just before 1:00, the counter resets at 1:00, and another 100 requests arrive just after 1:00. The counter never exceeds 100, so no request is intercepted.
Yet 200 requests arrived within that short period, far more than the intended 100.
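The counter method and its boundary problem can be sketched in a few lines of C. This is an illustrative toy (all names are hypothetical, not from nginx); time is passed in explicitly to keep it deterministic:

```c
#include <assert.h>
#include <stdbool.h>

/* fixed-window counter: at most `limit` requests per `window_len`-second window */
typedef struct {
    long window_start; /* start of the current window, in seconds */
    int  count;        /* requests seen in the current window */
    int  limit;        /* maximum requests allowed per window */
    int  window_len;   /* window length in seconds */
} fixed_window_t;

/* returns true if the request arriving at time `now` (seconds) is allowed */
bool fw_allow(fixed_window_t *fw, long now)
{
    if (now - fw->window_start >= fw->window_len) {
        fw->window_start = now;  /* window rolled over: reset the counter */
        fw->count = 0;
    }
    if (fw->count >= fw->limit) {
        return false;            /* limit reached within this window */
    }
    fw->count++;
    return true;
}
```

Firing 100 requests at second 59 and another 100 at second 60 shows the flaw: both batches pass in full, so 200 requests slip through within two seconds.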
1.2 Leaky bucket algorithm
Picture a leaky bucket of fixed capacity from which droplets flow out at a constant rate; if the bucket is empty, nothing flows out. Water flows in at a random rate, and whatever would exceed the bucket's capacity overflows and is discarded.
The leaky bucket algorithm therefore inherently smooths the request rate and can be used for traffic shaping and rate-limiting control.
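A minimal leaky bucket sketch in C (illustrative names, not nginx code): the "water level" drains at a constant rate, and an arrival that would overflow the bucket is discarded.

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    double water;     /* current level of the bucket */
    double capacity;  /* bucket size */
    double rate;      /* constant outflow, requests per second */
    double last;      /* time of the last update, seconds */
} leaky_bucket_t;

bool lb_allow(leaky_bucket_t *lb, double now)
{
    /* drain water for the time elapsed since the last call */
    double leaked = (now - lb->last) * lb->rate;
    lb->water = lb->water > leaked ? lb->water - leaked : 0;
    lb->last = now;

    if (lb->water + 1 > lb->capacity) {
        return false;  /* would overflow: discard the request */
    }
    lb->water += 1;    /* admit the request */
    return true;
}
```

With capacity 5 and rate 1/s, a burst of 7 simultaneous arrivals admits only 5; two seconds later, 2 units have drained, so 2 more arrivals can be admitted.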
1.3 Token bucket algorithm
A token bucket stores a fixed number of tokens: tokens are added at a fixed rate r, the bucket holds at most b tokens, and tokens added to a full bucket are discarded.
When a request arrives, it tries to take a token from the bucket; if one is available, the request proceeds; otherwise the request waits in a queue or is dropped.
Note the difference: the leaky bucket's outflow rate is constant (or zero), while the token bucket's outflow rate can momentarily exceed r when tokens have accumulated in the bucket.
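A matching token bucket sketch (again an illustrative toy, not nginx code) makes the contrast with the leaky bucket concrete: a full bucket lets a burst of up to b requests through at once, above the refill rate r.

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    double tokens;    /* tokens currently in the bucket */
    double capacity;  /* b: maximum tokens stored */
    double rate;      /* r: tokens added per second */
    double last;      /* time of the last refill, seconds */
} token_bucket_t;

bool tb_allow(token_bucket_t *tb, double now)
{
    tb->tokens += (now - tb->last) * tb->rate;  /* refill at rate r */
    if (tb->tokens > tb->capacity) {
        tb->tokens = tb->capacity;              /* tokens beyond b are discarded */
    }
    tb->last = now;

    if (tb->tokens < 1) {
        return false;  /* no token available: queue or drop */
    }
    tb->tokens -= 1;   /* consume one token */
    return true;
}
```

Starting with a full bucket (b = 5, r = 1/s), 6 simultaneous arrivals admit 5 in a burst; after waiting 3 seconds, 3 tokens have been refilled, so 3 of the next 4 arrivals pass.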
2. nginx basics
Nginx offers two main rate-limiting methods: limiting by connection count (ngx_http_limit_conn_module) and limiting by request rate (ngx_http_limit_req_module).
Before studying the limiting modules, we need a little background on how nginx processes HTTP requests and handles events.
2.1 HTTP request processing phases
Nginx divides HTTP request processing into 11 phases. Most HTTP modules register their handlers with a particular phase (four phases accept no custom handlers); when nginx processes an HTTP request, the registered handlers are invoked one by one.

```c
typedef enum {
    NGX_HTTP_POST_READ_PHASE = 0,   /* currently only the realip module registers a handler here
                                       (useful when nginx sits behind a proxy, to recover the
                                       client's original IP for the backend) */
    NGX_HTTP_SERVER_REWRITE_PHASE,  /* rewrite directives in the server block rewrite the URL */
    NGX_HTTP_FIND_CONFIG_PHASE,     /* find the matching location; no custom handlers */
    NGX_HTTP_REWRITE_PHASE,         /* rewrite directives in the location block rewrite the URL */
    NGX_HTTP_POST_REWRITE_PHASE,    /* check whether the URL was rewritten; if so, go back to
                                       FIND_CONFIG; no custom handlers */
    NGX_HTTP_PREACCESS_PHASE,       /* access control; the rate-limiting modules register here */
    NGX_HTTP_ACCESS_PHASE,          /* access control */
    NGX_HTTP_POST_ACCESS_PHASE,     /* act on the result of the access phase; no custom handlers */
    NGX_HTTP_TRY_FILES_PHASE,       /* only runs when try_files is configured; no custom handlers */
    NGX_HTTP_CONTENT_PHASE,         /* content generation: produce the response for the client */
    NGX_HTTP_LOG_PHASE              /* logging */
} ngx_http_phases;
```
Nginx represents a module with the structure ngx_module_s, whose ctx field points to the module's context structure. The HTTP module context structure (all of its fields are function pointers) looks like this:

```c
typedef struct {
    ngx_int_t   (*preconfiguration)(ngx_conf_t *cf);
    ngx_int_t   (*postconfiguration)(ngx_conf_t *cf); /* registers handlers with their phases */

    void       *(*create_main_conf)(ngx_conf_t *cf);  /* main (http block) configuration */
    char       *(*init_main_conf)(ngx_conf_t *cf, void *conf);

    void       *(*create_srv_conf)(ngx_conf_t *cf);   /* server configuration */
    char       *(*merge_srv_conf)(ngx_conf_t *cf, void *prev, void *conf);

    void       *(*create_loc_conf)(ngx_conf_t *cf);   /* location configuration */
    char       *(*merge_loc_conf)(ngx_conf_t *cf, void *prev, void *conf);
} ngx_http_module_t;
```
Taking ngx_http_limit_req_module as an example, its postconfiguration method is essentially:

```c
static ngx_int_t
ngx_http_limit_req_init(ngx_conf_t *cf)
{
    h = ngx_array_push(&cmcf->phases[NGX_HTTP_PREACCESS_PHASE].handlers);

    /* the module's rate-limiting handler; invoked while nginx processes an HTTP
       request to decide whether to continue or reject it */
    *h = ngx_http_limit_req_handler;

    return NGX_OK;
}
```
2.2 A brief look at nginx event handling
Assume nginx is using epoll.
Nginx registers every file descriptor it cares about with epoll. The add method is declared as follows:

```c
static ngx_int_t ngx_epoll_add_event(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags);
```
Its first parameter is an ngx_event_t pointer representing a read or write event of interest. Nginx may attach a timeout timer to the event to handle expiry; the relevant fields are:

```c
struct ngx_event_s {
    ngx_event_handler_pt  handler;    /* function pointer: the event handler */
    ngx_rbtree_node_t     timer;      /* timeout timer node, stored in a red-black tree
                                         (the node key is the event's expiry time) */
    unsigned              timedout:1; /* records whether the event has timed out */
};
```
Nginx calls epoll_wait in a loop to monitor all descriptors and handle read/write events. epoll_wait blocks; its last parameter, timeout, is the maximum time to block, after which the call returns even if no event occurred.
To choose that timeout, nginx takes the nearest node in the timer red-black tree mentioned above:

```c
ngx_msec_t
ngx_event_find_timer(void)
{
    node  = ngx_rbtree_min(root, sentinel);
    timer = (ngx_msec_int_t) (node->key - ngx_current_msec);

    return (ngx_msec_t) (timer > 0 ? timer : 0);
}
```
At the end of each loop iteration, nginx checks the red-black tree for expired events; each expired event is marked timedout = 1 and its handler is invoked:

```c
void
ngx_event_expire_timers(void)
{
    for ( ;; ) {
        node = ngx_rbtree_min(root, sentinel);

        if ((ngx_msec_int_t) (node->key - ngx_current_msec) <= 0) {
            ev->timedout = 1;
            ev->handler(ev);
            continue;
        }

        break;
    }
}
```

Through these mechanisms nginx handles both socket events and timed events.
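The pattern above ("block until the nearest timer, then expire everything that is due") can be sketched in miniature. In this illustrative toy a sorted array stands in for nginx's red-black tree, and all names are hypothetical:

```c
#include <assert.h>

#define MAX_TIMERS 16

typedef struct {
    long deadlines[MAX_TIMERS]; /* ascending expiry times, in ms */
    int  n;                     /* number of pending timers */
} timer_set_t;

/* timeout to pass to the poll call: time until the nearest timer, clamped at 0 */
long find_timer(const timer_set_t *ts, long now_ms)
{
    long d;
    if (ts->n == 0) return -1;  /* no timers: block indefinitely */
    d = ts->deadlines[0] - now_ms;
    return d > 0 ? d : 0;
}

/* remove and count timers expired by now_ms (nginx would instead mark the
   owning event timedout and call its handler) */
int expire_timers(timer_set_t *ts, long now_ms)
{
    int fired = 0;
    while (ts->n > 0 && ts->deadlines[0] <= now_ms) {
        for (int i = 1; i < ts->n; i++) ts->deadlines[i - 1] = ts->deadlines[i];
        ts->n--;
        fired++;
    }
    return fired;
}
```

An event loop would call find_timer to compute the epoll_wait timeout, then expire_timers after epoll_wait returns, exactly mirroring the two nginx functions shown above.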
3. ngx_http_limit_req_module in depth
The ngx_http_limit_req_module module limits the request rate, i.e. the number of requests a user may issue within a given period, using the leaky bucket algorithm.
3.1 Configuration directives
The module exposes the following directives for configuring rate-limiting policies:

```c
/* each directive mainly carries two fields: its name and the function that parses it */
static ngx_command_t ngx_http_limit_req_commands[] = {

    /* typical usage: limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
       $binary_remote_addr is the client IP.
       zone= names a storage area (space is needed to record each client's access
       rate; stale entries are evicted with an LRU policy; the area is allocated in
       shared memory so every worker process can access it).
       rate= is the limit, 1 request per second in this example */
    { ngx_string("limit_req_zone"), ngx_http_limit_req_zone, },

    /* usage: limit_req zone=one burst=5 nodelay;
       zone= selects the shared area.
       burst handles traffic spikes: it is the maximum number of queued requests.
       Requests above the configured rate are queued rather than rejected, and
       requests beyond burst are rejected outright.
       nodelay must be used with burst: queued requests are then processed
       immediately instead of being paced at the limit rate; otherwise the client
       may already have timed out by the time they are served */
    { ngx_string("limit_req"), ngx_http_limit_req, },

    /* log level used when a request is limited;
       usage: limit_req_log_level info | notice | warn | error; */
    { ngx_string("limit_req_log_level"), ngx_conf_set_enum_slot, },

    /* status code returned to the client when a request is rejected;
       usage: limit_req_status 503; */
    { ngx_string("limit_req_status"), ngx_conf_set_num_slot, },
};
```
Note: $binary_remote_addr is a variable provided by nginx that can be used directly in the configuration file. Nginx provides many such variables; see the ngx_http_core_variables array in ngx_http_variables.c:

```c
static ngx_http_variable_t ngx_http_core_variables[] = {

    { ngx_string("http_host"), NULL, ngx_http_variable_header,
      offsetof(ngx_http_request_t, headers_in.host), 0, 0 },

    { ngx_string("http_user_agent"), NULL, ngx_http_variable_header,
      offsetof(ngx_http_request_t, headers_in.user_agent), 0, 0 },

    ...
};
```
3.2 Source walkthrough
During postconfiguration, ngx_http_limit_req_module registers ngx_http_limit_req_handler with the NGX_HTTP_PREACCESS_PHASE phase of HTTP processing.
ngx_http_limit_req_handler runs the leaky bucket algorithm and decides whether the request exceeds the configured rate, so that it can be dropped, queued, or passed through.
On a user's first request, a new record is created (storing mainly the access count and access time), keyed by the hash of the configured variable (e.g. $binary_remote_addr). Records are stored both in a red-black tree (for fast lookup) and in an LRU queue (when storage runs out, records are evicted from the tail). On subsequent requests the record is found and updated via the red-black tree and moved to the head of the LRU queue.
3.2.1 Data structures
limit_req_zone configures the storage area (name and size), the limit rate, and the limit variable (client IP, etc.) required by the algorithm:

```c
typedef struct {
    ngx_http_limit_req_shctx_t  *sh;
    ngx_slab_pool_t             *shpool; /* memory pool */
    ngx_uint_t                   rate;   /* limit rate (qps, stored multiplied by 1000) */
    ngx_int_t                    index;  /* index of the configured limit variable
                                            among nginx's variables */
    ngx_str_t                    var;    /* limit variable name */
    ngx_http_limit_req_node_t   *node;
} ngx_http_limit_req_ctx_t;

/* the shared storage area is initialized as well */
struct ngx_shm_zone_s {
    void                 *data; /* points to the ngx_http_limit_req_ctx_t structure */
    ngx_shm_t             shm;  /* shared memory */
    ngx_shm_zone_init_pt  init; /* initialization function pointer */
    void                 *tag;  /* points to the ngx_http_limit_req_module structure */
};
```
limit_req configures which storage area to use, the queue size, and whether queued requests are processed immediately:

```c
typedef struct {
    ngx_shm_zone_t *shm_zone; /* shared storage area */
    ngx_uint_t      burst;    /* queue size */
    ngx_uint_t      nodelay;  /* used with burst: if set, queued requests are
                                 processed immediately rather than paced at the
                                 limit rate */
} ngx_http_limit_req_limit_t;
```
As mentioned earlier, user access records live in both the red-black tree and the LRU queue:

```c
/* per-client record */
typedef struct {
    u_char       color;
    u_char       dummy;
    u_short      len;     /* data length */
    ngx_queue_t  queue;
    ngx_msec_t   last;    /* last access time */
    ngx_uint_t   excess;  /* current number of pending requests (nginx uses this
                             to implement the leaky bucket algorithm) */
    ngx_uint_t   count;   /* total number of requests for this record */
    u_char       data[1]; /* key content (records are first looked up by hash,
                             then the content is compared for equality) */
} ngx_http_limit_req_node_t;

/* red-black tree node; key is the hash of the user's limit variable */
struct ngx_rbtree_node_s {
    ngx_rbtree_key_t   key;
    ngx_rbtree_node_t *left;
    ngx_rbtree_node_t *right;
    ngx_rbtree_node_t *parent;
    u_char             color;
    u_char             data;
};

typedef struct {
    ngx_rbtree_t       rbtree;   /* red-black tree */
    ngx_rbtree_node_t  sentinel; /* NIL node */
    ngx_queue_t        queue;    /* LRU queue */
} ngx_http_limit_req_shctx_t;

/* a queue node holds only prev and next pointers */
struct ngx_queue_s {
    ngx_queue_t *prev;
    ngx_queue_t *next;
};
```
Observation 1: the ngx_http_limit_req_node_t records form a doubly linked list through their prev and next pointers, implementing the LRU queue: newly accessed nodes are always inserted at the head, and eviction removes nodes from the tail.

```c
ngx_http_limit_req_ctx_t *ctx;
ngx_queue_t              *q;

q  = ngx_queue_last(&ctx->sh->queue);
lr = ngx_queue_data(q, ngx_http_limit_req_node_t, queue);

/* ngx_queue_data recovers the ngx_http_limit_req_node_t base address from the
   embedded ngx_queue_t: the queue field's address minus its offset within the
   structure is the structure's base address */
#define ngx_queue_data(q, type, link) \
    (type *) ((u_char *) q - offsetof(type, link))
```
Observation 2: the algorithm first locates the red-black tree node by key to find the record. How does the tree node relate to the ngx_http_limit_req_node_t record? The module contains this code:

```c
/* allocate memory for a new record: compute the required size */
size = offsetof(ngx_rbtree_node_t, color)
       + offsetof(ngx_http_limit_req_node_t, data)
       + len;

node = ngx_slab_alloc_locked(ctx->shpool, size);
node->key = hash;

/* color is a u_char, so why can its address be cast to an
   ngx_http_limit_req_node_t pointer? */
lr = (ngx_http_limit_req_node_t *) &node->color;

lr->len    = (u_char) len;
lr->excess = 0;
ngx_memcpy(lr->data, data, len);

ngx_rbtree_insert(&ctx->sh->rbtree, node);
ngx_queue_insert_head(&ctx->sh->queue, &lr->queue);
```

The answer: the color and data fields of ngx_rbtree_node_s carry no meaning of their own; the structure's declared form differs from its stored form. Nginx overlays the record on top of those fields, so a single allocation holds the tree node immediately followed by the record.
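The overlay trick can be demonstrated with a miniature, hypothetical pair of structures (not the nginx definitions): the payload struct starts exactly at the node's color field, so one allocation of offsetof(node, color) + offsetof(payload, data) + len bytes holds both.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* simplified stand-ins for ngx_rbtree_node_s and ngx_http_limit_req_node_t */
typedef struct {
    unsigned long  key;
    void          *left, *right, *parent;
    unsigned char  color;   /* the payload is overlaid starting here */
    unsigned char  data;
} node_t;

typedef struct {
    unsigned char color;
    unsigned char len;      /* payload data length */
    unsigned char data[1];  /* variable-length key content follows */
} payload_t;

/* total bytes needed for a node carrying `len` bytes of payload data */
size_t node_size(size_t len)
{
    return offsetof(node_t, color) + offsetof(payload_t, data) + len;
}
```

Given a node, the payload is `(payload_t *) &node->color`; given a payload, the node is recovered with `(node_t *) ((unsigned char *) lr - offsetof(node_t, color))`, which is exactly the cast nginx performs when deleting a record.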
3.2.2 The rate-limiting algorithm
As noted above, ngx_http_limit_req_handler is registered with the NGX_HTTP_PREACCESS_PHASE phase during postconfiguration.
So when an HTTP request is processed, ngx_http_limit_req_handler runs and decides whether the request must be limited.
3.2.2.1 Leaky bucket implementation
A user may configure several limits at once, so for each HTTP request nginx iterates over all limit policies and checks each one.
The ngx_http_limit_req_lookup method implements the leaky bucket algorithm and can return four results:
NGX_BUSY: the request rate exceeds the configured limit; reject the request.
NGX_AGAIN: the request passed this policy's check; continue with the next policy.
NGX_OK: the request passed all policy checks; proceed to the next phase.
NGX_ERROR: an error occurred.

```c
/* limit: the rate-limiting policy; hash: hash of the record key; data, len: the
   key content and its length; ep: out-parameter, number of pending requests;
   account: whether this is the last limiting policy */
static ngx_int_t
ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit, ngx_uint_t hash,
    u_char *data, size_t len, ngx_uint_t *ep, ngx_uint_t account)
{
    /* red-black tree lookup */
    while (node != sentinel) {

        if (hash < node->key) {
            node = node->left;
            continue;
        }

        if (hash > node->key) {
            node = node->right;
            continue;
        }

        /* equal hashes: compare the data content */
        lr = (ngx_http_limit_req_node_t *) &node->color;
        rc = ngx_memn2cmp(data, lr->data, len, (size_t) lr->len);

        if (rc == 0) { /* found */
            ngx_queue_remove(&lr->queue);
            ngx_queue_insert_head(&ctx->sh->queue, &lr->queue); /* move to LRU head */

            ms = (ngx_msec_int_t) (now - lr->last); /* time since the last visit */

            /* pending requests - rate * elapsed time + this request
               (rate and request counts are stored multiplied by 1000) */
            excess = lr->excess - ctx->rate * ngx_abs(ms) / 1000 + 1000;

            if (excess < 0) {
                excess = 0;
            }

            *ep = excess;

            /* pending count exceeds burst (queue size): reject with NGX_BUSY
               (burst is 0 when not configured) */
            if ((ngx_uint_t) excess > limit->burst) {
                return NGX_BUSY;
            }

            if (account) { /* last policy: update last visit time and pending count */
                lr->excess = excess;
                lr->last = now;
                return NGX_OK;
            }

            lr->count++; /* increase the reference count */
            ctx->node = lr;

            return NGX_AGAIN; /* not the last policy: check the next one */
        }

        node = (rc < 0) ? node->left : node->right;
    }

    /* no node found: create a new record */
    *ep = 0;

    /* size computation: see section 3.2.1 */
    size = offsetof(ngx_rbtree_node_t, color)
           + offsetof(ngx_http_limit_req_node_t, data)
           + len;

    ngx_http_limit_req_expire(ctx, 1); /* try to evict an old record (LRU) */

    node = ngx_slab_alloc_locked(ctx->shpool, size);

    if (node == NULL) { /* out of space: force eviction and retry */
        ngx_http_limit_req_expire(ctx, 0);

        node = ngx_slab_alloc_locked(ctx->shpool, size);
        if (node == NULL) { /* allocation failed again */
            return NGX_ERROR;
        }
    }

    node->key = hash;
    lr = (ngx_http_limit_req_node_t *) &node->color;
    lr->len = (u_char) len;
    lr->excess = 0;
    ngx_memcpy(lr->data, data, len);

    /* insert the record into the red-black tree and the LRU queue */
    ngx_rbtree_insert(&ctx->sh->rbtree, node);
    ngx_queue_insert_head(&ctx->sh->queue, &lr->queue);

    if (account) { /* last policy: update last visit time and pending count */
        lr->last = now;
        lr->count = 0;
        return NGX_OK;
    }

    lr->last = 0;
    lr->count = 1;
    ctx->node = lr;

    return NGX_AGAIN; /* not the last policy: check the next one */
}
```
For example, with burst configured as 0, the number of pending requests starts at excess and drains by one request every period T = 1000/rate milliseconds.
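The excess bookkeeping from the lookup code can be checked with a standalone sketch of the fixed-point arithmetic (illustrative helper, not nginx code; as in nginx, rates and request counts are scaled by 1000, so 1 r/s is rate = 1000 and one request adds 1000):

```c
#include <assert.h>

/* new pending count after `elapsed_ms` ms: previous pending requests, minus
   what drained at `rate`, plus the request that just arrived; clamped at 0 */
long update_excess(long excess, long rate, long elapsed_ms)
{
    long e = excess - rate * elapsed_ms / 1000 + 1000;
    return e < 0 ? 0 : e;
}
```

With rate = 1000 (1 r/s) and burst = 0, a second request in the same millisecond yields excess = 1000 > burst * 1000 = 0 and is rejected, while a request a full second later drains back to 0 and passes, matching the 1 qps behavior seen in section 4.1.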
3.2.2.2 LRU eviction strategy
The lookup code in the previous section calls ngx_http_limit_req_expire to evict records, always deleting from the tail of the LRU queue.
The second parameter n controls the behavior: when n is 0, the tail record is deleted unconditionally and up to two more records are then removed after checking that they have expired; when n is 1, up to two records are removed, each only after the expiry check. The code is as follows:
```c
static void
ngx_http_limit_req_expire(ngx_http_limit_req_ctx_t *ctx, ngx_uint_t n)
{
    /* delete at most 3 records */
    while (n < 3) {

        /* tail node of the LRU queue */
        q = ngx_queue_last(&ctx->sh->queue);

        /* recover the record from its queue link */
        lr = ngx_queue_data(q, ngx_http_limit_req_node_t, queue);

        /* when n is 0, the first iteration skips this block, so the tail node is
           deleted unconditionally; otherwise check whether deletion is allowed */
        if (n++ != 0) {

            ms = (ngx_msec_int_t) (now - lr->last);
            ms = ngx_abs(ms);

            /* accessed recently: must not delete, return */
            if (ms < 60000) {
                return;
            }

            /* still has pending requests: must not delete, return */
            excess = lr->excess - ctx->rate * ms / 1000;
            if (excess > 0) {
                return;
            }
        }

        /* delete the record */
        ngx_queue_remove(q);

        node = (ngx_rbtree_node_t *)
                   ((u_char *) lr - offsetof(ngx_rbtree_node_t, color));

        ngx_rbtree_delete(&ctx->sh->rbtree, node);
        ngx_slab_free_locked(ctx->shpool, node);
    }
}
```
3.2.2.3 burst implementation
burst exists to absorb traffic spikes: when an occasional burst arrives, the server should be allowed to handle extra requests.
With burst = 0, any request above the limit rate is rejected; with burst > 0, requests above the rate are queued for later processing instead of being rejected outright.
How is the queueing implemented? And nginx must also process the queued requests on schedule.
Section 2.2 showed that every event can carry a timer; nginx implements request queueing and timed processing by combining events with timers.
The ngx_http_limit_req_handler method contains the following code:

```c
/* compute how long this request must wait before it may be processed */
delay = ngx_http_limit_req_account(limits, n, &excess, &limit);

/* register the read event */
if (ngx_handle_read_event(r->connection->read, 0) != NGX_OK) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}

r->read_event_handler  = ngx_http_test_reading;
r->write_event_handler = ngx_http_limit_req_delay; /* write event handler */

/* add a timer to the write event: the response cannot be sent before it fires */
ngx_add_timer(r->connection->write, delay);
```
Computing the delay is straightforward: iterate over all limit policies, compute the time needed to drain the pending requests under each, and take the maximum:

```c
if (limits[n].nodelay) { /* with nodelay the request is not delayed: delay is 0 */
    continue;
}

delay = excess * 1000 / ctx->rate;

if (delay > max_delay) {
    max_delay = delay;
    *ep = excess;
    *limit = &limits[n];
}
```
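The formula delay = excess * 1000 / rate can be checked with a tiny standalone helper (an illustrative sketch, not nginx code; the same 1000x fixed-point scaling applies):

```c
#include <assert.h>

/* milliseconds a queued request must wait: pending work divided by drain rate */
long req_delay(long excess, long rate)
{
    return excess * 1000 / rate;
}
```

At 1 r/s (rate = 1000), the k-th queued request carries excess = k * 1000 and therefore waits k seconds, which is why the burst test in section 4.2 logs one completed request per second.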
Finally, a brief look at the write event handler ngx_http_limit_req_delay:

```c
static void
ngx_http_limit_req_delay(ngx_http_request_t *r)
{
    wev = r->connection->write;

    if (!wev->timedout) { /* timer has not fired yet: nothing to do */
        if (ngx_handle_write_event(wev, 0) != NGX_OK) {
            ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
        }
        return;
    }

    wev->timedout = 0;

    r->read_event_handler  = ngx_http_block_reading;
    r->write_event_handler = ngx_http_core_run_phases;

    /* the timer fired: resume processing the HTTP request */
    ngx_http_core_run_phases(r);
}
```

4. Hands-on testing
4.1 Testing the basic limit
1) Configure nginx with a rate limit of 1 r/s keyed on the client IP (the default status code for rejected requests is 503):

```nginx
http {
    limit_req_zone $binary_remote_addr zone=test:10m rate=1r/s;

    server {
        listen      80;
        server_name localhost;

        location / {
            limit_req zone=test;
            root  html;
            index index.html index.htm;
        }
    }
}
```
2) Issue several concurrent requests in quick succession.
3) The server access log shows that at second 22 three requests arrived and only one was processed; at second 23 two requests arrived, the first was processed and the second rejected:

```
xx.xx.xx.xxx - - [22/Sep/2018:23:33:22 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:33:22 +0800] "GET / HTTP/1.0" 503 537 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:33:22 +0800] "GET / HTTP/1.0" 503 537 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:33:23 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:33:23 +0800] "GET / HTTP/1.0" 503 537 "-" "ApacheBench/2.3"
```
4.2 Testing burst
1) At 1 r/s, requests above the rate are rejected outright. To absorb bursts, requests should instead be queued for processing, so configure burst=5, allowing at most 5 requests to queue:

```nginx
http {
    limit_req_zone $binary_remote_addr zone=test:10m rate=1r/s;

    server {
        listen      80;
        server_name localhost;

        location / {
            limit_req zone=test burst=5;
            root  html;
            index index.html index.htm;
        }
    }
}
```
2) Issue 10 concurrent requests with ab: ab -n 10 -c 10 http://xxxxx
3) The access log shows one request answered with 200 immediately, four with 503, and five more with 200 at one-second intervals. Why this pattern?
ngx_http_log_module registers its handler with the NGX_HTTP_LOG_PHASE phase (the last phase of request processing), so a request is only logged once it finishes.
So what actually happened is: all 10 requests arrive at once; the 1st is processed immediately; the 2nd through 6th are queued for delayed processing; the 7th through 10th exceed the queue and are rejected outright, which is why their log lines appear first.
The queued requests are then processed at one per second and logged on completion, one per second from second 49 through 53:
```
xx.xx.xx.xxx - - [22/Sep/2018:23:41:48 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:41:48 +0800] "GET / HTTP/1.0" 503 537 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:41:48 +0800] "GET / HTTP/1.0" 503 537 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:41:48 +0800] "GET / HTTP/1.0" 503 537 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:41:48 +0800] "GET / HTTP/1.0" 503 537 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:41:49 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:41:50 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:41:51 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:41:52 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [22/Sep/2018:23:41:53 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
```
4) ab reports a minimum response time of 87 ms, a maximum of 5128 ms, and a mean of 1609 ms:

```
              min  mean[+/-sd] median   max
Connect:       41    44    1.7     44     46
Processing:    46  1566 1916.6   1093   5084
Waiting:       46  1565 1916.7   1092   5084
Total:         87  1609 1916.2   1135   5128
```
4.3 Testing nodelay
1) Section 4.2 showed that with burst configured, burst requests are queued rather than rejected, but the response time grows so long that the client may already have timed out. Adding nodelay makes nginx process the queued requests immediately, reducing the response time:

```nginx
http {
    limit_req_zone $binary_remote_addr zone=test:10m rate=1r/s;

    server {
        listen      80;
        server_name localhost;

        location / {
            limit_req zone=test burst=5 nodelay;
            root  html;
            index index.html index.htm;
        }
    }
}
```
2) Issue 10 concurrent requests with ab: ab -n 10 -c 10 http://xxxx/
3) The access log shows the 1st request processed directly, the 2nd through 6th processed immediately from the queue (thanks to nodelay), and the 7th through 10th rejected:
```
xx.xx.xx.xxx - - [23/Sep/2018:00:04:47 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [23/Sep/2018:00:04:47 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [23/Sep/2018:00:04:47 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [23/Sep/2018:00:04:47 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [23/Sep/2018:00:04:47 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [23/Sep/2018:00:04:47 +0800] "GET / HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [23/Sep/2018:00:04:47 +0800] "GET / HTTP/1.0" 503 537 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [23/Sep/2018:00:04:47 +0800] "GET / HTTP/1.0" 503 537 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [23/Sep/2018:00:04:47 +0800] "GET / HTTP/1.0" 503 537 "-" "ApacheBench/2.3"
xx.xx.xx.xxx - - [23/Sep/2018:00:04:47 +0800] "GET / HTTP/1.0" 503 537 "-" "ApacheBench/2.3"
```
4) ab reports a minimum response time of 85 ms, a maximum of 92 ms, and a mean of 88 ms:

```
              min  mean[+/-sd] median   max
Connect:       42    43    0.5     43     43
Processing:    43    46    2.4     47     49
Waiting:       42    45    2.5     46     49
Total:         85    88    2.8     90     92
```

Summary
This article first reviewed the common rate-limiting algorithms (leaky bucket and token bucket), briefly introduced how nginx processes HTTP requests and implements timed events, then analyzed the core data structures and limiting flow of the ngx_http_limit_req_module module in detail, and finished with hands-on examples illustrating nginx's rate-limiting configuration and behavior. The other module, ngx_http_limit_conn_module, limits the number of concurrent connections and is easier to understand, so it is not covered in detail here.
That is the whole of this article; I hope it helps your study.