How Does the Nginx Reverse Proxy Support Persistent Connections?
This article walks through how an Nginx reverse proxy can be made to use long (persistent) connections to its backend servers. The material is detailed and the reasoning is laid out step by step; since many people are not very familiar with this topic, the article is shared here for reference, and I hope you get something out of it.
Preface
By default, nginx's upstream connections to the backend are short-lived: nginx connects to the backend using HTTP/1.0 and sets the request's "Connection" header to "close". The connection between nginx and the client, by contrast, is persistent by default, so once a user has established a connection with nginx, many requests can be sent over it. When nginx is used purely as a reverse proxy, a single client connection may therefore turn into many short-lived connections to the backend. If the backend server (origin server or cache server) is not good at handling large numbers of concurrent connections, this can become a bottleneck.
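As a point of reference before the full configuration later in the article, the minimal sketch below shows the directives that switch the backward connections from HTTP/1.0 short connections to HTTP/1.1 keepalive connections. The upstream name demo_backend and port 8080 are made up for illustration, and the keepalive directive is assumed to be available in your nginx build:

    upstream demo_backend {
        server 127.0.0.1:8080;          # hypothetical backend
        keepalive 16;                   # keep up to 16 idle connections per worker
    }

    server {
        listen 80;
        location / {
            proxy_http_version 1.1;          # speak HTTP/1.1 to the upstream
            proxy_set_header Connection "";  # clear the default "Connection: close"
            proxy_pass http://demo_backend;
        }
    }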
Nginx's current mechanism for establishing and obtaining upstream connections works as follows: at startup, each worker process creates its own connection pool (pools are not shared between processes, which avoids locking), and that pool serves all forward (client-side) and backward (upstream-side) connections.
To support upstream persistent connections, each worker process needs an additional pool that holds only keepalive connections. Once a connection to a backend server has been established, it is not closed when the current request finishes; instead the used connection is saved in this keepalive pool. Whenever a backward connection is needed later, the pool is searched first, and if a suitable connection is found it is reused directly, with no need to create a new socket or call connect(). This saves the three-way handshake on every request and also avoids TCP slow start on a fresh connection. If no suitable connection is found in the keepalive pool, a connection is established the original way. Assuming the lookup time is negligible, this approach is all gain and no loss (apart from a small amount of extra memory).
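For readers on a modern nginx: the keepalive directive described here was merged into the nginx core in version 1.1.4, and since 1.15.3 the upstream block also accepts keepalive_requests and keepalive_timeout to bound how long a pooled connection is reused. A hedged example (the backend address is made up):

    upstream demo_backend {
        server 10.0.0.2:8080;        # hypothetical backend
        keepalive 32;                # at most 32 idle connections cached per worker
        keepalive_requests 1000;     # recycle a connection after 1000 requests (1.15.3+)
        keepalive_timeout 60s;       # drop idle pooled connections after 60s (1.15.3+)
    }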
How this keepalive pool should be designed is a matter of choice. For example, nginx's current third-party upstream keepalive module (by Maxim Dounin) uses a queue. Since an upstream may contain several servers, lookups could become slow when a large number of connections are kept, so each upstream server can be given its own pool (queue) to shorten the search; in practice, though, these are in-memory operations and very fast, so the impact is small. At the time of writing the upstream keepalive module only supports memcached, but its code can be reused to implement persistent connections to HTTP upstreams. Because the nginx author did not originally design upstream with persistent connections in mind, it is hard to package HTTP upstream keepalive as a clean module; the code has to be modified by hand.
A complete example configuration that enables persistent connections to upstream servers follows:
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    client_max_body_size 20m;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    proxy_buffer_size 64k;
    proxy_buffers 32 32k;
    proxy_busy_buffers_size 128k;

    upstream aaucfg_backend {
        server 127.0.0.1:97;
        keepalive 16;
    }

    upstream hfc_backend {
        server 127.0.0.1:8090;
        keepalive 16;
    }

    upstream manager_backend {
        server 127.0.0.1:8095;
        keepalive 16;
    }

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        root   html/tools;
        index  index.html index.htm index.php;

        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        location / {
            if (!-e $request_filename) {
                #rewrite ^/(.*)$ /index.php/$1 last;
                #break;
                rewrite ^/(.*)$ /index.php/$1;
            }
        }

        location ~* \.(ico|css|js|gif|png)(\?[0-9]+)?$ {
            expires max;
            log_not_found off;
        }

        location ^~ /aaucfg/ {
            #proxy_pass http://$remote_addr:97$request_uri;
            proxy_pass http://aaucfg_backend;
        }

        location ^~ /hfc/ {
            #proxy_pass http://$remote_addr:8090$request_uri;
            proxy_pass http://hfc_backend;
        }

        location ^~ /manager/ {
            #proxy_pass http://$remote_addr:8095$request_uri;
            proxy_pass http://manager_backend;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        location ~ \.php {
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi.conf;
            include        fastcgi_params;

            # define the variable $path_info to store the PATH_INFO part
            set $path_info "";
            # define the variable $real_script_name to hold the real script path
            set $real_script_name $fastcgi_script_name;
            # if the request matches "something.php/extra/path"
            if ($fastcgi_script_name ~ "^(.+?\.php)(/.+)$") {
                # assign the script path to $real_script_name
                set $real_script_name $1;
                # assign the trailing part to $path_info
                set $path_info $2;
            }
            # set the FastCGI parameters from the split values
            fastcgi_param  SCRIPT_FILENAME  $document_root$real_script_name;
            fastcgi_param  SCRIPT_NAME      $real_script_name;
            fastcgi_param  PATH_INFO        $path_info;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}

That is all the content of the article "How does the Nginx reverse proxy support persistent connections?". Thank you for reading; I hope you found it useful.