How to optimize the performance of Nginx under High concurrency

2025-04-06 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

This article introduces how to optimize Nginx performance under high concurrency. Many people run into this problem in real-world operation, so let this guide walk you through handling it. I hope you read it carefully and get something out of it!

Nginx, like Apache, is a web server. Based on the REST architectural style, it uses Uniform Resource Identifiers (URIs) or Uniform Resource Locators (URLs) as the basis for communication and provides various network services over the HTTP protocol.

Apache has a long history and is the undisputed most widely deployed web server in the world. It has many advantages: stable, open source, cross-platform, and so on. But it has been around for a long time, and when it rose, the Internet industry was far smaller than it is now, so it was designed as a heavyweight server that does not handle high concurrency well. Running tens of thousands of concurrent connections on Apache makes the server consume a great deal of memory, and the operating system also burns considerable CPU switching between processes or threads, lowering the average response speed of HTTP requests.

All of this means Apache cannot become a high-performance web server, and the lightweight, high-concurrency server Nginx emerged to fill that gap.

Characteristics

A high-performance HTTP and reverse proxy web server that is lightweight and also provides IMAP/POP3/SMTP services

First public version 0.1.0 released on October 4, 2004; stable version 1.4.0 released on April 24, 2013

Written in C and cross-platform

Ships with its own library; apart from zlib, PCRE and OpenSSL, the standard modules use only system C library functions

Advantages

Low memory footprint (at 30,000 concurrent connections, 10 nginx processes consume about 150 MB of memory)

High concurrency (official tests support 50,000 concurrent connections; 20,000-30,000 are reachable in real production environments)

Simple, easy-to-understand configuration files

Free and open source

Rewrite support (HTTP requests can be routed to different backend server groups by domain name or URL)

Built-in health checks (if some backend services behind nginx go down, front-end access is unaffected; service status is detected automatically)

Bandwidth savings (supports GZIP compression and can add Header fields that let the browser cache locally)

High stability: as a reverse proxy, nginx rarely goes down. Well-known sites in mainland China that use nginx include Baidu, JD.com, Sina, NetEase, Tencent, Taobao and so on.

Functions: lightweight web server; load balancing; caching; high concurrency

Application scenarios: proxy server; IP load balancing and static load balancing; static and dynamic separation; rate limiting; health monitoring

Installation and command

Installation:

sudo apt-get install nginx

View version

nginx -v
nginx version: nginx/1.18.0 (Ubuntu)

Basic command

# location of the nginx default page (Welcome to nginx)
vi /usr/share/nginx/html/index.html
# access by IP
curl 192.168.100.11
# stop the nginx process
nginx -s stop
# start the nginx process (an nginx installed via yum can also use: service nginx start)
/usr/sbin/nginx
# check the configuration file for errors
nginx -t
# reload the configuration file
nginx -s reload
# follow a log file (shows the last 10 lines by default and keeps refreshing)
tail -f /var/log/lvs-agent.log
# show the last N lines of a file
tail -n 100 /var/log/aa.log
# delete an IP from a network card (lo, eth2, eth3, ...)
ip addr del 192.168.11.5/32 dev lo
# lvs: clear all cluster services
ipvsadm -C
# get the IP bound to the eth0 network card
ip a | grep eth0 | grep inet | awk '{print $2}' | cut -d "/" -f 1
# verify that an IP or URL is reachable (returns 200)
curl -I -m 10 -o /dev/null -s -w %{http_code} 10.110.26.10
# launch a jar package in the background
nohup java -jar /usr/sbin/<project name>.jar >> /var/log/<project name>.log 2>&1 &
# check whether the previous command succeeded: returns 0 on success, anything else on failure
echo $?
# check whether the nginx process is started. This is used in code to judge whether
# nginx is running: a bare "ps aux | grep nginx" matches its own grep even when
# nginx is down, which would mislead the judgment
ps aux | grep nginx | grep -v grep

Configuration file

nginx.conf

# nginx.conf
# global configuration area
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

# network event configuration area
events {
    worker_connections 768;
    # multi_accept on;
}

# HTTP module
http {
    # HTTP global settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # SSL settings
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    # log settings
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # resource compression settings
    gzip on;   # enabled by default
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    # virtual host configuration
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

/etc/nginx/sites-enabled/*

server {
    # virtual host configuration
    listen 80 default_server;        # listening port
    listen [::]:80 default_server;

    # SSL configuration
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: you should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    # Self-signed certs generated by the ssl-cert package:
    # don't use them in a production server!
    # include snippets/snakeoil.conf;

    root /var/www/html;              # data cache location

    # add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;                   # domain name; there can be several

    location / {
        # First attempt to serve request as file, then as directory,
        # then fall back to displaying a 404.
        # uwsgi_pass 127.0.0.1:8000;
        # include /etc/nginx/uwsgi_params;
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to a FastCGI server
    # location ~ \.php$ {
    #     include snippets/fastcgi-php.conf;
    #     # with php-fpm (or other unix sockets):
    #     fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
    #     # with php-cgi (or other tcp sockets):
    #     # fastcgi_pass 127.0.0.1:9000;
    # }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    # location ~ /\.ht {
    #     deny all;
    # }
}

Proxy modes and configuring a reverse proxy

Forward proxy (forward proxy):

A server (proxy server) located between the client (user A) and the origin server (the target server). To obtain content from the origin server, the client sends a request to the proxy server and names the target (the origin server); the proxy server then forwards the request to the origin server and returns the obtained content to the client. The client must be specially configured to use a forward proxy. In general, when "proxy" is not qualified, it refers to a forward proxy.

It is like a middleman: the client does not talk to the actual server directly, and the client knows it is going through an intermediary.

Reverse proxy (reverse proxy):

In contrast to a forward proxy, a reverse proxy appears to the client as the origin server, and the client needs no special settings. The client sends an ordinary request for content in the reverse proxy's namespace (name-space); the proxy decides where to forward the request (the origin server) and returns the obtained content to the client as if it were its own.

It is like a house broker: to the person selling a house it acts as a buyer, and to the person buying a house it acts as a seller.

The client does not know it is talking to a proxy server, and the backend server sees the proxy as just another client, not as a proxy.

Transparent proxy:

A transparent proxy means the client does not need to know that a proxy server exists at all: the proxy adapts the request fields (the message) and passes on the real IP. Note that an encrypting transparent proxy is an anonymous proxy, meaning there is no need to configure the use of a proxy. A practical example of transparent proxying is the behavior management software used by many companies today.

# reverse proxying is not enabled by default
# the upstream block defines a load-balanced group of backend servers
upstream localhost {
    # the actual backend server
    server 192.168.136.133:8081;
}
server {
    listen 80;               # proxy server port
    server_name localhost;
    location / {
        # forwards the request to one of the actual servers
        proxy_pass http://localhost;
    }
}

Load balancing methods:

Round robin (the default)

Weighted round robin (weight)

fair

url_hash

Source address hash (ip_hash)

Least connections (least_conn)
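The methods above are selected inside upstream blocks. A sketch of three of them, reusing the backend address from this article's examples (the pool names and the second address are illustrative):

```nginx
# weighted round robin: the weight=2 server receives roughly twice the requests
upstream weighted_pool {
    server 192.168.136.133:8080 weight=2;
    server 192.168.136.134:8080 weight=1;
}

# source address hash: requests from one client IP stick to one backend
upstream hashed_pool {
    ip_hash;
    server 192.168.136.133:8080;
    server 192.168.136.134:8080;
}

# least connections: new requests go to the backend with the fewest active ones
upstream least_pool {
    least_conn;
    server 192.168.136.133:8080;
    server 192.168.136.134:8080;
}
```

A `proxy_pass http://weighted_pool;` inside a location block would then route through the chosen pool.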

Dynamic and static separation

Nginx dynamic/static separation simply means handling dynamic and static requests separately; it should not be understood as merely physically separating dynamic pages from static pages.

Strictly speaking, dynamic requests should be separated from static requests, which can be understood as using Nginx to deal with static pages and Tomcat to deal with dynamic pages.

From the point of view of current implementation, dynamic and static separation can be roughly divided into two types:

One is to simply split static files onto a separate domain name and host them on separate servers, which is the currently popular scheme.

The other is to deploy dynamic and static files together and separate them through nginx.

mkdir static    # stores static files

server {
    # virtual host configuration
    listen 80 default_server;        # listening port
    listen [::]:80 default_server;

    root /var/www/html;              # data cache location

    # add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;                   # domain name; there can be several

    location / {
        # First attempt to serve request as file, then as directory,
        # then fall back to displaying a 404.
        # uwsgi_pass 127.0.0.1:8000;
        # include /etc/nginx/uwsgi_params;
        try_files $uri $uri/ =404;
    }

    # requests with these file suffixes are looked up here
    location ~ .*\.(html|htm|gif|jpeg|bmp|png|ico|txt|js|css)$ {
        root /static;
        expires 30d;                 # cache validity
    }
}

Log management

Log format

The logs/access.log file generated under the Nginx root directory uses the "main" log format by default; the format can also be customized.

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

$remote_addr: the client's IP address (behind a proxy, the proxy's IP is shown)
$remote_user: the remote client's user name (usually "-")
$time_local: the access time and time zone
$request: the request URL and method
$status: the response status code, e.g. 200 success, 404 page not found
$body_bytes_sent: the number of body bytes sent to the client
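As a quick sanity check of the "main" format, the status code and body size can be pulled out with awk, which this article already uses elsewhere (the sample log line below is invented):

```shell
# A sample access-log line in the "main" format (invented for illustration)
line='192.168.100.11 - - [06/Apr/2025:12:00:00 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" "-"'

# Splitting on whitespace, field 9 is $status and field 10 is $body_bytes_sent
echo "$line" | awk '{print "status:", $9, "bytes:", $10}'
# prints: status: 200 bytes: 612
```

The same one-liner works on a real /var/log/nginx/access.log as long as the "main" format is in use.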

Log cutting

Nginx does not rotate its log files by itself.

If we want a fresh log generated every day, we can write an nginx log-cutting script to rotate the log files automatically.

The first step is to rename the log file. (Don't worry about nginx losing log entries because it can't find the renamed file: until the file with the original name is reopened, nginx keeps writing to the renamed file, since Linux locates a file by its file descriptor rather than by its name.)

The second step is to send the USR1 signal to the nginx main process

After receiving the signal, the nginx master process reads the log file name from the configuration file, reopens the log file (under the name given in the configuration) and makes the worker-process user the owner of the new file. The master process then closes the log file of the same name and notifies the worker processes to use the newly opened one. The workers immediately switch to the new log file and close the renamed file, which is then free for further processing. [Alternatively, restart the nginx service.]

The auto-cutting script for nginx logs per minute is as follows:

Create a new shell script
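A minimal sketch of such a script (the paths are assumptions; adjust the log directory and pid file to your installation). It follows exactly the two steps described above: rename, then send USR1 so the master process reopens the log:

```shell
#!/bin/bash
# rotate_nginx_log: rename the current access log, then ask the nginx
# master process to reopen a fresh one under the original name.
rotate_nginx_log() {
    local log_dir=$1 pid_file=$2
    # Step 1: rename; nginx keeps writing to the renamed file because the
    # kernel tracks it by file descriptor, not by name.
    mv "$log_dir/access.log" "$log_dir/access.log.$(date +%Y%m%d%H%M)"
    # Step 2: USR1 makes the master reopen access.log and tells the
    # workers to switch to it. Ignore the error if nginx is not running.
    kill -USR1 "$(cat "$pid_file")" 2>/dev/null || true
}

# typical invocation (paths are illustrative):
# rotate_nginx_log /var/log/nginx /run/nginx.pid
```

To cut the log every minute, a crontab entry such as `* * * * * /usr/local/bin/cut_nginx_log.sh` could call it (the script path is illustrative).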

High concurrency architecture analysis

What is high concurrency?

High concurrency (High Concurrency) is one of the factors that must be considered when designing an Internet distributed system architecture. It usually means that the system can serve many requests in parallel at the same time.

Commonly used indicators of high concurrency include response time (Response Time), throughput (Throughput), queries per second (QPS, Query Per Second), number of concurrent users, and so on.

Response time: the time the system takes to respond to a request

Throughput: the number of requests processed per unit time

QPS: the number of requests answered per second
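As a rough illustration of how these indicators relate (a rule-of-thumb relation, not a figure from the article): in steady state, QPS ≈ concurrent users / average response time. For example:

```shell
# 1000 concurrent users, each request taking 0.2 s on average,
# can sustain roughly 1000 / 0.2 = 5000 requests per second
awk 'BEGIN { users = 1000; rt = 0.2; printf "%.0f QPS\n", users / rt }'
# prints: 5000 QPS
```

The same arithmetic run the other way gives a capacity target: to serve 5000 QPS at 0.2 s per request, the system must hold about 1000 requests in flight.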

How to improve the concurrency ability of the system?

In the design of Internet distributed architectures, there are two main ways to improve a system's concurrency: vertical scaling (Scale Up) and horizontal scaling (Scale Out).

Vertical scaling: improve the processing capacity of a single machine. There are two ways to scale vertically:

Enhance the performance of stand-alone hardware

Improve the performance of stand-alone architecture

In the early days of a fast-growing Internet business, if budget is not a problem, it is strongly recommended to improve concurrency by "enhancing single-machine hardware performance": at this stage the company's strategy is usually to grow the business and buy time, and better hardware is often the fastest route.

Whether one improves single-machine hardware performance or single-machine architecture performance, there is a fatal shortcoming: single-machine performance always has a limit. So the ultimate solution for high concurrency in Internet distributed architecture design is still horizontal scaling.

Horizontal scaling: system performance can be expanded linearly simply by adding more servers.

Every server has its limits.

Nginx rate limiting (current limiting) can be implemented in three ways:

limit_conn_zone (restrict connection flow)

limit_req_zone (restrict request flow)

ngx_http_upstream_module (backend service restrictions)

A quick aside here: installing the stress-testing tool.

yum install httpd-tools -y

Meaning of the ab output fields: Document Path is the tested page; Document Length is the page size; Concurrency Level is the number of concurrent users; Time taken for tests is the total test time; Complete requests is the total number of requests; Failed requests is the number of failed requests; Write errors is the number of errors; Requests per second is the throughput; Time per request is the time each request takes.

Limit connection flow:

http {
    # $binary_remote_addr: the client IP in binary form
    # zone=one:10m requests a 10 MB zone named "one" to store per-IP state
    limit_conn_zone $binary_remote_addr zone=one:10m;
    server {
        # let nginx handle at most 10 connections at a time per IP in zone "one"
        limit_conn one 10;
    }
}

Limit request flow (rate limiting):

http {
    # rate: each client connection may issue one request per second
    limit_req_zone $binary_remote_addr zone=req_one:10m rate=1r/s;
    server {
        # burst: a bucket of extra tokens, consumed one at a time;
        # beyond it nginx returns a 503 error
        limit_req zone=req_one burst=120;
    }
}

Backend service limit

This module provides the back-end current-limiting function we need.

The module has one parameter, max_conns, which can limit the load on a backend server; it used to be available only in the commercial version of nginx.

Since nginx version 1.11.5, this parameter has been separated from the commercial version; we only need to upgrade the 1.9.12 and 1.10 versions widely used in production.

upstream backend {
    # max_conns: the maximum number of connections the server accepts
    server 127.0.0.1:8080 max_conns=100;
}

Security configuration

Version security

http {
    server_tokens off;   # hide the nginx version in responses
}

IP security

http {
    location / {
        allow 127.0.0.1;   # whitelist
        deny all;          # blacklist
    }
}

File security

http {
    location /logs {
        autoindex on;      # show the directory listing
        root /static;
    }
    location ~ ^/logs.*\.(log|txt)$ {
        add_header Content-Type text/plain;
        root /static;
    }
}

Connection security

http {
    # SSL settings
}

Nginx optimization

Adjust the main configuration file of Nginx to increase concurrency

worker_processes 2;    # adjust to match the number of CPU cores
events {
    # maximum number of concurrent connections per worker
    worker_connections 65535;
}

Nginx persistent connections: since HTTP/1.1, the HTTP protocol supports keep-alive, i.e. persistent connections, whose advantage is that multiple HTTP requests and responses can be sent over a single TCP connection.

Nginx long connection and short connection can enhance the disaster recovery ability of the server.

Persistent connections reduce the cost and latency of establishing and closing connections. If nginx is used as a reverse proxy or load balancer, persistent connections from the client are converted into short connections before being sent to the backend server; to support persistent connections end to end, some configuration is needed on the nginx server.

When using nginx, to achieve a long connection, we must do the following two things:

From client to nginx is a long connection (events)

From nginx to server is a long connection (http)

Toward the client, nginx plays the role of a server; conversely, toward the backend server, nginx itself acts as a client.

events {
    # keepalive timeout, 60 s by default. Keep in mind that this parameter
    # must not be set too large: too many idle HTTP connections would
    # occupy nginx's connection slots and could eventually crash it
    keepalive_timeout 60;
}
http {
    keepalive_timeout 60;
}
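For the nginx-to-server side, the stock upstream module also needs HTTP/1.1 and a cleared Connection header before it will keep backend connections alive. A sketch (the upstream name and address are illustrative):

```nginx
upstream backend {
    server 127.0.0.1:8080;
    # keep up to 32 idle connections to the backends per worker process
    keepalive 32;
}

server {
    location / {
        proxy_pass http://backend;
        # both settings are required for upstream keepalive to take effect
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```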

Nginx compression

Gzip compression: responses can be compressed before being sent to the client, which saves bandwidth and speeds up delivery to the client, at the cost of some nginx CPU.

Gzip compression can be configured in the http, server and location blocks.

http {
    # gzip module settings
    gzip on;    # enable compression

    # Minimum number of page bytes to compress, taken from Content-Length
    # in the response header. The default is 0 (compress regardless of
    # size). Setting it above 2k is recommended; compressing smaller
    # pages may increase the load.
    gzip_min_length 2k;

    # Number and size of the buffers that store the gzip result stream.
    # "4 16k" requests 4 buffers of 16k each; if unset, the default is a
    # buffer the same size as the original data.
    gzip_buffers 4 16k;

    # Compression level 1-9: the higher the number, the better the
    # compression and the more CPU time it takes
    gzip_comp_level 5;

    # MIME types to compress (see conf/mime.types). text/html is always
    # compressed by default, wildcards such as text/* are not allowed,
    # and js/css files are not compressed unless listed here.
    gzip_types text/plain application/x-javascript text/css application/xml;

    # Whether to compress HTTP/1.0 responses; with 1.0, both 1.0 and 1.1
    # requests can be compressed
    gzip_http_version 1.0;

    # Disable compression for IE6 and below
    gzip_disable "MSIE [1-6]\.";

    # When nginx acts as a reverse proxy, decide whether to compress
    # results returned by the backend; the backend must send a "Via"
    # header for matching to occur.
    #   off              - compress no proxied responses (the default)
    #   expired          - compress if the header contains "Expires"
    #   no-cache         - compress if "Cache-Control: no-cache" is present
    #   no-store         - compress if "Cache-Control: no-store" is present
    #   private          - compress if "Cache-Control: private" is present
    #   no_last_modified - compress if "Last-Modified" is absent
    #   no_etag          - compress if "ETag" is absent
    #   auth             - compress if "Authorization" is present
    #   any              - compress unconditionally
    gzip_proxied expired no-cache no-store private auth;

    # For CDNs and proxy servers: return compressed or uncompressed
    # copies of the same URL depending on the request header
    gzip_vary on;
}

Status monitoring

server {
    location /NginxStatus {
        stub_status on;
        access_log off;
    }
}

Plug-in installation

./configure --prefix=... --with-http_stub_status_module
