Nginx Performance Optimization Methods

This article introduces practical methods for Nginx performance optimization. The content is detailed and easy to follow, the steps are straightforward, and it should serve as a useful reference. Let's take a look.
Linux system parameter optimization
Some of the settings below require a reasonably recent Linux kernel. The author uses CentOS 7.4 with kernel 3.10; if your kernel does not support a given option, it is best to upgrade rather than backport patches. At the system level we typically tune the file descriptor limits, the TCP connection queue lengths, and the range of temporary (ephemeral) ports.
File descriptor limit
Since each TCP connection occupies a file descriptor, new connections will fail with errors such as "Too many open files" once file descriptors are exhausted. To improve this we need to raise two limits. System-level limit: edit the file /etc/sysctl.conf and add the following:
fs.file-max = 10000000
fs.nr_open = 10000000
User-level limit: edit the file /etc/security/limits.conf and add the following:
* hard nofile 1000000
* soft nofile 1000000
Just make sure the user-level limit is not greater than the system-level limit, otherwise you may find yourself unable to log in via SSH. After the modification, execute the following command:
$ sysctl -p
You can check whether the user-level change has taken effect by running ulimit -a in a new login session.
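As an additional check, the limits that actually apply to a running Nginx process can be read from /proc. This is only a quick sketch; it assumes Nginx is already running and that its master PID is recorded in /var/run/nginx.pid (the path may differ on your system):

# read the effective open-file limit of the nginx master process
# (/var/run/nginx.pid is an assumption; adjust to your pid file location)
$ grep "Max open files" /proc/$(cat /var/run/nginx.pid)/limits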
TCP connection queue length
Edit the file /etc/sysctl.conf and add the following:
# The length of the SYN queue
net.ipv4.tcp_max_syn_backlog = 65535
# The length of the TCP accept queue
net.core.somaxconn = 65535
tcp_max_syn_backlog specifies the length of the half-open (SYN) queue. When a new connection arrives and this queue is full, the SYN request cannot be processed, and the drop is reflected in the ListenOverflows and ListenDrops counters in /proc/net/netstat. somaxconn specifies the length of the fully established (accept) queue. When that queue is full, the ACK packets sent by the client are not processed correctly and the client sees a "connection reset by peer" error; Nginx, when proxying, records the error "no live upstreams while connecting to upstreams" in its log. If these errors occur, consider increasing both settings.
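To see whether either queue is actually overflowing, one simple approach (a sketch only; counter names vary slightly between kernel versions) is to look at the listen-related statistics and at the Send-Q column of listening sockets:

# cumulative overflow/drop counters since boot
$ netstat -s | grep -i listen
# for sockets in LISTEN state, Send-Q shows the configured accept-queue (backlog) limit
$ ss -lnt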
Temporary port
Because Nginx acts as a proxy, each TCP connection to an upstream Web service occupies a temporary (ephemeral) port, so we need to enlarge the ip_local_port_range. Edit the file /etc/sysctl.conf and add the following:
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.ip_local_reserved_ports = 8080,8081,9000-9010
The parameter ip_local_reserved_ports reserves specific ports so that they are never handed out as ephemeral ports; without it, a local service might fail to start because its listening port is already occupied by an outgoing connection.
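To get a feel for how close you are to exhausting ephemeral ports (a quick sketch, assuming the proxy connects to local upstreams), verify the configured range and count connections by state; a large number of TIME_WAIT sockets is a typical sign of port pressure:

# confirm the ephemeral port range currently in effect
$ sysctl net.ipv4.ip_local_port_range
# count TCP connections grouped by state (TIME_WAIT, ESTABLISHED, ...)
$ ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn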
Nginx parameter optimization
Nginx parameter optimization centres on the main configuration file, nginx.conf; the relevant directives are discussed one by one below.
Worker processes
One important reason for Nginx's strong performance is its multi-process, non-blocking I/O model, so we should make good use of it:
worker_processes: by default Nginx runs only one master process and one worker process. We should change this, either to an explicit number or to auto, which means one worker per CPU core. Too many workers makes the processes compete for CPU and causes unnecessary context switches, so here we set it to the number of CPU cores: worker_processes auto;
worker_connections: the number of concurrent connections each worker can handle. The default value of 512 is too small; increase it appropriately: worker_connections 4096;
Nginx supports the following I/O multiplexing methods: select, poll, kqueue, epoll, rtsig, /dev/poll, eventport. They target different operating systems, and epoll is the most efficient one on Linux: use epoll;
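Putting the three directives above together, a minimal top-level sketch of nginx.conf (values taken from the discussion above) might look like this:

worker_processes auto;

events {
    use epoll;
    worker_connections 4096;
}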
KeepAlive
To avoid frequently establishing and tearing down connections between Nginx and the upstream Web services, we can enable the keepalive (persistent connection) feature available since HTTP/1.1. This greatly reduces CPU and network overhead, and in our case it was the single biggest performance improvement. keepalive must be used together with proxy_http_version and proxy_set_header: upstream keepalive requires HTTP/1.1, and the Connection header must be cleared so that a "close" is not forwarded to the upstream. The reference configuration is as follows:
upstream BACKEND {
    keepalive 300;
    server 127.0.0.1:8081;
}

server {
    listen 8080;

    location / {
        proxy_pass http://BACKEND;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
The keepalive value is neither a timeout nor the size of a connection pool. The official documentation explains it as follows:
The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed.
In other words, it is the maximum number of idle keepalive connections; idle connections beyond this number are closed. When the request rate is steady and smooth, the number of idle connections stays very small (close to zero). In reality, traffic is never perfectly smooth, and when the request rate fluctuates, the number of idle connections fluctuates as well:
When the number of idle persistent connections exceeds the configured value, the excess connections are closed; when there are not enough connections, new ones are established.
Therefore, if this value is too small, connections will constantly be closed and re-established, defeating the purpose of the pool. To avoid this, adjust the value to your actual traffic. In our case the target was about 6,000 QPS with a response time of roughly 200 ms, so the number of in-flight upstream connections is about 6000 × 0.2 s ≈ 1200, and keepalive only needs to cover the idle portion, roughly 10% of that. If you would rather skip the calculation, setting it to something like 1000 is a reasonable choice.
Access-Log caching
Writing log entries to disk is relatively expensive, but Nginx supports buffering access-log writes. We can use this feature to reduce how often the log file is written and thus improve performance. The buffer and flush parameters together control the caching behaviour:
access_log /var/logs/nginx-access.log buffer=64k gzip flush=1m;
buffer sets the buffer size: when the buffered data reaches the size given by buffer, Nginx writes it to the file. flush sets the buffer timeout: when the time given by flush elapses, the buffered log entries are likewise flushed to the file.
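As a quick sanity check (a sketch only, assuming the log path above and some steady test traffic), you can watch the log file size: with buffering enabled it grows in bursts at buffer-full or flush time rather than on every request:

# re-run wc every 10 seconds and observe the file growing in steps
$ watch -n 10 wc -c /var/logs/nginx-access.log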
File descriptor limit
Nginx has a corresponding configuration directive, worker_rlimit_nofile. In theory this should be the value from /etc/security/limits.conf divided by worker_processes, but in practice connections are never distributed perfectly evenly across workers, so it can simply be set to the same value as in /etc/security/limits.conf:
worker_rlimit_nofile 1000000;

This concludes the article on Nginx performance optimization methods. Thank you for reading; I hope it has given you a solid starting point for tuning your own setup.