2025-04-06 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report
This article introduces how to tune Nginx and Linux for performance. Many people run into difficulties with these settings in practice, so this walkthrough covers the common cases. I hope you read it carefully and come away with something useful!
Nginx, known for its high-performance load balancing, caching, and web serving, powers roughly 40 percent of the world's busiest websites. In most scenarios the default Nginx and Linux configurations perform well, but some tuning is still needed to achieve optimal performance.
The Nginx professional services team has worked with some of the world's busiest websites to tune Nginx to the limits of its performance, and supports any customer who needs to get the most out of their systems.
Brief introduction
It is assumed that the reader has a basic understanding of Nginx architecture and configuration concepts. Rather than repeating the Nginx documentation, this article outlines the various configuration options and points to the related documentation.
A good rule when tuning is to change one configuration item at a time, and to revert it to its original value if the change does not improve performance.
Let's discuss Linux tuning first, because some values affect the values that can be used in the Nginx configuration.
Linux configuration
The modern Linux kernel (2.6+) exposes many tunable settings, some of which you may want to change. If an operating system limit is set too low, you will see error messages in the kernel log, which is the signal to adjust the corresponding setting. There are many Linux tunables; this article mentions only those most likely to require tuning under a normal workload. For more detail on any of them, refer to the Linux documentation.
Backlog queue
The following settings relate to connections and how they are queued. If the incoming connection rate is high and performance is uneven, for example if some connections appear to stall, changing these settings may help.
net.core.somaxconn sets the size of the queue of connections waiting to be accepted by Nginx. Because Nginx accepts connections very quickly, this value usually does not need to be very large, but the default is quite low, so increasing it is a good idea for high-traffic sites. If it is set too low you will see error messages in the kernel log; increase the value until the errors stop. Note: if you set it to a value greater than 512, you should also change the Nginx configuration to match, using the backlog parameter of the listen directive.
net.core.netdev_max_backlog sets the number of packets the NIC can queue before they are handed to the CPU for processing. For machines with high bandwidth this value may need to be increased. Check your network card documentation for advice, or check the kernel log for error messages.
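As a sketch, both settings can be raised in /etc/sysctl.conf; the values below are illustrative starting points, not recommendations for every system:

```
# /etc/sysctl.conf -- illustrative values; tune for your own workload
net.core.somaxconn = 4096            # queue of connections awaiting accept()
net.core.netdev_max_backlog = 8192   # packets queued from the NIC before CPU processing
```

Apply the changes with `sysctl -p`, and remember to match a somaxconn value above 512 with a corresponding `backlog=` parameter on the Nginx listen directive.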
File descriptor
A file descriptor is an operating system resource used to represent things such as connections and open files. Nginx can use up to two file descriptors per connection; for example, when Nginx acts as a proxy, one is for the client connection and the other for the connection to the proxied server. With HTTP keepalive, far fewer descriptors are consumed per request. For systems serving a large number of connections, the following settings may need to be adjusted:
fs.file-max sets the system-wide limit on file descriptors.
nofile sets the per-user file descriptor limit, configured in the /etc/security/limits.conf file.
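A minimal sketch of both limits; the numbers and the `nginx` user name are illustrative assumptions:

```
# /etc/sysctl.conf -- system-wide descriptor limit (illustrative)
fs.file-max = 200000

# /etc/security/limits.conf -- per-user limit for the nginx user (illustrative)
nginx  soft  nofile  100000
nginx  hard  nofile  100000
```

Nginx can also raise its own process limit directly with the worker_rlimit_nofile directive in nginx.conf, which avoids depending on the user's limits.conf entry.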
Ephemeral ports
When Nginx is used as a proxy, every connection to an upstream server uses a temporary (ephemeral) port.
net.ipv4.ip_local_port_range specifies the start and end of the range of port numbers that can be used. If you see ports running out, increase this range. A common setting is 1024 to 65000.
net.ipv4.tcp_fin_timeout specifies how long a port must be unused before it can be reused for another connection. This usually defaults to 60 seconds, but it can generally be safely reduced to 30 or even 15 seconds.
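Both settings go in /etc/sysctl.conf; the values below follow the common settings mentioned above:

```
# /etc/sysctl.conf -- widen the ephemeral port range, shorten FIN timeout
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 30
```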
Nginx configuration
Below are some Nginx directives that can affect performance. As mentioned above, we discuss only the directives most users are advised to adjust. Changing any directive not mentioned here is not recommended without guidance from the Nginx team.
Worker processes
Nginx can run multiple worker processes, each of which can handle a large number of connections. You can use the following instructions to control the number of worker processes and how connections are handled:
worker_processes controls the number of worker processes Nginx runs. In most cases, one worker process per CPU core works well. This value defaults to 1; you can set it to auto to match the number of worker processes to the number of CPU cores. Sometimes it is worth increasing further, such as when the worker processes must perform a lot of disk I/O.
worker_connections sets the number of connections each worker process can handle at the same time. The default is 512, but most systems can handle more. The right value depends on the server hardware and the nature of the traffic, and can be found through testing.
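A minimal nginx.conf sketch of the two directives; the worker_connections value is illustrative and should be validated by testing:

```nginx
# nginx.conf -- main context; values are illustrative
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # raised from the 512 default after testing
}
```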
Keepalives
Persistent (keepalive) connections can have a major impact on performance by reducing the CPU and network overhead of opening and closing connections. Nginx terminates all client connections and maintains separate connections to upstream servers, and it supports keepalive for both. The following directives relate to client keepalive connections:
keepalive_requests sets how many requests a client can send over a single keepalive connection. The default is 100, but a much higher value can be useful, for example in testing scenarios where a load generator sends many requests from a single client.
keepalive_timeout sets how long an idle keepalive connection remains open.
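A sketch of the client-side keepalive directives in the http context; the values are illustrative, not recommendations:

```nginx
http {
    keepalive_requests 1000;    # requests allowed per client connection
    keepalive_timeout  65s;     # how long an idle connection stays open
}
```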
The following directives relate to upstream keepalive connections:
keepalive specifies the number of idle keepalive connections to an upstream server that each worker process keeps open. There is no default value for this directive.
To enable keepalive connections to upstream servers, you must also add the following directives to the configuration:
proxy_http_version 1.1;
proxy_set_header Connection "";
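Putting the pieces together, a minimal sketch of upstream keepalive (the backend address is hypothetical):

```nginx
upstream backend {
    server 10.0.0.1:8080;    # hypothetical upstream address
    keepalive 32;            # idle keepalive connections per worker process
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear the Connection header
    }
}
```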
Access log
Logging every request costs CPU and I/O cycles, and one way to reduce the impact is to enable access-log buffering. With buffering, instead of performing a separate write for each log entry, Nginx buffers a series of entries and writes them to the file together in a single operation.
Enable access-log buffering with the buffer=size option of the access_log directive, which sets the size of the buffer to use. You can also add the flush=time option to tell Nginx how long entries may wait in the buffer before being written to the file.
With both options defined, Nginx writes the buffered entries to the log file when the next entry will not fit in the buffer, or when the oldest buffered entry exceeds the time set by the flush parameter. Buffered entries are also written when the worker process reopens or closes the log file. You can also disable access logging entirely.
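A sketch of a buffered access_log directive; the path, format name, and sizes are illustrative:

```nginx
# Buffer access-log writes: flush when the 64k buffer fills
# or when the oldest buffered entry is one minute old.
access_log /var/log/nginx/access.log combined buffer=64k flush=1m;

# Or disable access logging entirely:
# access_log off;
```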
Sendfile
sendfile is an operating system feature that can be enabled in Nginx. It provides faster TCP data transfer by copying data from one file descriptor to another within the kernel, often achieving zero-copy. Nginx can use this mechanism to write cached or on-disk content to a socket without context switching into user space, so it is very fast and uses little CPU. Because the data never touches user space, filters that need to access the data cannot be inserted into the processing chain, so you cannot use any Nginx filter that changes the content, such as the gzip filter. Nginx does not enable sendfile by default.
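A minimal sketch of enabling sendfile for a location serving static files (the path is hypothetical):

```nginx
location /downloads/ {
    sendfile  on;    # copy file data in kernel space, no user-space buffer
    tcp_nopush on;   # often paired with sendfile to send full packets
}
```

Remember that any filter that must modify the response body, such as gzip, will not apply to content sent this way.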
Limits
Nginx and NGINX Plus allow you to set limits on client resource consumption so that it does not hurt system performance, user experience, or security. The relevant directives:
limit_conn and limit_conn_zone limit the number of connections Nginx allows, for example from a single client IP address. This prevents a single client from opening too many connections and consuming excessive resources.
limit_rate limits the bandwidth a client is allowed to use on a single connection. This prevents some clients from overloading the system, helping provide quality of service for all clients.
limit_req and limit_req_zone limit the rate at which Nginx processes requests. Together with limit_rate, they can keep some clients from overloading the system and help provide QoS for all clients. These directives can also improve security, especially on login pages, by limiting the request rate to one appropriate for human users and slowing down programs that try to attack your application.
max_conns limits the number of simultaneous connections to a single server in an upstream group, preventing the upstream server from being overloaded. The default is 0, meaning no limit.
queue: if max_conns is set, the queue directive determines what happens when a request cannot be processed because no server in the upstream group is available or all have reached their max_conns limit. It sets how many requests are queued and for how long. If this directive is not set, requests are not queued.
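A sketch of the client-facing limits; the zone names, sizes, rates, and the /login/ path are illustrative assumptions:

```nginx
http {
    # Shared-memory zones keyed by client address (sizes and rate illustrative)
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    limit_req_zone  $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location /login/ {
            limit_conn addr 10;             # max 10 connections per client IP
            limit_req  zone=perip burst=20; # throttle the request rate
            limit_rate 100k;                # cap per-connection bandwidth
        }
    }
}
```

Note that max_conns is set on the server line inside an upstream block, and the queue directive is an NGINX Plus feature.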
Other considerations
Nginx also has features that can improve the performance of web applications. They do not often appear in tuning discussions, but they are worth mentioning because their impact can be considerable. We cover two of them here.
Caching
For a Nginx instance that load balances a group of web or application servers, enabling caching can significantly reduce response times and lighten the load on backend servers. Caching is a topic of its own and will not be covered here.
Compression
Compressing responses can greatly reduce their size and lower bandwidth consumption. Compression costs CPU, however, so it is best used when the bandwidth savings are worth that cost. Note that compression should not be enabled for content that is already compressed, such as JPEG images. For more information on Nginx compression configuration, see the Nginx Admin Guide section on compression and decompression.
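A sketch of a gzip configuration limited to text-based content types; the MIME-type list and minimum length are illustrative choices:

```nginx
# Enable gzip for text-based responses only; already-compressed
# formats such as JPEG gain nothing and waste CPU.
gzip on;
gzip_types text/plain text/css application/json application/javascript;
gzip_min_length 1000;    # skip very small responses (illustrative threshold)
```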
This concludes our overview of Nginx and Linux performance tuning. Thank you for reading.
© 2024 shulou.com SLNews company. All rights reserved.