How to optimize Nginx configuration and kernel

2025-01-15 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

In this article, the editor introduces in detail how to optimize the Nginx configuration and the Linux kernel parameters behind it, with the reasoning for each setting. I hope it helps resolve your doubts about Nginx tuning.

Optimization of nginx directives (the configuration file)

worker_processes 8;

The number of nginx worker processes. It is recommended to set this to the number of CPUs, or a multiple of it.
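As a sanity check, the worker count can be matched to the machine's CPU count at deploy time. A minimal sketch using standard Linux tools (nproc is part of GNU coreutils):

```shell
# Print the number of available CPUs; worker_processes is usually
# set to this value or a multiple of it.
cpus=$(nproc)
echo "worker_processes $cpus;"
```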

worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Binds each worker process to a CPU. In the example above, eight processes are bound to eight CPUs; you can also bind several processes to one CPU, or one process to several CPUs.

worker_rlimit_nofile 102400;

This directive sets the maximum number of file descriptors an nginx process may open. The theoretical value is the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep it equal to the value of ulimit -n.
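To see the per-process limit that worker_rlimit_nofile is kept consistent with, the shell's ulimit builtin can be queried (a sketch; the numbers vary by system):

```shell
# Show the soft limit on open file descriptors for the current shell.
# worker_rlimit_nofile is typically kept consistent with this value.
ulimit -Sn
```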

use epoll;

Use the epoll I/O event model; on Linux this goes without saying.

worker_connections 102400;

The maximum number of connections allowed per worker process; in theory the maximum number of connections an nginx server can handle is worker_processes * worker_connections.
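The capacity formula above is simple arithmetic; with the values used in this article:

```shell
# Theoretical connection ceiling: worker_processes * worker_connections
workers=8
connections=102400
echo $((workers * connections))   # 819200
```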

keepalive_timeout 60;

Keepalive timeout.

client_header_buffer_size 4k;

The buffer size for the client request header, which can be set according to your system's page size. The header of a request is generally under 1k, but since the system page size is usually at least that large, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.
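For reference, this is the one-liner mentioned above:

```shell
# The system page size in bytes; client_header_buffer_size is set to
# match it (commonly 4096, i.e. 4k, on x86-64 Linux).
getconf PAGESIZE
```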

open_file_cache max=102400 inactive=20s;

This enables the cache for open file handles, which is off by default. max sets the number of cache entries and is recommended to match the maximum number of open files; inactive sets how long after its last request a file's cache entry is removed.

open_file_cache_valid 30s;

This refers to how often the valid information in the cache is checked.

open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay open in the cache. As in the example above, if a file is not used even once within the inactive time, it is removed.

Optimization of kernel parameters

net.ipv4.tcp_max_tw_buckets = 6000

The maximum number of TIME-WAIT sockets kept at once. The default is 180000.

net.ipv4.ip_local_port_range = 1024 65000

The range of ports that the system is allowed to open.

net.ipv4.tcp_tw_recycle = 1

Enable fast recycling of TIME-WAIT sockets. Note that this option breaks clients behind NAT and was removed entirely in Linux 4.12; avoid it on modern kernels.

net.ipv4.tcp_tw_reuse = 1

Turn on reuse. Allows time-wait sockets to be reused for new tcp connections.

net.ipv4.tcp_syncookies = 1

Enable SYN cookies: when the SYN backlog queue overflows, cookies are used so that connections can still be accepted.

net.core.somaxconn = 262144

The backlog of the listen() call in a web application is capped by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
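Note that raising net.core.somaxconn alone may not be enough: nginx's listen directive accepts an explicit backlog parameter, which must also be raised for the larger queue to take effect. A sketch (the port and number here are illustrative, matching the values used elsewhere in this article):

```nginx
listen 8080 backlog=262144;
```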

net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system not attached to any user file handle. If this number is exceeded, orphan connections are reset immediately and a warning is printed. This limit exists only to fend off simple DoS attacks; do not rely on it or artificially lower it. If anything, increase it (after adding memory).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of remembered connection requests that have not yet received the client's acknowledgement. The default is 1024 on systems with 128 MB of memory, and 128 on low-memory systems.

net.ipv4.tcp_timestamps = 0

Timestamps guard against sequence-number wraparound. On a 1 Gbps link you are bound to run into sequence numbers that were used before; the timestamp lets the kernel accept such "abnormal" packets. Here it is turned off.

net.ipv4.tcp_synack_retries = 1

To open a connection to the far end, the kernel sends a SYN with an ACK acknowledging the earlier SYN: the second step of the so-called three-way handshake. This setting determines how many SYN+ACK packets the kernel retransmits before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of syn packets sent before the kernel gives up establishing a connection.

net.ipv4.tcp_fin_timeout = 1

If the socket was closed by a local request, this parameter determines how long it stays in FIN-WAIT-2. The peer may misbehave and never close its side, or even crash unexpectedly. The default is 60 seconds; 2.2-era kernels used 180 seconds, and you can keep that setting, but bear in mind that even on a lightly loaded web server, a large number of dead sockets carries a risk of memory exhaustion. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket eats at most 1.5K of memory, but sockets tend to stay in it longer.

net.ipv4.tcp_keepalive_time = 30

How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours.

A complete kernel optimization configuration

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
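A listing like this is normally appended to /etc/sysctl.conf and applied with sysctl -p as root. Before doing so, it can help to record the current values; a read-only sketch via /proc that needs no root (the three keys chosen are illustrative):

```shell
# Print the current value of a few of the keys tuned above.
# Applying changes requires root: sysctl -p /etc/sysctl.conf
for key in net/core/somaxconn net/ipv4/tcp_fin_timeout net/ipv4/tcp_max_syn_backlog; do
    printf '%s = %s\n' "$(echo "$key" | tr / .)" "$(cat /proc/sys/$key)"
done
```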

A simple nginx optimized configuration file

user www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
worker_rlimit_nofile 204800;

events
{
    use epoll;
    worker_connections 204800;
}

http
{
    include mime.types;
    default_type application/octet-stream;
    charset utf-8;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 2k;
    large_client_header_buffers 4 4k;
    client_max_body_size 8m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=test:10m inactive=5m;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 16k;
    fastcgi_buffers 16 16k;
    fastcgi_busy_buffers_size 16k;
    fastcgi_temp_file_write_size 16k;
    fastcgi_cache test;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_use_stale error timeout invalid_header http_500;
    open_file_cache max=204800 inactive=20s;
    open_file_cache_min_uses 1;
    open_file_cache_valid 30s;
    tcp_nodelay on;
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    server
    {
        listen 8080;
        server_name ad.test.com;
        index index.php index.htm;
        root /www/html/;

        location /status
        {
            stub_status on;
        }

        location ~ .*\.(php|php5)?$
        {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fcgi.conf;
        }

        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$
        {
            expires 30d;
        }

        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" $http_x_forwarded_for';
        access_log /www/log/access.log access;
    }
}

Several instructions about fastcgi

fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=test:10m inactive=5m;

This directive sets the path, directory hierarchy (levels), shared-memory key zone name and size, and the inactive deletion time for the fastcgi cache.
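nginx names each cache file after the MD5 of its cache key, and levels=1:2 builds subdirectories from the trailing characters of that hash. A sketch of the mapping (the key string here is made up purely for illustration):

```shell
key='httpGETad.test.com/index.php'        # hypothetical cache key
h=$(printf '%s' "$key" | md5sum | awk '{print $1}')
l1=$(printf '%s' "$h" | cut -c32)         # last hex char   -> first-level dir
l2=$(printf '%s' "$h" | cut -c30-31)      # chars 30-31     -> second-level dir
echo "/usr/local/nginx/fastcgi_cache/$l1/$l2/$h"
```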

fastcgi_connect_timeout 300;

Specifies the timeout for connecting to the backend fastcgi.

fastcgi_send_timeout 300;

The timeout for sending a request to fastcgi, that is, the timeout for transmitting the request after the connection handshake has completed.

fastcgi_read_timeout 300;

The timeout for receiving the fastcgi reply, that is, the timeout for receiving the reply after the connection handshake has completed.

fastcgi_buffer_size 16k;

Specifies how large a buffer is used to read the first part of the fastcgi reply, i.e. the response header. The directive above uses a 16k buffer for it. In practice the response header is usually tiny (under 1k), but if you set the buffer size in the fastcgi_buffers directive, a buffer of that size will be allocated here as well.

fastcgi_buffers 16 16k;

Specifies how many local buffers, and of what size, are used to buffer fastcgi responses. As configured above, a 256k page produced by a php script is held in 16 buffers of 16k; if the page is larger than 256k, the part beyond 256k is cached in the path given by fastcgi_temp. Of course, spilling to disk is unwise for server load, since data is handled faster in memory than on the hard disk. This value should usually be a middle ground for the page sizes your site's php scripts generate: if most pages are around 256k, you could set it to 16 16k, 4 64k, or 64 4k, but the latter two are clearly poor choices. If a generated page is only 32k, a setting of 4 64k allocates one whole 64k buffer to cache it, and 64 4k allocates eight 4k buffers, while 16 16k allocates two 16k buffers to cache the page, which seems more reasonable.
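The buffer arithmetic above is easy to check: the buffer count times the buffer size gives the largest response held entirely in memory.

```shell
# 16 buffers of 16k each hold up to 256k of response in memory;
# anything larger spills to fastcgi_temp on disk.
count=16
size_k=16
echo "$((count * size_k))k"   # 256k
```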

fastcgi_busy_buffers_size 32k;

This limits how much of the buffered response may be busy being sent to the client while the reply has not yet been fully read from the backend; by default it is twice the fastcgi buffer size.

fastcgi_temp_file_write_size 32k;

How large the blocks written to fastcgi_temp_path are; the default value is twice that of the fastcgi buffers.

fastcgi_cache test;

Turns on fastcgi caching and assigns it a zone name. Personally, I find caching very useful: it can effectively reduce CPU load and prevent 502 errors. But it can also cause plenty of problems, because it caches dynamic pages; how to use it depends on your own needs.

fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;

Specifies the caching time per response code: in the example above, 200 and 302 responses are cached for 1 hour, 301 responses for 1 day, and everything else for 1 minute.

fastcgi_cache_min_uses 1;

The minimum number of times a response must be requested within the inactive time of the fastcgi_cache_path directive for it to stay cached. As in the example above, if a file is not requested at least once within 5 minutes, it is removed.

fastcgi_cache_use_stale error timeout invalid_header http_500;

This tells nginx in which cases it may answer from a stale cache entry: here, backend errors, timeouts, invalid headers, and HTTP 500 responses all fall back to stale cached content. These are the fastcgi-related parameters in nginx. In addition, fastcgi itself has some settings worth optimizing; if you use php-fpm to manage fastcgi, you can modify the following values in its configuration file:

60

The number of requests handled concurrently; that is, php-fpm will spawn at most 60 child processes to handle concurrent connections.

102400

The maximum number of open files.

204800

The maximum number of requests each process can execute before resetting.
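The three values above lost their parameter names in translation. In a modern php-fpm pool file they would most likely correspond to the following keys; this mapping is my assumption, so check it against your php-fpm version's documentation:

```ini
; Hypothetical php-fpm pool fragment -- the key names are assumed,
; only the numeric values come from the article.
pm.max_children = 60      ; concurrent children handling requests
rlimit_files = 102400     ; max open file descriptors per child
pm.max_requests = 204800  ; requests served before a child respawns
```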

A few test results

The static page is the test file mentioned in my earlier article on a 4w-concurrency squid configuration. The following figure shows the test results after running webbench -c 30000 -t 600 http://ad.test.com:8080/index.html on six machines at the same time:

Number of connections filtered using netstat:
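The netstat filter used for these counts can be sketched as follows; here it runs over a small made-up sample so the pipeline itself is visible (on a live server you would pipe netstat -ant directly):

```shell
# Count connections per TCP state, as was done during the test.
sample='tcp 0 0 10.0.0.1:8080 10.0.0.2:51000 ESTABLISHED
tcp 0 0 10.0.0.1:8080 10.0.0.3:51001 TIME_WAIT
tcp 0 0 10.0.0.1:8080 10.0.0.4:51002 ESTABLISHED'
printf '%s\n' "$sample" | awk '{print $6}' | sort | uniq -c | sort -rn
```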

stub_status results for the php page (the php page calls phpinfo):

Number of connections to php pages filtered by netstat:

Server load before using fastcgi caching:

At this point the php page is already hard to open; it takes many refreshes before it loads. The low load on cpu0 in the figure above is because all NIC interrupt requests were routed to cpu0 during the test, while seven nginx worker processes were bound to cpu1-7.

After using fastcgi caching:

At this point, you can easily open the php page.

This test did not touch any database, so it has limited reference value, and I do not know whether the numbers above represent the limit; judging by memory and CPU usage they do not seem to, but I had no spare machines to run webbench from.
