
How to break through 100,000 concurrency in nginx optimization

2025-01-19 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 Report --

This article introduces how to optimize nginx to break through 100,000 concurrent connections. Many people run into these situations in real deployments, so let the editor walk you through how to handle them. I hope you read it carefully and come away with something useful!

1. Generally speaking, the following items in the nginx configuration file are useful for optimization:

1. worker_processes 8;

The number of nginx worker processes. It is recommended to set this according to the number of CPU cores, generally a multiple of it (for example, 2 quad-core CPUs count as 8).

2. worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Binds each worker process to a CPU. In the example above, 8 processes are bound to 8 CPUs. You can of course write fewer masks, or bind one process to multiple CPUs.

3. worker_rlimit_nofile 65535;

This directive sets the maximum number of file descriptors a single nginx process may open. The theoretical value is the system-wide maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep this consistent with the value of ulimit -n.

With the Linux 2.6 kernel the number of open files here is 65535, so worker_rlimit_nofile should be set to 65535 accordingly.

This is because nginx does not balance requests across worker processes perfectly: if you set 10240 and total concurrency reaches 30,000-40,000, some process may exceed 10240 descriptors, and a 502 error will be returned.

How to view the Linux system file descriptor limits:

[root@web001 ~]# sysctl -a | grep fs.file

fs.file-max = 789972

fs.file-nr = 5100 789972

4. use epoll

Use the epoll I/O event model.

(

Supplementary note:

Similar to apache, nginx has different event models for different operating systems.

A) Standard event model

Select and poll belong to the standard event model. If there is no more effective method in the current system, nginx will choose select or poll.

B) Efficient event models

Kqueue: used in FreeBSD 4.1, OpenBSD 2.9, NetBSD 2.0 and MacOS X. Using kqueue on MacOS X systems with dual processors may cause the kernel to crash.

Epoll: used in Linux kernel version 2.6 and later.

/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.

eventport: used in Solaris 10. To prevent kernel crashes, it is necessary to install security patches.

)
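As a minimal sketch (directive names from the notes above; the connection count is illustrative), the event model is selected in the events block of nginx.conf:

```nginx
# Sketch: pick epoll explicitly on Linux 2.6+ and set the per-worker
# connection ceiling in the same block.
events {
    use epoll;                  # efficient event model on Linux 2.6+
    worker_connections 65535;   # per-worker connection limit (illustrative)
}
```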

5. worker_connections 65535;

The maximum number of connections allowed per worker process. In theory, the maximum number of connections per nginx server is worker_processes * worker_connections.
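That theoretical ceiling can be computed directly; a small shell sketch using the example values from this article:

```shell
# Theoretical maximum clients = worker_processes * worker_connections
worker_processes=8
worker_connections=65535
max_clients=$(( worker_processes * worker_connections ))
echo "max_clients=${max_clients}"   # prints max_clients=524280
```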

6. keepalive_timeout 60;

Keepalive timeout.

7. client_header_buffer_size 4k;

The buffer size for the client request header, which can be set according to your system's page size. Generally a request header will not exceed 1k, but since the system page size is generally larger than 1k, it is set to the page size here.

The page size can be obtained with the command getconf PAGESIZE.

[root@web001 ~]# getconf PAGESIZE

4096

However, there are cases where request headers exceed 4k; in any case, the client_header_buffer_size value must be set to an integral multiple of the system page size.
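A small sketch of that rule, assuming the 4096-byte page size reported by getconf above (the 8k target is a hypothetical example): round any desired buffer size up to the next page multiple:

```shell
page=4096   # assumed page size, matching the getconf PAGESIZE output above
want=8192   # hypothetical target when 4k headers prove too small
# round want up to the next whole multiple of page
buf=$(( ( (want + page - 1) / page ) * page ))
echo "client_header_buffer_size=${buf}"   # prints client_header_buffer_size=8192
```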

8. open_file_cache max=65535 inactive=60s;

This enables the cache for open files, which is off by default. max specifies the number of cache entries and is recommended to match the maximum number of open files. inactive specifies how long after a file was last requested its cache entry is removed.

9. open_file_cache_valid 80s;

This refers to how often the valid information in the cache is checked.

10. open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive window of the open_file_cache directive. If a file is used at least this many times, its descriptor stays open in the cache; as in the example above, a file that is not used even once within the inactive time is removed from the cache.

2. Optimization of kernel parameters:

net.ipv4.tcp_max_tw_buckets = 6000

The maximum number of TIME-WAIT sockets to keep. The default is 180000.

net.ipv4.ip_local_port_range = 1024 65000

The range of ports that the system is allowed to open.

net.ipv4.tcp_tw_recycle = 1

Enable timewait fast recycling.

net.ipv4.tcp_tw_reuse = 1

Turn on reuse. Allows TIME-WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_syncookies = 1

Enable SYN cookies: when the SYN backlog queue overflows, handle connections with cookies.

net.core.somaxconn = 262144

The backlog of the listen() call in a web application is capped by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
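After raising net.core.somaxconn, the listen queue can also be requested explicitly on the nginx side; a sketch (the backlog value mirrors the kernel setting above, the port is illustrative):

```nginx
server {
    # ask for an accept queue matching net.core.somaxconn; the kernel
    # silently caps the backlog at somaxconn, so raise both together
    listen 80 backlog=262144;
}
```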

net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it or artificially lower it. If anything, increase it (after adding memory).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of remembered connection requests that have not yet received a client acknowledgement. The default is 1024 for systems with 128 MB of memory, and 128 for systems with little memory.

net.ipv4.tcp_timestamps = 0

Timestamps guard against sequence-number wraparound. A 1 Gbps link is certain to encounter sequence numbers that were used before; the timestamp lets the kernel accept such "abnormal" packets. It is turned off here.

net.ipv4.tcp_synack_retries = 1

To open a connection to a peer, the kernel must send a SYN with an ACK acknowledging the earlier SYN: the second packet of the so-called three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of SYN packets sent before the kernel gives up establishing a connection.

net.ipv4.tcp_fin_timeout = 1

If a socket is closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may misbehave and never close its side, or even crash unexpectedly. The default value is 60 seconds; 2.2 kernels commonly used 180 seconds. You can lower it, but keep in mind that even on a lightly loaded web server there is a risk of memory exhaustion from a large number of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket consumes at most about 1.5K of memory, but those sockets live longer.

net.ipv4.tcp_keepalive_time = 30

The frequency at which TCP sends keepalive messages when keepalive is enabled. The default is 2 hours.

3. Post a complete kernel optimization setting below:

Edit /etc/sysctl.conf (this example is from CentOS 5.5); you can replace its entire contents with the following:

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000

For the configuration to take effect immediately, you can use the following command:

/sbin/sysctl -p

4. The following is about the optimization of the number of system connections

The Linux defaults for open files and max user processes are both 1024:

# ulimit -n

1024

# ulimit -u

1024

Problem: by default the server only allows 1024 files to be opened at the same time and only 1024 user processes.

Use ulimit -a to view all limits for the current system, and ulimit -n to view the current maximum number of open files.

A fresh Linux install defaults to only 1024, so a heavily loaded server easily runs into "error: too many open files". The limit therefore needs to be raised.

Solution:

You can change it immediately with ulimit -n 65535, but the change is lost after a reboot. (Note: ulimit -SHn 65535 is equivalent to ulimit -n 65535; -S refers to the soft limit, -H to the hard limit.)

There are three ways to modify it:

1. Add a line to /etc/rc.local: ulimit -SHn 65535

2. Add a line to /etc/profile: ulimit -SHn 65535

3. At the end of /etc/security/limits.conf, add:

* soft nofile 65535

* hard nofile 65535

* soft nproc 65535

* hard nproc 65535

As for which method to use: the first method had no effect on CentOS while the third did, and the second method worked on Debian.

# ulimit -n

65535

# ulimit -u

65535

Note: the ulimit command itself distinguishes soft and hard limits. Adding -H shows or sets the hard limit, adding -S the soft limit; with neither flag, the soft limit is shown.

The soft limit is the value currently enforced by the system. An ordinary user can lower the hard limit but cannot raise it, and the soft limit cannot exceed the hard limit. Only root can raise the hard limit.
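A quick shell sketch of the soft/hard distinction: lowering the soft limit inside a subshell always succeeds and does not affect the parent shell (512 is an arbitrary example value):

```shell
# Run in a subshell so the parent shell's limits are untouched.
(
  ulimit -Sn 512   # lower only the soft limit; allowed for any user
  ulimit -Sn       # prints 512
)
ulimit -Sn         # parent shell still shows its original soft limit
```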

5. Here is a simple nginx configuration file:

user www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
worker_rlimit_nofile 204800;

events {
    use epoll;
    worker_connections 204800;
}

http {
    include mime.types;
    default_type application/octet-stream;
    charset utf-8;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 2k;
    large_client_header_buffers 4 4k;
    client_max_body_size 8m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;

    fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 4k;
    fastcgi_buffers 8 4k;
    fastcgi_busy_buffers_size 8k;
    fastcgi_temp_file_write_size 8k;
    fastcgi_cache TEST;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_use_stale error timeout invalid_header http_500;

    open_file_cache max=204800 inactive=20s;
    open_file_cache_min_uses 1;
    open_file_cache_valid 30s;

    tcp_nodelay on;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    server {
        listen 80;
        server_name backup.aiju.com;
        index index.php index.htm;
        root /www/html/;

        location /status {
            stub_status on;
        }

        location ~ .*\.(php|php5)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fcgi.conf;
        }

        location ~ .*\.(gif|jpg|jpeg|png|bmp|js|css)$ {
            expires 30d;
        }

        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" $http_x_forwarded_for';
        access_log /www/log/access.log access;
    }
}

6. Several instructions about FastCGI:

fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m

This directive sets the path for the FastCGI cache, the directory hash levels, the shared-memory key zone and its size, and the inactive deletion time.

fastcgi_connect_timeout 300

Specifies the timeout for connecting to the backend FastCGI.

fastcgi_send_timeout 300

The timeout for transmitting the request to the FastCGI backend, counted after the connection has been established.

fastcgi_read_timeout 300

The timeout for receiving the FastCGI reply, counted after the connection has been established.

fastcgi_buffer_size 4k

The buffer size used to read the first part of the FastCGI reply. Generally the first part of the reply does not exceed 1k, but since the page size is 4k, it is set to 4k here.

fastcgi_buffers 8 4k

Specifies how many buffers, and of what size, are used locally to buffer FastCGI replies.

fastcgi_busy_buffers_size 8k

I don't know exactly what this directive does; I only know that its default value is twice fastcgi_buffers.

fastcgi_temp_file_write_size 8k

How large the data blocks are when writing to fastcgi_temp_path; the default is twice fastcgi_buffers.

fastcgi_cache TEST

Turn on the FastCGI cache and give it a name. Personally, I find it very useful to turn on caching, which can effectively reduce the CPU load and prevent 502 errors.

fastcgi_cache_valid 200 302 1h

fastcgi_cache_valid 301 1d

fastcgi_cache_valid any 1m

Specifies the cache time for given response codes. In the example above, 200 and 302 responses are cached for 1 hour, 301 responses for 1 day, and everything else for 1 minute.

fastcgi_cache_min_uses 1

The minimum number of uses within the inactive window of the fastcgi_cache_path directive. As in the example above, a file that is not used even once in 5 minutes is removed.

fastcgi_cache_use_stale error timeout invalid_header http_500

Tells nginx in which cases a stale cached response may still be served: here on backend errors, timeouts, invalid headers, and HTTP 500 responses.

These are the nginx parameters related to FastCGI. In addition, FastCGI itself has some settings worth optimizing. If you use php-fpm to manage FastCGI, you can modify the following values in its configuration file:

60 (max_children)

The number of concurrent requests handled at the same time; that is, it will spawn at most 60 child processes to handle concurrent connections.

102400 (rlimit_files)

The maximum number of open files per process.

204800 (max_requests)

The maximum number of requests each process executes before being respawned.
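As a sketch only: in a modern php-fpm pool configuration these three values would map to the following directive names (the names are an assumption here; older php-fpm releases used an XML-style configuration file instead):

```ini
; hypothetical php-fpm pool fragment matching the values above
pm.max_children = 60       ; at most 60 concurrent worker processes
rlimit_files = 102400      ; per-worker open-file limit
pm.max_requests = 204800   ; respawn a worker after this many requests
```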

This is the end of "how to break through 100,000 concurrency in nginx optimization". Thank you for reading. If you want to learn more about the industry, follow the website, where the editor will publish more high-quality practical articles!
