
How to optimize the performance of nginx+php

2025-03-13 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

This article shares how to optimize the performance of nginx+php. The editor finds it very practical and hopes you will get something out of it.

Preparatory work

One ECS server

Compile nginx+php manually

Modify index.php and output 'hello world'

Using the ab tool (ab -c 100 -n 50000), run the load test 5 times and record the average QPS.

Then look for ways to optimize by adjusting individual parameters. Whenever a parameter change improves QPS, record it and think about where the QPS bottleneck lies.
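For reference, each benchmark in this article is run with a command of the following form; the QPS figure quoted is ab's "Requests per second" line (the output below is illustrative, the number simply mirrors the baseline average reported later):

$ ab -c 100 -n 50000 http://127.0.0.1/index.php
...
Requests per second:    7194.32 [#/sec] (mean)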

Description of some basic Nginx configuration directives:

user administrator administrators;        # configure the user and group; the default is nobody nobody
worker_processes 2;                       # number of worker processes to spawn; the default is 1
pid /nginx/pid/nginx.pid;                 # where the nginx pid file is stored
error_log log/error.log debug;            # log path and level; can appear in the global, http and server blocks; levels: debug | info | notice | warn | error | crit | alert | emerg
events {
    accept_mutex on;                      # serialize the accepting of connections to prevent the thundering-herd problem; default is on
    multi_accept on;                      # whether a process may accept multiple connections at once; default is off
    # use epoll;                          # event-driven model: select | poll | kqueue | epoll | rtsig | /dev/poll | eventport
    worker_connections 1024;              # maximum number of connections; default is 512
}
http {
    include mime.types;                   # file-extension to file-type mapping table
    default_type application/octet-stream;  # default file type; default is text/plain
    # access_log off;                     # disable the access log
    log_format myFormat '$remote_addr-$remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for';  # custom log format
    access_log log/access.log myFormat;   # combined is the default log format
    sendfile on;                          # allow sendfile file transfer; default is off; can appear in http, server and location blocks
    sendfile_max_chunk 100k;              # per-call transfer limit per process; default is 0, i.e. no upper limit
    keepalive_timeout 65;                 # connection timeout; default is 75s; can appear in http, server and location blocks

    upstream mysvr {
        server 127.0.0.1:7878;
        server 192.168.10.121 backup;     # hot standby
    }
    error_page 404 https://www.baidu.com; # error page

    server {
        keepalive_requests 120;           # maximum number of requests per connection
        listen 4545;                      # listening port
        server_name 127.0.0.1;            # listening address
        location ~* ^.+$ {                # filter request URLs by regex; ~ is case-sensitive, ~* is case-insensitive
            # root path;                  # root directory
            # index vv.txt;               # default page
            proxy_pass http://mysvr;      # forward requests to the server list defined in mysvr
            deny 127.0.0.1;               # denied IP
            allow 172.18.5.54;            # allowed IP
        }
    }
}

Let's get started.

ECS configuration

CPU: 1 core

Memory: 1 GiB

Operating system: CentOS 7 64 bit

Currently used bandwidth: 1Mbps

Under high concurrency, the kernel will think the system is under a SYN flood attack and start sending cookies (logging "possible SYN flooding on port 80. Sending cookies."), which slows down requests. So on the application server you can set this parameter to 0 to disable the protection and run large-concurrency tests:

$ vim /etc/sysctl.conf
net.ipv4.tcp_syncookies = 0
$ sysctl -p
net.ipv4.tcp_syncookies = 0

net.ipv4.tcp_syncookies = 0  # this parameter protects against SYN flood attacks, but it is disabled here for the large-concurrency test

The net.ipv4.tcp_max_syn_backlog parameter determines the size of the SYN_RECV queue. The default is usually 512 or 1024; beyond this number the system stops accepting new TCP connection requests, which to some extent prevents resource exhaustion. You can raise this value as appropriate to accept more connection requests.

The net.ipv4.tcp_tw_recycle parameter determines whether to speed up the recycling of TIME_WAIT sockets; the default is 0.

The net.ipv4.tcp_tw_reuse parameter determines whether sockets in the TIME_WAIT state may be reused for new TCP connections; the default is 0.

The net.ipv4.tcp_max_tw_buckets parameter determines the total number of sockets allowed in the TIME_WAIT state; set it according to the number of connections and available system resources.
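Putting these together, a minimal /etc/sysctl.conf sketch for the test box might look like the following; the backlog and bucket values are illustrative assumptions, and tcp_tw_recycle is omitted because it was removed from Linux 4.12 and later:

net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_tw_buckets = 5000

Apply the changes with sysctl -p.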

Default compilation test, no modifications:

ab -c 100 -n 50000 http://127.0.0.1/index.php

1: 7111, 2: 7233, 3: 7240, 4: 7187, 5: 7197, average: 7194

Modify worker_processes: 1 => auto

worker_processes: the number of worker processes, usually set to auto or to the number of CPU cores.

Generally speaking, one process is enough, and you can set the number of connections very high. (worker_processes: 1, worker_connections: 10000)

If there is more CPU-intensive work such as SSL and gzip, and the machine has a multi-core CPU, set it equal to the number of CPU cores. (worker_processes: number of CPU cores)

Or, when you have to handle a lot of small files and the total file size is much larger than memory, you can also increase the number of processes to make full use of IO bandwidth (mainly because IO operations block).
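A minimal sketch of this change in the main context of nginx.conf; only the worker_processes line is the point here:

worker_processes auto;   # size the worker pool to the detected CPU cores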

1: 7242, 2: 7228, 3: 7275, 4: 7234, 5: 7231, average: 7242

worker_connections: 1024 => 65535

The maximum number of client connections allowed per worker process. In general it can be raised as high as the number of ports (65535).

1: 7212, 2: 7236, 3: 7223, 4: 7260, 5: 7230, average: 7232

worker_rlimit_nofile 65535

The maximum number of open file descriptors per worker process; it can likewise be raised to 65535.
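A short sketch of the related settings; the nginx directive raises the per-worker descriptor limit, and the ulimit check shown alongside is an assumption about verifying the OS-level limit rather than part of the original test:

# nginx.conf, main context
worker_rlimit_nofile 65535;

# shell: verify the open-file limit available to the nginx user
$ ulimit -n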

1: 7243, 2: 7236, 3: 7146, 4: 7243, 5: 7196, average: 7212.8

Use epoll

Using the epoll I/O model to optimize event handling.

1: 7265, 2: 7196, 3: 7227, 4: 7216, 5: 7253, average: 7231

multi_accept on

The multi_accept directive lets an NGINX worker accept as many connections as possible when it is notified of a new connection. Its purpose is to immediately accept all pending connections from the listen queue at once. If the directive is disabled, a worker process accepts connections one at a time.

1: 7273, 2: 7281, 3: 7308, 4: 7299, 5: 7290, average: 7290

accept_mutex on

Since we have multiple workers configured in NGINX, we should also configure the directives that affect how workers share connections. The accept_mutex parameter in the events block causes the available worker processes to accept new connections one by one. It serializes the accepting of connections to prevent the thundering-herd problem. The default is on.

The server has 1 core, so the impact is small.
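For reference, a sketch of the events block after the adjustments above; the values follow the ones used in this test and are a starting point rather than a definitive recommendation:

events {
    use epoll;                 # event-driven model on Linux
    worker_connections 65535;  # per-worker connection ceiling
    multi_accept on;           # drain the accept queue in one pass
    accept_mutex on;           # serialize accepts to avoid the thundering herd
}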

1: 7268, 2: 7295, 3: 7308, 4: 7274, 5: 7261, average: 7281

tcp_nopush on

TCP_CORK is an alternative to the Nagle algorithm; Linux provides the TCP_CORK socket option. This option tells the TCP stack to append packets and send them only when the buffer is full or when the application explicitly removes the TCP_CORK flag. This makes the transmitted packets optimally sized and thus improves network efficiency.

NGINX provides the tcp_nopush directive to enable TCP_CORK on the connection socket. This directive can be used in http, server and location blocks:

http {
    tcp_nopush on;
}

1: 7309, 2: 7321, 3: 7292, 4: 7308, 5: 7322, average: 7310

tcp_nodelay on

There is a "small packet" problem in TCP/IP networks, where single-character messages can cause congestion on a high-load network. For example, such a packet is 41 bytes, of which 40 bytes are headers and only 1 byte is useful data. These packets carry roughly 4000% overhead and can saturate the network.

John Nagle solved the problem by not sending small packets immediately (Nagle's algorithm). All such packets are collected for a certain amount of time and then sent at once as a single packet. This improves the efficiency of the underlying network; as a result, a typical TCP/IP stack may wait up to 200 milliseconds before sending a packet to the client.

You can use the TCP_NODELAY option to disable Nagle's buffering algorithm when the socket is opened, so data is sent as soon as it is available. NGINX provides the tcp_nodelay directive to enable this option. This directive can be used in http, server and location blocks:

http {
    tcp_nodelay on;
}

1: 7326, 2: 7316, 3: 7334, 4: 7274, 5: 7290, average: 7308

worker_priority -5

Indicates the nice value of the worker process

In Linux, a process with a higher priority gets more system resources. This directive sets the static (nice) priority of the worker processes; the range is -20 to +19, with -20 being the highest priority. You can therefore set this value somewhat lower, but it is not recommended to go below the priority of kernel processes (usually -5).
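A one-line sketch of the change in the main context of nginx.conf; -5 mirrors the value tested below:

worker_priority -5;   # lower nice value, i.e. slightly higher scheduling priority for workers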

In the test, the improvement from 0 to -5 was significant; the average can reach about 8000.

1: 7982, 2: 8023, 3: 7932, 4: 7911, 5: 8052, average: 7980

Php-fpm parameter tuning

pm = dynamic

Indicates how the number of child processes is managed.

dynamic means the number of php-fpm processes is dynamic: it starts with the number specified by pm.start_servers; if there are more requests, it automatically grows, ensuring that the number of idle processes is not less than pm.min_spare_servers; and if there are too many processes, some are cleaned up so that the number of idle processes does not exceed pm.max_spare_servers.

static means the number of php-fpm processes is fixed: it is always the number specified by pm.max_children and never grows or shrinks.

1. pm.start_servers = 15 ; the number of php-fpm processes started in dynamic mode

2. pm.min_spare_servers = 5 ; the minimum number of idle php-fpm processes in dynamic mode

3. pm.max_spare_servers = 25 ; the maximum number of idle php-fpm processes in dynamic mode

4. pm.max_requests = 5000

Sets the number of requests each child process serves before it is respawned. This is very useful for third-party modules that may leak memory. If set to '0', requests are accepted indefinitely. It is equivalent to the PHP_FCGI_MAX_REQUESTS environment variable. Default value: 0. This setting means that once a php-fpm child process has handled 5000 requests, it is automatically restarted.
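A hedged sketch of the corresponding pool configuration (typically php-fpm.conf or a pool file such as www.conf; the file name and the pm.max_children value are assumptions, the other values mirror the ones above):

pm = dynamic
pm.max_children = 50        ; upper bound on child processes (assumed value)
pm.start_servers = 15
pm.min_spare_servers = 5
pm.max_spare_servers = 25
pm.max_requests = 5000      ; respawn a child after 5000 requests to contain leaks

Reload php-fpm after editing so the pool picks up the new settings.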

1: 7934, 2: 8107, 3: 8013, 4: 8039, 5: 7990, average: 8016

Opcache

OPcache is definitely a powerful optimization tool. OPcache is a bytecode cache: when PHP code is compiled, it is first converted into bytecode, and then the bytecode is executed.

When a php file is executed a second time, it would normally be converted into bytecode again, but in many cases the file's content is essentially unchanged. It is similar to a static HTML file, whose content does not change for a long time after it is generated and which the server can send straight to the client browser without being parsed and built by PHP.

The bytecode can instead be cached in memory and reused, skipping the second compilation. This makes the program faster and consumes less CPU.
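A hedged php.ini sketch for enabling OPcache on a manually compiled PHP; the memory and file-count figures are common illustrative values, not numbers taken from this test:

zend_extension=opcache.so             ; load the extension (path may differ for a manual build)
[opcache]
opcache.enable=1
opcache.memory_consumption=128        ; MB of shared memory for cached bytecode
opcache.max_accelerated_files=10000   ; how many scripts can be cached
opcache.validate_timestamps=1         ; re-check files for changes
opcache.revalidate_freq=60            ; seconds between timestamp checks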

Conclusion

php-fpm uses a prefork approach (listen on the same address, then fork several child processes); the FastCGI manager implements a multi-process model.

But when PHP runs, each process can only handle one request at a time; at runtime it is effectively single-process and single-threaded.

php-fpm workers use a blocking model: a worker must wait until PHP returns data for the current request before it can accept the next request from nginx. FPM therefore needs more processes to cope with concurrency, and higher QPS requires more processes. When request handling blocks for a long time, process memory cannot be released and there are not enough child processes to handle subsequent requests. The main bottleneck lies in the processing and memory consumption of a large number of PHP-FPM child processes.
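As a hedged illustration of why memory becomes the bottleneck: if each php-fpm child uses roughly 30 MB of resident memory and about 700 MB of this 1 GiB instance is left for PHP, the pool can sustain only about 700 / 30 ≈ 23 children, so any long-blocking request quickly exhausts the pool. Both figures are assumptions made for the arithmetic, not measurements from the test above.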

The above is how to optimize the performance of nginx+php. The editor believes these are knowledge points we may see or use in daily work and hopes you can learn more from this article.
