2025-04-02 Update From: SLTechnology News & Howtos
Shulou(Shulou.com)06/03 Report--
Nginx is one of the most popular web servers, so optimizing it matters a great deal. The earlier posts, Preliminary Optimization of Nginx and In-depth Optimization of Nginx (1), covered enough tuning to meet basic needs, but for a qualified operations engineer, mastering only those methods is clearly not enough. Hence this post, which takes Nginx optimization a step further.
Blog outline:
I. Install the nginx server
II. Nginx configuration optimization
(1) The number of worker processes run by Nginx
(2) Nginx event handling model
(3) Turn on the efficient transfer mode
(4) Connection timeout
(5) fastcgi tuning
(6) expires cache tuning
(7) Hotlink protection
(8) Kernel parameter optimization
I. Install the nginx server
Get the Nginx package
[root@localhost ~]# yum -y install pcre-devel zlib-devel openssl-devel    # install nginx dependencies
[root@localhost ~]# useradd -s /sbin/nologin -M nginx                     # create the nginx user
[root@localhost ~]# tar zxf nginx-1.14.0.tar.gz -C /usr/src
[root@localhost ~]# cd /usr/src/nginx-1.14.0/
[root@localhost nginx-1.14.0]# ./configure --prefix=/usr/local/nginx --user=nginx \
--group=nginx --with-http_dav_module --with-http_stub_status_module \
--with-http_addition_module --with-http_sub_module --with-http_flv_module \
--with-http_mp4_module --with-pcre --with-http_ssl_module \
--with-http_gzip_static_module && make && make install
# compile and install nginx; when building from the source package,
# ./configure --help gives a detailed description of the configuration options
Configuration options explained:
--with-http_dav_module: adds the PUT, DELETE, MKCOL (create collections), COPY and MOVE methods;
--with-http_stub_status_module: exposes Nginx status statistics;
--with-http_addition_module: an output filter supporting incomplete buffering and partial responses;
--with-http_sub_module: allows replacing some text in Nginx responses with other text;
--with-http_flv_module: supports flv video files;
--with-http_mp4_module: supports mp4 video files and provides pseudo-streaming server support;
--with-http_ssl_module: enables ngx_http_ssl_module.

[root@localhost ~]# ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/    # create a symbolic link
[root@localhost ~]# nginx -t    # check the Nginx configuration file syntax
[root@localhost ~]# nginx       # start Nginx
Several options commonly used for the nginx command:
-v: display version information;
-V: display version information and configure option parameters;
-t: test the configuration file for syntax errors;
-T: test the configuration file and dump it;
-q: suppress non-error messages during configuration testing;
-s (stop, quit, reopen, reload): send a signal to the master process;
-c: specify the configuration file;
-g: set global directives outside the configuration file.

II. Nginx configuration optimization

[root@localhost ~]# ps -ef | grep nginx    # list the processes spawned by Nginx
root    120790      1  0 22:49 ?      00:00:00 nginx: master process nginx
nginx   120791 120790  0 22:49 ?      00:00:00 nginx: worker process
root    120873   1928  0 22:57 pts/0  00:00:00 grep --color=auto nginx
# the third line is just the grep command itself and can be ignored
The listing shows that the worker process runs as the nginx user, while the master process runs as root. The master is the monitoring process, also called the Nginx main process; the worker processes do the actual work, and in some configurations there are also cache-related processes.
In other words, the master process acts as the administrator, while the worker processes are the ones that actually serve user requests.
(1) The number of worker processes run by Nginx
Suggestion: generally set this to the number of CPU cores, or the number of cores x 2.
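As a quick sketch of that rule of thumb (assuming a Linux host; the cap of 8 follows the recommendation in the configuration below), the value could be derived like this:

```shell
# Derive a worker_processes value: CPU count x 2, capped at 8
# (a sketch of the article's rule of thumb, assuming a Linux host).
cores=$(grep -c ^processor /proc/cpuinfo)
workers=$((cores * 2))
if [ "$workers" -gt 8 ]; then workers=8; fi
echo "worker_processes $workers;"
```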
[root@localhost ~]# cat /proc/cpuinfo | grep processor | wc -l
1
# this command shows how many CPUs the current server has
[root@localhost ~]# vim /usr/local/nginx/conf/nginx.conf    # edit the Nginx configuration file
worker_processes 2;          # number of worker processes; the number of CPUs or twice that is recommended, up to 8
worker_cpu_affinity 01 10;   # bind worker processes to CPUs (CPU affinity)
worker_rlimit_nofile 65535;  # maximum number of open files
[root@localhost ~]# ulimit -n
1024
# the system default limit is 1024 open files; besides the nginx configuration
# file, the system resource limit file must also be modified, as follows:
[root@localhost ~]# vim /etc/security/limits.conf
......    # omit some content
* soft nofile 65535    # soft limit on the number of open files
* hard nofile 65535    # hard limit on the number of open files
* soft noproc 65535    # soft limit on the number of processes
* hard noproc 65535    # hard limit on the number of processes
[root@localhost ~]# su -
Last login: Wed Dec  4 22:29:45 CST 2019 from 192.168.1.253 pts/0
[root@localhost ~]# ulimit -n
65535
# the limit is now 65535, proving that the modified file took effect
[root@localhost ~]# nginx -s reload    # reload the nginx service configuration file
[root@localhost ~]# ps -ef | grep nginx
root    120790      1  0 22:49 ?      00:00:00 nginx: master process nginx
nginx   121276 120790  0 23:21 ?      00:00:00 nginx: worker process
nginx   121277 120790  0 23:21 ?      00:00:00 nginx: worker process
root    121279 121226  0 23:22 pts/0  00:00:00 grep --color=auto nginx
# worker_processes is set to 2, so two worker processes were spawned

(2) Nginx event handling model

[root@localhost ~]# vim /usr/local/nginx/conf/nginx.conf
......    # omit some content
events {
    use epoll;
    worker_connections 65535;
    multi_accept on;
}
Configuration item explanation:
use epoll: makes Nginx use the epoll event model;
worker_connections: the maximum number of client connections allowed per worker process, generally set according to server performance and memory. The actual maximum number of clients is worker_processes multiplied by worker_connections. The 65535 we fill in here is more than enough; a site with that much concurrency would already count as a big site;
multi_accept: tells nginx to accept as many connections as possible after it receives a new-connection notification.

(3) Turn on the efficient transfer mode

[root@localhost ~]# vim /usr/local/nginx/conf/nginx.conf
......    # omit some content
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
Configuration item explanation:
include mime.types: media types; include is simply a directive that pulls the contents of another file into the current file;
default_type application/octet-stream: the default media type, which is sufficient;
sendfile on: enables efficient file transfer mode. The sendfile directive specifies whether nginx calls the sendfile() function to output files. Set it to on for common applications; for disk-I/O-heavy workloads such as downloads, it can be set to off to balance disk and network I/O and reduce system load.
Note: if images do not display properly, change this to off.
tcp_nopush on: effective only in sendfile mode; it prevents network congestion by actively reducing the number of network segments (it tells nginx to send all header files in one packet instead of one after another).

(4) Connection timeout
The main purpose is to protect server resources (CPU and memory) and control the number of connections, because establishing a connection also consumes resources.
[root@localhost ~]# vim /usr/local/nginx/conf/nginx.conf
......    # omit some content
http {
    keepalive_timeout 60;
    tcp_nodelay on;
    client_header_buffer_size 4k;
    open_file_cache max=102400 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 1;
    client_header_timeout 15;
    client_body_timeout 15;
    reset_timedout_connection on;
    send_timeout 15;
    server_tokens off;
    client_max_body_size 10m;
Configuration item explanation:
keepalive_timeout: how long a client keep-alive session stays open; after this timeout the server closes the connection;
tcp_nodelay: also helps prevent network congestion, but it is only effective when keep-alive is in use;
client_header_buffer_size 4k: buffer size for the client request header, which can be set according to your system's page size. A request header normally stays under 1k, but since the system page size is generally at least 1k, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE;
open_file_cache max=102400 inactive=20s: enables a cache for open files (disabled by default). max specifies the number of cache entries, which should match the number of open files; inactive is how long a file can go unrequested before its cache entry is removed;
open_file_cache_valid 30s: how often to check the validity of the cached information;
open_file_cache_min_uses 1: the minimum number of times a file must be used within the inactive window of the open_file_cache directive for its descriptor to stay open in the cache. For example, if a file is not used even once within the inactive time, it is removed;
client_header_timeout: timeout for the request header. This can be set fairly low; if no data arrives within this time, nginx returns a request-timeout error;
client_body_timeout: timeout for the request body. This can likewise be lowered; if no data is sent within this time, the same error as above is returned;
reset_timedout_connection on: tells nginx to close unresponsive client connections, freeing the memory those clients occupied;
send_timeout: timeout for responding to the client, limited to the time between two activities; if the client shows no activity within it, nginx closes the connection;
server_tokens off: does not make nginx run faster, but hides the nginx version number on error pages, which is good for security;
client_max_body_size: upload file size limit.

(5) fastcgi tuning

[root@localhost ~]# vim /usr/local/nginx/conf/nginx.conf
......    # omit some content
http {
    fastcgi_connect_timeout 600;
    fastcgi_send_timeout 600;
    fastcgi_read_timeout 600;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    fastcgi_temp_path /usr/local/nginx/nginx_tmp;
    fastcgi_intercept_errors on;
    fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;
Configuration item explanation:
Cache: write cache; Buffer: read cache;
fastcgi_connect_timeout 600: timeout for connecting to the backend FastCGI;
fastcgi_send_timeout 600: timeout for sending a request to FastCGI;
fastcgi_read_timeout 600: timeout for receiving a FastCGI reply;
fastcgi_buffer_size 64k: buffer size for reading the first part of the FastCGI reply. The default equals the size of one block in the fastcgi_buffers directive; it can be set smaller;
fastcgi_buffers 4 64k: how many buffers of what size are used locally to buffer FastCGI replies. If a php script generates a 256KB page, four 64KB buffers are allocated to cache it; if the page is larger than 256KB, the part beyond 256KB is cached in the path specified by fastcgi_temp_path. This is not ideal, since data is processed faster in memory than on disk. In general this value should sit around the typical page size produced by the php scripts on the site; if most pages are 256KB, you could set it to "8 32k", "4 64k", and so on;
fastcgi_busy_buffers_size 128k: recommended to be twice fastcgi_buffers; the buffers in use during busy periods;
fastcgi_temp_file_write_size 128k: how much data is written to fastcgi_temp_path at a time; the default is twice fastcgi_buffers. If this value is set too small, 502 Bad Gateway errors may be reported;
fastcgi_temp_path /usr/local/nginx/nginx_tmp: temporary cache directory;
fastcgi_intercept_errors on: specifies whether 4xx and 5xx error messages are passed to the client, or whether nginx is allowed to handle them with error_page;
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g: the fastcgi_cache directory. The directory hierarchy can be set; for example, levels=1:2 generates 16*256 subdirectories. cache_fastcgi is the name of this cache zone, and 128m is how much memory it uses (nginx keeps popular content directly in memory to improve access speed). inactive is the default expiry time: cached data not accessed within it is deleted. max_size is the maximum disk space the cache may use;
fastcgi_cache cache_fastcgi: enables FastCGI caching and assigns it a name. Enabling the cache is very useful: it effectively reduces CPU load and helps prevent 502 errors. cache_fastcgi is the cache name created by the fastcgi_cache_path directive;
fastcgi_cache_valid 200 302 1h: specifies cache times per response code; here, 200 and 302 responses are cached for one hour, to be used together with fastcgi_cache;
fastcgi_cache_valid 301 1d: caches 301 responses for one day;
fastcgi_cache_valid any 1m: caches other responses for one minute;
fastcgi_cache_min_uses 1: how many requests for the same URL are needed before it is cached;
fastcgi_cache_key http://$host$request_uri: sets the key for the web cache; nginx stores entries by the md5 hash of this key. It is generally composed of variables such as $host (domain name) and $request_uri (request path);
fastcgi_pass: specifies the listening address and port of the FastCGI server, which can be local or remote.
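To see how the cache-related directives above fit together, here is a minimal sketch of a php location; the backend address 127.0.0.1:9000 and the document root are assumptions for illustration, not part of the original configuration:

```nginx
# Hypothetical server block tying the fastcgi cache directives together.
server {
    listen 80;
    server_name www.benet.com;

    location ~ \.php$ {
        root /usr/local/nginx/html;                  # assumed document root
        fastcgi_pass 127.0.0.1:9000;                 # assumed php-fpm address
        fastcgi_index index.php;
        include fastcgi.conf;
        fastcgi_cache cache_fastcgi;                 # zone declared by fastcgi_cache_path
        fastcgi_cache_valid 200 302 1h;              # cache 200/302 replies for 1 hour
        fastcgi_cache_valid 301 1d;                  # cache 301 replies for 1 day
        fastcgi_cache_valid any 1m;                  # cache everything else for 1 minute
        fastcgi_cache_min_uses 1;                    # cache after the first request
        fastcgi_cache_key http://$host$request_uri;  # key hashed with md5
    }
}
```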
Summary:
nginx's caching features are proxy_cache and fastcgi_cache. proxy_cache caches content from backend servers, which may be anything, static or dynamic; fastcgi_cache caches content generated by fastcgi, in many cases dynamic content produced by php. The proxy_cache cache reduces the number of times nginx talks to the backend, saving transfer time and backend bandwidth; the fastcgi_cache cache reduces the number of exchanges between nginx and php, easing the pressure on php and the database (mysql).

(6) expires cache tuning
Caching mainly targets elements that rarely change, such as images, css and js; images in particular consume a lot of bandwidth. We can have the browser cache images locally for 365 days, and cache css, js and html for 10 days or more, so that a page loads slowly only the first time a user opens it and very quickly thereafter. When configuring the cache, we need to list the file extensions to be cached.
The Expires cache is configured in the server field. As follows:
location ~* \.(ico|jpe?g|gif|png|bmp|swf|flv)$ {
    expires 30d;          # cache for 30 days
    # log_not_found off;  # whether to log "file not found" errors in error_log
    access_log off;       # no access log
}
location ~* \.(js|css)$ {
    expires 7d;
    log_not_found off;
    access_log off;
}
Advantages of expire features:
expires can reduce the bandwidth a site has to purchase, saving cost while improving the user experience; it also lightens the load on the service, saving server cost. It is a very important feature of a web service.
Drawbacks: when a cached page or its data is updated, users may still see the old content, which instead hurts the user experience. Solutions: first, shorten the cache time, for example to 1 day; this is not a complete fix unless the update frequency is longer than 1 day. Second, rename the cached object.
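The second fix, renaming the cached object, is often done by embedding a content hash or version string in the filename so that an updated file gets a brand-new URL. A minimal sketch (the filenames are made up for illustration):

```shell
# Cache-busting sketch: version a stylesheet by its content hash,
# so browsers fetch the new name after any change (names are examples).
printf 'body { color: red; }\n' > style.css
hash=$(md5sum style.css | cut -c1-8)   # first 8 hex chars of the md5
cp style.css "style.${hash}.css"       # e.g. style.a1b2c3d4.css
ls style.*.css
```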
Content that the site does not want to be cached
1) Website traffic statistics tools
2) Frequently updated files (e.g. Google's logo)
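For such content, caching can be disabled explicitly. A hypothetical fragment (the /stats/ path is made up for illustration; it is not from the original configuration):

```nginx
# Hypothetical: mark frequently updated content as non-cacheable.
location ^~ /stats/ {
    expires -1;                           # an Expires header in the past
    add_header Cache-Control no-store;    # tell browsers not to cache at all
}
```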
(7) hotlink protection
In fact, the hotlink-protection principle of Nginx is exactly the same as that of Apache; only the configuration syntax differs slightly.
location ~* ^.+\.(jpg|gif|png|swf|flv|wma|wmv|asf|mp3|mmf|zip|rar)$ {
    valid_referers none blocked www.benet.com benet.com;
    if ($invalid_referer) {
        # return 302 http://www.benet.com/img/nolink.jpg;    # either redirect (302) to another page
        return 404;                                          # or return status code 404
        break;
    }
    access_log off;
}
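One caveat if the 302 variant is used: the placeholder image itself ends in .jpg and would match the protected pattern, causing a redirect loop. A hypothetical companion block can exempt it, since exact-match locations take precedence over regex locations in nginx:

```nginx
# Hypothetical: serve the placeholder image without the referer check,
# so the 302 target above does not trigger hotlink protection itself.
location = /img/nolink.jpg {
    root /usr/local/nginx/html;    # assumed document root
    access_log off;
}
```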
Very simple. The configuration is not verified here; readers unfamiliar with hotlink protection can refer to In-depth Optimization of Apache, which introduces the corresponding Apache configuration in detail!
(8) Kernel parameter optimization.
Write the required parameters into the /etc/sysctl.conf file, then run sysctl -p to make them take effect!
The commonly used parameters are:
fs.file-max = 999999: the maximum number of handles a process (such as a worker process) can open at the same time. This parameter indirectly limits the maximum number of concurrent connections and needs to be set according to the actual situation;
net.ipv4.tcp_max_tw_buckets = 6000: the maximum number of TIME_WAIT sockets the operating system allows; beyond this number, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; too many TIME_WAIT sockets slow down a web server.

Note: the side that actively closes a connection ends up with connections in the TIME_WAIT state;

net.ipv4.ip_local_port_range = 1024 65000: the range of ports the system is allowed to open;
net.ipv4.tcp_tw_recycle = 1: enables fast recycling of TIME_WAIT sockets;
net.ipv4.tcp_tw_reuse = 1: enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections. This makes sense for a server, since servers always carry a large number of TIME_WAIT connections;
net.ipv4.tcp_keepalive_time = 30: how often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; setting it lower clears dead connections more quickly;
net.ipv4.tcp_syncookies = 1: enables SYN cookies, so that overflow of the SYN wait queue is handled with cookies;
net.core.somaxconn = 40960: the backlog of the listen() call in a web application is capped by this kernel parameter, whose default is 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be adjusted.
Note: for a TCP connection, Server and Client need a three-way handshake to establish a network connection. When the three-way handshake is successful, we can see that the state of the port changes from LISTEN to ESTABLISHED, and then the data can be transmitted over the link. Each port in the Listen state has its own listening queue. The length of the listening queue is related to the somaxconn parameter and the listen () function in the program that uses the port.
somaxconn parameter: defines the maximum listen queue length per port in the system. It is a global parameter with a default of 128, which is too small for a high-load web environment that frequently handles new connections; for most environments, increasing it to 1024 or more is recommended. A large listen queue also helps against denial of service;
net.core.netdev_max_backlog = 262144: the maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them;
net.ipv4.tcp_max_syn_backlog = 262144: the maximum queue length for SYN requests received during the connection-establishment phase of the three-way handshake; the default is 1024. Setting it larger means Linux does not drop client connection requests when Nginx is too busy to accept new connections;
net.ipv4.tcp_rmem = 10240 87380 12582912: the minimum, default and maximum sizes of the TCP receive buffer (used for the TCP receive sliding window);
net.ipv4.tcp_wmem = 10240 87380 12582912: the minimum, default and maximum sizes of the TCP send buffer (used for the TCP send sliding window);
net.core.rmem_default = 6291456: the default size of the kernel socket receive buffer;
net.core.wmem_default = 6291456: the default size of the kernel socket send buffer;
net.core.rmem_max = 12582912: the maximum size of the kernel socket receive buffer;
net.core.wmem_max = 12582912: the maximum size of the kernel socket send buffer;
net.ipv4.tcp_syncookies = 1: unrelated to performance; used to defend against illegal TCP SYN attacks.
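A sketch of how a subset of the parameters above could be applied; the block only prints the settings, while in practice this text would be appended to /etc/sysctl.conf and loaded with sysctl -p (as root):

```shell
# A subset of the kernel settings discussed above; in practice, append
# this text to /etc/sysctl.conf and apply it with `sysctl -p` (as root).
settings='fs.file-max = 999999
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_syncookies = 1
net.core.somaxconn = 40960
net.core.netdev_max_backlog = 262144
net.ipv4.tcp_max_syn_backlog = 262144'
printf '%s\n' "$settings"
```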