2025-03-31 Update From: SLTechnology News&Howtos > Servers
Shulou(Shulou.com)06/03 Report--
1. Number of worker processes running in Nginx
The number of worker processes running in Nginx is generally set to the number of CPU cores, or the number of cores x 2. If you do not know the number of cores, you can press 1 after running the top command, or check the /proc/cpuinfo file: grep ^processor /proc/cpuinfo | wc -l
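As a quick sanity check, the core count can be read from /proc/cpuinfo exactly as the text above describes; a minimal sketch:

```shell
#!/bin/sh
# Count CPU cores; worker_processes is usually set to this value
# (or to twice it, as the text above suggests).
cores=$(grep -c ^processor /proc/cpuinfo)
echo "cores=$cores"
echo "suggested worker_processes: $cores or $((cores * 2))"
```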
[root@lx ~]# vi /usr/local/nginx1.10/conf/nginx.conf
worker_processes 4;
[root@lx ~]# /usr/local/nginx1.10/sbin/nginx -s reload
[root@lx ~]# ps aux | grep nginx | grep -v grep
root   9834  0.0  0.0 47556 1948 ?  Ss 22:36 0:00 nginx: master process nginx
www   10135  0.0  0.0 50088 2004 ?  S  22:58 0:00 nginx: worker process
www   10136  0.0  0.0 50088 2004 ?  S  22:58 0:00 nginx: worker process
www   10137  0.0  0.0 50088 2004 ?  S  22:58 0:00 nginx: worker process
www   10138  0.0  0.0 50088 2004 ?  S  22:58 0:00 nginx: worker process

2. Nginx CPU affinity
For example, 4-core configuration:
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
For example, 8-core configuration:
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
Enable at most 8 worker_processes; beyond 8 there is no further performance improvement and stability decreases, so 8 processes are enough.
3. Maximum number of files opened by Nginx worker_rlimit_nofile 65535
This directive sets the maximum number of file descriptors an nginx process can open. In theory the value should be the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep it consistent with the value of ulimit -n.
Note: the file resource limit can be set in /etc/security/limits.conf for individual users such as root, or with * for all users.
* soft nofile 65535
* hard nofile 65535
The change takes effect after the user logs in again (verify with ulimit -n).
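A quick sketch to verify the limits after re-login (ulimit is a shell builtin; -S and -H show the soft and hard values separately):

```shell
#!/bin/sh
# Print the soft and hard open-file limits for the current shell;
# after editing limits.conf and logging in again, both should read 65535.
echo "soft nofile: $(ulimit -Sn)"
echo "hard nofile: $(ulimit -Hn)"
```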
4. Nginx event handling model

events {
    use epoll;
    worker_connections 65535;
    multi_accept on;
}
Nginx adopts epoll event model and has high processing efficiency.
worker_connections is the maximum number of client connections allowed per worker process, generally set according to server performance and memory. The actual maximum is the number of worker processes multiplied by worker_connections.
In practice, filling in 65535 is plenty; these are concurrency values, and a website whose concurrency reaches that level is already a big site!
multi_accept tells nginx to accept as many connections as possible after receiving a new-connection notification (the default is actually off). The serial versus parallel wake-up described here is governed by the related accept_mutex directive: when it is on, worker processes accept connections serially, that is, only one worker is woken for a new connection while the others stay dormant; when it is off, all workers are woken and compete for the connection, and those that do not get one go back to sleep. When the server has a small number of connections, serial accept reduces the load to some extent, but when server throughput is very large you can change this for efficiency.
5. Turn on the efficient transmission mode

http {
    include mime.types;
    default_type application/octet-stream;
    ...
    sendfile on;
    tcp_nopush on;
    ...
}
Include mime.types: media type, include is just an instruction that contains the contents of another file in the current file.
Default_type application/octet-stream: the default media type is sufficient.
sendfile on: enables efficient file transfer mode. The sendfile directive specifies whether nginx calls the sendfile function to output files. Set it to on for common applications. For disk-IO-heavy applications such as downloading, it can be set to off to balance disk and network I/O processing speed and reduce system load. Note: if images do not display properly, change this to off.
tcp_nopush on: must be used together with sendfile; it helps prevent network congestion by actively reducing the number of network segments (the response header and the beginning of the body are sent together instead of one after another).

6. Connection timeout
The main purpose is to protect server resources, CPU, memory, and control the number of connections, because establishing connections also requires resource consumption.
keepalive_timeout 60;
tcp_nodelay on;
client_header_buffer_size 4k;
open_file_cache max=102400 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 1;
client_header_timeout 15;
client_body_timeout 15;
reset_timedout_connection on;
send_timeout 15;
server_tokens off;
client_max_body_size 10m;
keepalive_timeout: how long a client connection's keep-alive session persists; after this timeout the server closes the connection.
tcp_nodelay: also helps prevent network congestion, but it only takes effect on keepalive connections.
Client_header_buffer_size 4k: the buffer size of the client request header, which can be set according to the paging size of your system. Generally, the size of a request header will not exceed 1k, but since the paging of the system is generally larger than 1k, it is set to the paging size here. The page size can be obtained with the command getconf PAGESIZE.
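The page size mentioned above can be read directly, for example:

```shell
#!/bin/sh
# Read the system page size; client_header_buffer_size can be set to match.
page=$(getconf PAGESIZE)
echo "system page size: $page bytes"
```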
Open_file_cache max=102400 inactive=20s: this specifies cache for open files, which is not enabled by default. Max specifies the number of caches, which is recommended to be the same as the number of open files. Inactive refers to how long it takes to delete the cache after the file has not been requested.
Open_file_cache_valid 30s: this refers to how often the valid information in the cache is checked.
open_file_cache_min_uses 1: the minimum number of times a file must be used within the inactive period of the open_file_cache directive. If that count is reached, the file descriptor stays open in the cache; as in the example above, a file not used even once within the inactive time is removed.
Client_header_timeout: sets the timeout for the request header. We can also lower this setting, and if no data is sent beyond this time, nginx will return an error of request time out.
Client_body_timeout sets the timeout for the request body. We can also lower this setting and send no data beyond this time, with the same error prompt as above.
reset_timedout_connection: tells nginx to close unresponsive client connections. This frees the memory occupied by those clients.
Send_timeout: responds to the client timeout, which is limited to the time between two activities. If the client has no activity beyond this time, nginx closes the connection.
server_tokens: it does not make nginx run faster, but it hides the nginx version number on error pages, which is good for security.

client_max_body_size: the size limit for uploaded files.

7. FastCGI tuning

fastcgi_connect_timeout 600;
fastcgi_send_timeout 600;
fastcgi_read_timeout 600;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
fastcgi_temp_path /usr/local/nginx1.10/fastcgi_temp;
fastcgi_intercept_errors on;
fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;
Fastcgi_connect_timeout 600: specifies the timeout for connecting to the back-end FastCGI.
Fastcgi_send_timeout 600: the timeout for sending the request to the FastCGI.
Fastcgi_read_timeout 600: specifies the timeout for receiving FastCGI replies.
fastcgi_buffer_size 64k: specifies the buffer size for reading the first part of the FastCGI reply. By default the buffer size equals the size of one block set in the fastcgi_buffers directive; it can also be set smaller.
fastcgi_buffers 4 64k: specifies how many buffers of what size are needed locally to buffer FastCGI replies. If the page generated by a php script is 256KB, then 4 buffers of 64KB are allocated to cache it. If the page is larger than 256KB, the part beyond 256KB is cached in the path specified by fastcgi_temp_path, but this is not ideal, because memory is faster than disk. Generally this value should be the median page size generated by the php scripts on the site; if most pages are around 256KB, you can set it to "8 32k", "4 64k", and so on.
Fastcgi_busy_buffers_size 128k: it is recommended to set it to twice the size of fastcgi_buffers, buffer during busy hours.
fastcgi_temp_file_write_size 128k: how much data is written to fastcgi_temp_path at a time; the default is twice fastcgi_buffers. If this value is set too small, 502 Bad Gateway may be reported under heavy load.
Fastcgi_temp_path: cache the temporary directory.
Fastcgi_intercept_errors on: this directive specifies whether to pass 4xx and 5xx error messages to the client, or to allow nginx to use error_page to handle error messages. Note: if the static file does not exist, it will return 404 pages, but the php page will return blank pages!
fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g: the fastcgi_cache cache directory. The directory hierarchy can be set; for example, levels=1:2 generates 16*256 subdirectories. cache_fastcgi is the name of this cache space, and 128m is how much memory the cache zone uses (nginx keeps hot content in memory to improve access speed). inactive is the default expiry time; cached data not accessed within it is deleted. max_size is the maximum amount of disk space used.
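To see how levels=1:2 lays files out on disk: nginx names each cache file after the MD5 of the cache key, using the last hex character as the first-level directory and the two characters before it as the second level. A sketch (the key string is a made-up example):

```shell
#!/bin/sh
# Reproduce nginx's cache file layout for levels=1:2.
key="http://www.benet.com/index.php"               # hypothetical cache key
md5=$(printf '%s' "$key" | md5sum | awk '{print $1}')
l1=$(printf '%s' "$md5" | tail -c 1)               # last hex char -> level 1 dir
l2=$(printf '%s' "$md5" | tail -c 3 | head -c 2)   # next two chars -> level 2 dir
echo "cache file: /usr/local/nginx1.10/fastcgi_cache/$l1/$l2/$md5"
```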
fastcgi_cache cache_fastcgi: # turns on the FastCGI cache and gives it a name. Turning on caching is very useful; it can effectively reduce CPU load and prevent 502 errors. cache_fastcgi is the cache name created by the fastcgi_cache_path directive.
fastcgi_cache_valid 200 302 1h: # specifies the caching time per reply code; here, 200 and 302 responses are cached for one hour. To be used together with fastcgi_cache.
fastcgi_cache_valid 301 1d: caches 301 responses for one day.
Fastcgi_cache_valid any 1m: cache other replies for 1 minute.
Fastcgi_cache_min_uses 1: this directive sets how many requests the same URL will be cached.
fastcgi_cache_key http://$host$request_uri: sets the key of the web cache; nginx stores the cache entry under the md5 hash of this key. The key is generally composed of variables such as $host (domain name) and $request_uri (request path).
Fastcgi_pass: specify the listening port and address of the FastCGI server, which can be native or otherwise.
Summary:
The caching functions of nginx are: proxy_cache / fastcgi_cache
The role of proxy_cache is to cache the content of the back-end server, which can be anything, both static and dynamic.
The role of fastcgi_cache is to cache the content generated by fastcgi, in many cases dynamic content generated by php.
Proxy_cache cache reduces the number of times nginx communicates with the back-end, saving transmission time and back-end broadband.
fastcgi_cache caching reduces the number of communications between nginx and php, and reduces the pressure on php and the database (mysql).

8. Gzip tuning
Using gzip compression may save us bandwidth, speed up transmission, have a better experience, and save us costs, so this is a key point.
To enable compression, nginx needs the ngx_http_gzip_module module; Apache uses mod_deflate.
Generally the content we need to compress is text, js, html and css; images, video and flash are not compressed. Also note that using gzip consumes CPU!
gzip on;  # enable compression
gzip_min_length 1k;
gzip_buffers 4 32k;
gzip_http_version 1.1;
gzip_comp_level 6;
gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
gzip_vary on;
gzip_proxied any;
gzip_min_length 1k: sets the minimum number of bytes a page must have to be compressed. The byte count is taken from the Content-Length header. The default is 0, which compresses pages of any size; it is recommended to set it above 1K, since below 1K compression may actually make the result larger.
Gzip_buffers 4 32k: compression buffer size, which means that 4 units of 32K memory are requested as the compression result stream cache. The default value is to apply for memory space of the same size as the original data to store gzip compression results.
Gzip_http_version 1.1: compressed version, which is used to set and identify the HTTP protocol version. The default is 1.1. Most browsers already support GZIP decompression, so you can use the default.
gzip_comp_level 6: compression level, used to specify the GZIP compression ratio. Level 1 gives the smallest ratio and the fastest processing; level 9 gives the largest ratio, so transmission is fast, but processing is the slowest and consumes the most CPU.
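The trade-off is easy to observe with the gzip command line tool, whose -1 .. -9 levels correspond to gzip_comp_level; a small sketch on repetitive sample text:

```shell
#!/bin/sh
# Compare gzip level 1 and level 9 output sizes on sample text.
seq 1 10000 > /tmp/sample.txt
orig=$(wc -c < /tmp/sample.txt)
s1=$(gzip -c -1 /tmp/sample.txt | wc -c)
s9=$(gzip -c -9 /tmp/sample.txt | wc -c)
echo "original: $orig bytes, level 1: $s1 bytes, level 9: $s9 bytes"
rm -f /tmp/sample.txt
```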
Gzip_types text/css text/xml application/javascript: used to specify the type of compression, the 'text/html' type is always compressed. Default: gzip_types text/html (js/css files are not compressed by default)
Compression type, matching mime type for compression
You cannot use the wildcard character text/*
Text/html has been compressed by default (whether specified or not)
Refer to conf/mime.types to decide which text files to compress.

gzip_vary on: Vary header support; this option lets front-end cache servers cache GZIP-compressed pages, for example letting Squid cache nginx-compressed data.

9. Expires cache tuning
Caching mainly targets images, css, js and other elements that change rarely, especially images, which use the most bandwidth. We can have the browser cache images locally for 365 days, and cache css, js and html for 10 days or more; users load slowly only on the first visit, and from the second visit on it is very fast! For caching, we list the extensions to be cached; the Expires cache is configured in the server block.
location ~* \.(ico|jpe?g|gif|png|bmp|swf|flv)$ {
    expires 30d;
    log_not_found off;
    access_log off;
}

location ~* \.(js|css)$ {
    expires 7d;
    log_not_found off;
    access_log off;
}
Note: log_not_found off; controls whether errors for files that do not exist are recorded in error_log. The default is on.
Summary:
Advantages of the expires feature:
Expires can reduce the bandwidth purchased by websites and save costs.
At the same time, improve the user access experience
It reduces the pressure on the service and saves server cost, which is a very important function of a web service.
Drawbacks of the expires feature:
When the cached page or data is updated, the user may still see the old content, which will affect the user experience.
Solutions: first, shorten the cache time, for example to 1 day; this does not solve the problem completely unless the update frequency is longer than 1 day. Second, rename the cached object.
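The renaming approach is usually automated by embedding a content hash in the file name, so the URL changes exactly when the content does; a hypothetical sketch (the file name and path are made up):

```shell
#!/bin/sh
# Rename a css file to style.<hash>.css; with a long Expires time,
# any content change yields a new file name and thus a fresh URL.
f=/tmp/style.css
echo 'body { color: #333; }' > "$f"
hash=$(md5sum "$f" | awk '{print $1}' | head -c 8)
mv "$f" "/tmp/style.$hash.css"
echo "cache-busted name: /tmp/style.$hash.css"
rm -f "/tmp/style.$hash.css"
```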
Content that the site does not want to be cached:
Website traffic statistics tool
Frequently updated files (such as google's logo)

10. Hotlink protection
To prevent others from linking directly to images and other resources on your website, consuming your resources and network traffic, the solutions are as follows:
Watermarking: promotes your brand, suitable if your bandwidth and servers are sufficient
Firewall: direct control, as long as you know the source IP addresses
The following hotlink protection strategy simply returns a 404 error:

location ~* ^.+\.(jpg|gif|png|swf|flv|wma|wmv|asf|mp3|mmf|zip|rar)$ {
    valid_referers none blocked www.benet.com benet.com;
    if ($invalid_referer) {
        # return 302 https://cache.yisu.com/upload/information/20200218/28/1413.jpg;
        return 404;
        break;
    }
    access_log off;
}
Parameters can be in the following form:
none: means the Referer header is absent (i.e. empty, direct access, such as opening an image directly in the browser).

blocked: means the Referer header is present but has been camouflaged by a firewall or proxy, such as "Referer: XXXXXXX".

server_names: a list of one or more server names; the "*" wildcard can be used in names since version 0.5.33.

11. Kernel parameter optimization
fs.file-max = 999999: the maximum number of handles a process (such as a worker process) can open at the same time. This parameter indirectly limits the maximum number of concurrent connections and needs to be configured according to the actual situation.
Net.ipv4.tcp_max_tw_buckets = 6000: this parameter represents the maximum number of TIME_WAIT sockets allowed by the operating system, and if this number is exceeded, the TIME_WAIT socket will be cleared immediately and a warning will be printed. This parameter defaults to 180000, and too many TIME_WAIT sockets will slow down the Web server. Note: actively closing the server side of the connection will result in a connection in TIME_WAIT state
Net.ipv4.ip_local_port_range = 1024 65000: the range of ports that the system is allowed to open.
net.ipv4.tcp_tw_recycle = 1: enables fast recycling of TIME_WAIT sockets. (Note: this option misbehaves behind NAT and was removed entirely in Linux 4.12.)
Net.ipv4.tcp_tw_reuse = 1: enable reuse. Allows TIME-WAIT sockets to be reused for new TCP connections. This makes sense for the server because there are always a large number of TIME-WAIT-state connections on the server.
Net.ipv4.tcp_keepalive_time = 30: this parameter indicates how often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours, and if you set it smaller, you can clean up invalid connections more quickly.
Net.ipv4.tcp_syncookies = 1: enable SYN Cookies. When a SYN waiting queue overflow occurs, enable cookies to handle it.
net.core.somaxconn = 40960: the backlog of the listen() call in web applications is capped by this kernel parameter, which defaults to 128, while NGX_LISTEN_BACKLOG defined by nginx defaults to 511, so this value needs to be raised. Note: for a TCP connection, server and client complete a three-way handshake to establish the connection; when it succeeds, the port's state changes from LISTEN to ESTABLISHED and data can then be transmitted over the link. Each port in the LISTEN state has its own listen queue, whose length is related to somaxconn and to the backlog argument of the listen() call in the program using the port. somaxconn defines the maximum listen queue length per port in the system; it is a global parameter with a default of 128, which is too small for a high-load web environment that frequently handles new connections. For most environments it is recommended to raise it to 1024 or more. A large listen queue also helps resist DoS denial-of-service attacks.
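On Linux the current value can be read from /proc (or with sysctl -n net.core.somaxconn); a quick check:

```shell
#!/bin/sh
# Read the current somaxconn limit that caps nginx's listen backlog.
v=$(cat /proc/sys/net/core/somaxconn)
echo "net.core.somaxconn = $v"
```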
Net.core.netdev_max_backlog = 262144: the maximum number of packets allowed to be sent to the queue when each network interface receives packets faster than the kernel processes them.
Net.ipv4.tcp_max_syn_backlog = 262144: this parameter indicates the maximum length of the queue for SYN requests accepted by TCP during the establishment phase of the three-way handshake. Default is 1024. Setting it larger can prevent Linux from losing connection requests initiated by clients when Nginx is too busy to accept new connections.
net.ipv4.tcp_rmem = 10240 87380 12582912: defines the minimum, default, and maximum values of the TCP receive buffer (for the TCP receive sliding window).
net.ipv4.tcp_wmem = 10240 87380 12582912: defines the minimum, default, and maximum values of the TCP send buffer (for the TCP send sliding window).
net.core.rmem_default = 6291456: the default size of the kernel socket receive buffer.
net.core.wmem_default = 6291456: the default size of the kernel socket send buffer.
net.core.rmem_max = 12582912: the maximum size of the kernel socket receive buffer.
net.core.wmem_max = 12582912: the maximum size of the kernel socket send buffer.
net.ipv4.tcp_syncookies = 1: this parameter is unrelated to performance; it is used to defend against TCP SYN flood attacks.
A complete kernel optimization setting is posted below:
fs.file-max = 999999
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 40960
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
Execute sysctl -p to make the kernel changes take effect.
12. Optimization of the number of system connections
The default open files value on linux is 1024. View the current system value:

# ulimit -n
1024

This means the server only allows 1024 files to be open at the same time.
Use ulimit -a to view all limits for the current system, and ulimit -n to view the current maximum number of open files.
A fresh linux install defaults to only 1024, so a heavily loaded server easily hits "error: too many open files". The limit therefore needs to be raised by adding at the end of /etc/security/limits.conf:
* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
© 2024 shulou.com SLNews company. All rights reserved.