2025-02-24 Update From: SLTechnology News&Howtos
(1) worker_processes: the number of worker processes nginx runs. Generally set it to the number of CPU cores, or cores x 2.
If you do not know the number of CPU cores, press 1 after running the top command, or check the /proc/cpuinfo file: grep ^processor /proc/cpuinfo | wc -l
[root@lx ~]# vi /usr/local/nginx1.10/conf/nginx.conf
worker_processes 4;
[root@lx ~]# /usr/local/nginx1.10/sbin/nginx -s reload
[root@lx ~]# ps aux | grep nginx | grep -v grep
root  9834  0.0 0.0 47556 1948 ?  Ss 22:36 0:00 nginx: master process nginx
www  10135  0.0 0.0 50088 2004 ?  S  22:58 0:00 nginx: worker process
www  10136  0.0 0.0 50088 2004 ?  S  22:58 0:00 nginx: worker process
www  10137  0.0 0.0 50088 2004 ?  S  22:58 0:00 nginx: worker process
www  10138  0.0 0.0 50088 2004 ?  S  22:58 0:00 nginx: worker process
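As a quick sanity check, the core count can be read from the shell; a minimal sketch (note that nginx 1.3.8+ also accepts worker_processes auto; to size this automatically):

```shell
# Count CPU cores from /proc/cpuinfo and suggest a worker_processes value.
cores=$(grep -c ^processor /proc/cpuinfo)
echo "detected $cores cores"
echo "worker_processes $cores;"
```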
Nginx worker CPU affinity
For example, a 4-core configuration:
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
For example, an 8-core configuration:
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
Enable at most 8 worker_processes: beyond 8 the performance does not improve further and stability decreases, so 8 processes are generally enough.
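The one-hot masks above follow a simple pattern, so they can be generated for any core count. This is a hypothetical helper script, not an nginx feature (nginx 1.9.10+ also supports worker_cpu_affinity auto;):

```shell
# Build worker_cpu_affinity bitmasks: worker i gets a mask with only bit i set.
n=4   # number of cores/workers
masks=""
for i in $(seq 0 $((n - 1))); do
  mask=""
  for j in $(seq $((n - 1)) -1 0); do
    # Emit a 1 at position i (counting from the right), 0 elsewhere.
    if [ "$j" -eq "$i" ]; then mask="${mask}1"; else mask="${mask}0"; fi
  done
  masks="$masks $mask"
done
echo "worker_cpu_affinity${masks};"   # worker_cpu_affinity 0001 0010 0100 1000;
```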
The maximum number of files nginx can open
worker_rlimit_nofile 65535;
This directive sets the maximum number of file descriptors an nginx process may open. The theoretical value is the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep it consistent with the value of ulimit -n.
Note:
File resource limits can be set in /etc/security/limits.conf, either for individual users such as root, or with * for all users.
* soft nofile 65535
* hard nofile 65535
The change takes effect after the user logs in again (check with ulimit -n).
(2) Nginx event handling model
events {
use epoll;
worker_connections 65535;
multi_accept on;
}
Nginx uses the epoll event model, which is highly efficient.
worker_connections is the maximum number of client connections allowed per worker process, generally set according to server performance and memory. The actual maximum number of connections is worker_processes multiplied by worker_connections.
In practice, filling in 65535 is plenty; these are concurrency values, and a website whose concurrency actually reaches such a number is already a big site!
multi_accept tells nginx to accept as many connections as possible after receiving a new connection notification; the default is off. With accept serialization on, worker processes handle new connections serially: only one worker is woken for a connection while the others stay asleep. With it off, workers contend in parallel: a new connection wakes all workers, which compete until it is assigned, and those that do not get it go back to sleep. When the server has few connections, turning this parameter on reduces load to some extent, but when server throughput is very large you can turn it off for efficiency.
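The connection math from the paragraph above, as shell arithmetic (keep in mind that as a reverse proxy each client request also consumes an upstream connection, so the practical ceiling is roughly half the theoretical one):

```shell
worker_processes=4
worker_connections=65535
# Theoretical maximum concurrent clients = workers * connections per worker.
max_clients=$((worker_processes * worker_connections))
echo "$max_clients"   # 262140
```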
(3) turn on efficient transmission mode
http {
include mime.types;
default_type application/octet-stream;
……
sendfile on;
tcp_nopush on;
……
include mime.types; # media types; include is simply a directive that inlines the contents of another file into the current file
default_type application/octet-stream; # the default media type is sufficient
sendfile on; # enables efficient file transfer mode. The sendfile directive specifies whether nginx calls the sendfile() system call to output files. For ordinary applications, set it to on. For disk-I/O-heavy applications such as download services, it can be set to off to balance disk and network I/O processing speed and reduce system load.
Note: if images do not display properly, change this to off.
tcp_nopush on; # must be used together with sendfile. It prevents network congestion by actively reducing the number of network segments (the response header and the beginning of the body are sent together instead of one after another).
(4) connection timeout
The main purpose is to protect server resources (CPU, memory) and control the number of connections, because establishing a connection also consumes resources.
keepalive_timeout 60;
tcp_nodelay on;
client_header_buffer_size 4k;
open_file_cache max=102400 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 1;
client_header_timeout 15;
client_body_timeout 15;
reset_timedout_connection on;
send_timeout 15;
server_tokens off;
client_max_body_size 10m;
keepalive_timeout: how long an idle keep-alive session with a client is held open; after this timeout the server closes the connection.
tcp_nodelay is also used to prevent network congestion, but it is only effective for keep-alive connections.
client_header_buffer_size 4k;
The buffer size for the client request header can be set according to your system's page size. A request header generally does not exceed 1k, but since the system page size is generally larger than 1k, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.
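To check the page size mentioned above on a given machine (on most x86-64 Linux systems this prints 4096, i.e. 4k):

```shell
# Query the memory page size; a reasonable client_header_buffer_size baseline.
page=$(getconf PAGESIZE)
echo "page size: ${page} bytes"
```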
open_file_cache max=102400 inactive=20s;
This enables the open-file cache, which is off by default. max specifies the number of cache entries; it is advisable to match it to the number of open files. inactive specifies after how long without a request a file's cache entry is deleted.
open_file_cache_valid 30s;
This specifies how often the validity of the cached entries is checked.
open_file_cache_min_uses 1;
The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay open in the cache. As in the example above, if a file is not used even once within the inactive time, it is removed.
client_header_timeout sets the timeout for the request header. This can be set lower; if no data is sent within this time, nginx returns a request time out error.
client_body_timeout sets the timeout for the request body. This can also be lowered; if no data is sent within this time, the same error is returned.
reset_timedout_connection tells nginx to close unresponsive client connections, freeing the memory those clients occupy.
send_timeout is the timeout for responding to the client, measured between two write operations; if the client takes no action within this time, nginx closes the connection.
server_tokens does not make nginx run faster, but it hides the nginx version number in error pages, which is good for security.
client_max_body_size limits the size of uploaded files.
(5) fastcgi tuning
fastcgi_connect_timeout 600;
fastcgi_send_timeout 600;
fastcgi_read_timeout 600;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
fastcgi_temp_path /usr/local/nginx1.10/nginx_tmp;
fastcgi_intercept_errors on;
fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;
fastcgi_connect_timeout 600; # the timeout for connecting to the back-end FastCGI server.
fastcgi_send_timeout 600; # the timeout for sending a request to FastCGI.
fastcgi_read_timeout 600; # the timeout for receiving a FastCGI response.
fastcgi_buffer_size 64k; # the buffer size used to read the first part of the FastCGI response. By default it equals the size of one block in the fastcgi_buffers directive; it can be set to a smaller value.
fastcgi_buffers 4 64k; # how many buffers of what size are needed locally to buffer FastCGI responses. If a PHP script generates a page of 256KB, 4 buffers of 64KB are allocated to cache it; if the page is larger than 256KB, the part above 256KB is cached in the path specified by fastcgi_temp_path, which is not ideal because memory is faster than disk. Generally this value should be around the median page size produced by the site's PHP scripts; if most pages are around 256KB, you can set it to "8 32k", "4 64k", and so on.
fastcgi_busy_buffers_size 128k; # recommended to be twice fastcgi_buffers; the buffer size used at busy times.
fastcgi_temp_file_write_size 128k; # how much data is written to fastcgi_temp_path at a time; the default is twice fastcgi_buffers. If this value is too small, 502 Bad Gateway may be reported under load.
fastcgi_temp_path # the temporary cache directory.
fastcgi_intercept_errors on; # specifies whether 4xx and 5xx error messages are passed to the client, or nginx handles them with error_page.
Note: if a static file does not exist, a 404 page is returned, but a missing PHP page returns a blank page!
fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g # the fastcgi_cache directory. levels sets the directory hierarchy; for example, levels=1:2 generates 16*256 subdirectories. cache_fastcgi is the name of this cache zone, and 128m is how much shared memory it uses (nginx keeps hot content keys in this memory to improve access speed). inactive is the default expiration time: cached data not accessed within it is deleted. max_size is the maximum disk space used.
fastcgi_cache cache_fastcgi; # turns on the FastCGI cache and names it. Caching is very useful: it can effectively reduce CPU load and prevent 502 errors. cache_fastcgi is the zone name created by the fastcgi_cache_path directive.
fastcgi_cache_valid 200 302 1h; # specifies the cache time per response code: here, 200 and 302 responses are cached for one hour. Used together with fastcgi_cache.
fastcgi_cache_valid 301 1d; # caches 301 responses for one day.
fastcgi_cache_valid any 1m; # caches all other responses for 1 minute.
fastcgi_cache_min_uses 1; # how many times the same URL must be requested before it is cached.
fastcgi_cache_key http://$host$request_uri; # sets the key for the web cache. nginx stores entries by the md5 hash of the key, which is generally combined from variables such as $host (domain) and $request_uri (request path).
fastcgi_pass # the listening address and port of the FastCGI server, which can be local or remote.
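Putting the pieces together, here is a sketch of a PHP location block that uses the cache defined above; the php-fpm address 127.0.0.1:9000 and the include file name are assumptions to adapt to your setup:

```nginx
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;           # assumed php-fpm address
    fastcgi_index index.php;
    include fastcgi.conf;
    fastcgi_cache cache_fastcgi;           # zone from fastcgi_cache_path above
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_key http://$host$request_uri;
}
```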
Summary:
The caching functions of nginx are proxy_cache and fastcgi_cache.
proxy_cache caches back-end server content, which can be anything, both static and dynamic.
fastcgi_cache caches content generated by FastCGI, in many cases dynamic content generated by PHP.
proxy_cache reduces the number of round trips between nginx and the back end, saving transmission time and back-end bandwidth.
fastcgi_cache reduces the number of round trips between nginx and PHP, reducing the pressure on PHP and the database (MySQL).
(6) gzip tuning
Using gzip compression can save us bandwidth, speed up transmission, give a better experience, and cut costs, so this is a key point.
For nginx to enable compression, the ngx_http_gzip_module module is required; Apache uses mod_deflate.
The content we generally need to compress is text: js, html, css. Images, video, and flash are not compressed. Also note that using the gzip function costs CPU!
gzip on;
gzip_min_length 2k;
gzip_buffers 4 32k;
gzip_http_version 1.1;
gzip_comp_level 6;
gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
gzip_vary on;
gzip_proxied any;
gzip on; # enables compression
gzip_min_length 1k; # the minimum page size, in bytes, eligible for compression, taken from the Content-Length response header. The default is 0, which compresses pages of any size. It is recommended to set it above 1k: compressing pages smaller than 1k may make them larger.
gzip_buffers 4 32k; # compression buffer size: request 4 units of 32k memory to buffer the compression result stream. The default is to request memory of the same size as the original data to store the gzip result.
gzip_http_version 1.1; # sets the HTTP protocol version for compression; the default is 1.1. Most browsers already support gzip decompression, so the default is fine.
gzip_comp_level 6; # compression level, from 1 to 9: 1 has the lowest ratio and the fastest processing; 9 has the highest ratio but is slow to process and consumes CPU resources.
gzip_types text/css text/xml application/javascript; # specifies the types to compress; the text/html type is always compressed.
Default: gzip_types text/html (js/css files are not compressed by default)
# Compression type: compress by matching MIME type
# The wildcard text/* cannot be used
# text/html is compressed by default (whether specified or not)
# For which text file types can be compressed, refer to conf/mime.types
gzip_vary on; # Vary header support. This option lets front-end cache servers cache gzip-compressed pages, for example using Squid to cache nginx-compressed data.
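As a rough illustration of why text compresses so well, the following uses the gzip command-line tool rather than nginx itself:

```shell
# Build a repetitive text file and compare raw vs gzip-compressed size.
printf 'hello nginx %s\n' $(seq 1 1000) > /tmp/sample.txt
raw=$(wc -c < /tmp/sample.txt)
gzip -c /tmp/sample.txt > /tmp/sample.txt.gz
gz=$(wc -c < /tmp/sample.txt.gz)
echo "raw=$raw bytes, gzipped=$gz bytes"
```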
(7) expires cache tuning
Caching is mainly for elements that rarely change, such as images, css, and js. Images in particular take up a lot of bandwidth; we can have the browser cache them locally for 365d, and cache js, css, and html for 10 days or more. That way the first visit loads slowly, but later visits are fast! When caching, we need to list the file extensions to cache. Expires caching is configured inside the server block.
location ~* \.(ico|jpe?g|gif|png|bmp|swf|flv)$ {
expires 30d;
# log_not_found off;
access_log off;
}
location ~* \.(js|css)$ {
expires 7d;
log_not_found off;
access_log off;
}
Note: log_not_found off; controls whether errors for files that do not exist are recorded in error_log. The default is on.
Summary:
Advantages of expires: (1) it reduces the bandwidth the site must purchase, saving costs; (2) it improves the user's access experience; (3) it reduces pressure on the service and saves server cost. It is a very important feature of a web service. Disadvantage of expires: when a cached page or file is updated, users may still see the old content, which affects the user experience. Solutions: first, shorten the cache time, for example to 1 day; this does not solve the problem completely unless the update frequency is longer than 1 day. Second, rename the cached object.
Content the site should not cache: (1) site traffic statistics tools; (2) frequently updated files (e.g. Google's logo).
(8) hotlink protection
To prevent others from linking directly to images and other files on your website, consuming your resources and network traffic, the solutions are as follows: 1. watermarking for brand promotion, if your bandwidth and servers are sufficient; 2. a firewall with direct control, provided you know the source IPs; 3. a hotlink-protection policy. The method below simply returns a 404 error.
location ~* ^.+\.(jpg|gif|png|swf|flv|wma|asf|mp3|mmf|zip|rar)$ {
valid_referers none blocked www.benet.com benet.com;
if ($invalid_referer) {
# return 302 https://cache.yisu.com/upload/information/20200309/32/43497.jpg;
return 404;
break;
}
access_log off;
}
The parameters can take the following forms:
none matches requests with no Referer header at all (direct access, such as opening an image URL directly in the browser).
blocked matches a Referer header that exists but has been stripped or disguised by a firewall or proxy, such as "Referer: XXXXXXX" (values without a valid http:// prefix).
server_names is a list of one or more server names; the "*" wildcard can be used in names since version 0.5.33.
(9) Kernel parameter optimization
fs.file-max = 999999 # the maximum number of file handles the system as a whole may have open concurrently (a system-wide limit, not per process). This parameter directly bounds the maximum number of concurrent connections and needs to be configured according to the actual situation.
net.ipv4.tcp_max_tw_buckets = 6000 # the maximum number of TIME_WAIT sockets the operating system allows; beyond this number, TIME_WAIT sockets are immediately cleared and a warning message is printed. The default is 180000; too many TIME_WAIT sockets slow down a web server.
Note: the side that actively closes a connection ends up with a connection in the TIME_WAIT state.
net.ipv4.ip_local_port_range = 1024 65000 # the range of local ports the system is allowed to use.
net.ipv4.tcp_tw_recycle = 1 # enables fast recycling of TIME_WAIT sockets (note: problematic behind NAT, and removed entirely in Linux 4.12).
net.ipv4.tcp_tw_reuse = 1 # enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections. This makes sense for a server, which always has a large number of TIME_WAIT connections.
net.ipv4.tcp_keepalive_time = 30 # how often TCP sends keepalive probes when keepalive is enabled. The default is 2 hours; a smaller value cleans up dead connections more quickly.
net.ipv4.tcp_syncookies = 1 # enables SYN cookies: when the SYN wait queue overflows, cookies are used to handle the overflow.
net.core.somaxconn = 40960 # backs the backlog of the listen() call in web applications. The kernel default of net.core.somaxconn is 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
Note: for a TCP connection, server and client perform a three-way handshake to establish the connection; once it succeeds the connection enters the ESTABLISHED state and data can be transmitted over the link. Every port in the LISTEN state has its own listen queue, whose length depends on the somaxconn parameter and the backlog passed to listen() by the program using the port.
somaxconn defines the maximum listen queue length per port in the system. It is a global parameter with a default value of 128. For a high-load web environment that frequently handles new connections, 128 is too small; for most environments, 1024 or more is recommended. A large listen queue also helps mitigate DoS (denial of service) attacks.
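The current value can be inspected without root via /proc; a read-only check (the tuned value only matters if it is at least as large as nginx's listen backlog of 511):

```shell
# Read the live somaxconn value from the /proc interface.
somaxconn=$(cat /proc/sys/net/core/somaxconn)
echo "net.core.somaxconn = $somaxconn"
```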
net.core.netdev_max_backlog = 262144 # the maximum number of packets queued per network interface when packets arrive faster than the kernel can process them.
net.ipv4.tcp_max_syn_backlog = 262144 # the maximum queue length for SYN requests accepted by TCP during the handshake establishment phase. The default is 1024; a larger value keeps Linux from dropping connection requests from clients when nginx is too busy to accept new connections.
net.ipv4.tcp_rmem = 10240 87380 12582912 # the minimum, default, and maximum size of the TCP receive buffer (used for the TCP receive sliding window).
net.ipv4.tcp_wmem = 10240 87380 12582912 # the minimum, default, and maximum size of the TCP send buffer (used for the TCP send sliding window).
net.core.rmem_default = 6291456 # the default size of the kernel socket receive buffer.
net.core.wmem_default = 6291456 # the default size of the kernel socket send buffer.
net.core.rmem_max = 12582912 # the maximum size of the kernel socket receive buffer.
net.core.wmem_max = 12582912 # the maximum size of the kernel socket send buffer.
net.ipv4.tcp_syncookies = 1 # unrelated to performance; used to defend against TCP SYN flood attacks.
A complete set of kernel optimization settings:
fs.file-max = 999999
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 40960
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
Run sysctl -p to make the kernel changes take effect.
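After sysctl -p, any key can be read back through /proc to confirm it took effect; keys map to paths by replacing dots with slashes. A minimal read-only sketch, no root required:

```shell
# Map a sysctl key to its /proc path and read the live value.
key="net.ipv4.tcp_fin_timeout"               # any sysctl key
path="/proc/sys/$(echo "$key" | tr . /)"
value=$(cat "$path")
echo "$key = $value"
```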
(10) Optimization of the number of system connections:
Linux's default open-files limit is 1024:
# ulimit -n
1024
This means the server only allows 1024 files to be opened at the same time.
Use ulimit -a to view all limits of the current system, and ulimit -n to view the current maximum number of open files.
A fresh Linux install defaults to only 1024, so a heavily loaded server easily hits "error: too many open files". The limit therefore needs to be raised.
At the end of /etc/security/limits.conf, add:
* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
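After logging in again, the new limits can be verified from the shell (ulimit may print "unlimited", so the check is kept loose):

```shell
# Show the soft and hard open-files limits for the current session.
nofile=$(ulimit -n)
hard=$(ulimit -Hn)
echo "soft nofile: $nofile, hard nofile: $hard"
```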