2025-02-24 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report
I. Optimization before compilation and installation
Pre-compilation optimization mainly means editing the source code before building, so that the software name and version number are hidden from clients.
Download the required source packages I provided: https://pan.baidu.com/s/1tyS3GL0W2kcQGsdfwc3B1w
Extraction code: cs23
1. Start the installation:
[root@nginx ~]# yum -y erase httpd        # uninstall the system's default httpd service to prevent port conflicts
[root@nginx ~]# yum -y install openssl-devel pcre-devel    # install dependencies
[root@nginx src]# rz                      # upload the required source packages with the rz command
[root@nginx src]# ls                      # confirm the uploaded source packages
nginx-sticky-module.zip  nginx-1.14.0.tar.gz  ngx_cache_purge-2.3.tar.gz
# extract the uploaded source packages:
[root@nginx src]# tar zxf nginx-1.14.0.tar.gz
[root@nginx src]# unzip nginx-sticky-module.zip
[root@nginx src]# tar zxf ngx_cache_purge-2.3.tar.gz
[root@nginx src]# cd nginx-1.14.0/        # enter the nginx source directory
[root@nginx nginx-1.14.0]# vim src/core/nginx.h    # modify the following two lines:
#define NGINX_VERSION      "6.6"                   # this is the faked nginx version number
#define NGINX_VER          "IIS/" NGINX_VERSION    # this is the software name; I changed the original nginx to IIS
# save and exit after the modification
[root@nginx nginx-1.14.0]# vim src/http/ngx_http_header_filter_module.c    # edit this file
# before the modification (line 49):
static u_char ngx_http_server_string[] = "Server: nginx" CRLF;
# after the modification ("IIS" should be consistent with the previous file's change):
static u_char ngx_http_server_string[] = "Server: IIS" CRLF;
# save and exit after the changes are completed
[root@nginx nginx-1.14.0]# vim src/http/ngx_http_special_response.c    # modify this file to prevent error pages from echoing nginx and its version number
# before the modification (around line 36; there is a very similar line just above it -- note the one to change has no word "build" in its name):
static u_char ngx_http_error_tail[] =
"<hr><center>nginx</center>" CRLF
"</body>" CRLF
"</html>" CRLF
;
# after the modification, the original nginx is changed to IIS:
static u_char ngx_http_error_tail[] =
"<hr><center>IIS</center>" CRLF
"</body>" CRLF
"</html>" CRLF
;
# save and exit after the change is completed
[root@nginx nginx-1.14.0]# useradd -M -s /sbin/nologin www    # create the user nginx runs as
[root@nginx nginx-1.14.0]# ./configure --prefix=/usr/local/nginx1.14 --user=www --group=www \
--with-http_stub_status_module --with-http_realip_module --with-http_ssl_module \
--with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client \
--http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fcgi \
--with-pcre --add-module=/usr/src/ngx_cache_purge-2.3 --with-http_flv_module \
--add-module=/usr/src/nginx-sticky-module --with-http_dav_module --with-http_addition_module \
--with-http_sub_module --with-http_mp4_module && make && make install    # compile and install
[root@nginx nginx-1.14.0]# ln -sf /usr/local/nginx1.14/sbin/nginx /usr/local/sbin/    # create a soft link for the nginx command
[root@nginx nginx-1.14.0]# mkdir -p /var/tmp/nginx/client    # create the directory for temporary files
[root@nginx nginx-1.14.0]# nginx -t     # check the configuration file
[root@nginx nginx-1.14.0]# nginx        # after the check reports no errors, start nginx
[root@nginx nginx-1.14.0]# netstat -anput | grep 80    # make sure port 80 is listening
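If you rebuild often, the interactive vim edits above can be scripted. The sketch below uses sed to apply the same name substitution; the sample line is copied from the nginx 1.14.0 sources, and the `IIS` string matches the value chosen above. This is only a sketch -- verify the resulting lines before compiling.

```shell
# Apply the server-name substitution non-interactively (sketch).
# Demonstrated here on a sample line from src/http/ngx_http_header_filter_module.c.
sample='static u_char ngx_http_server_string[] = "Server: nginx" CRLF;'

# Replace the advertised server name, as done by hand above:
patched=$(printf '%s\n' "$sample" | sed 's/Server: nginx/Server: IIS/')
echo "$patched"

# Against a real source tree one would run, for example:
#   sed -i 's/Server: nginx/Server: IIS/' src/http/ngx_http_header_filter_module.c
```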
Verify from the command line that the software name and version number in the HTTP header have changed:
[root@nginx conf]# curl -I 127.0.0.1           # request the local server
HTTP/1.1 200 OK
Server: IIS/6.6                                # OK, the name and version have been changed
Date: Fri, 25 Oct 2019 00:10:17 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Fri, 25 Oct 2019 00:03:52 GMT
Connection: keep-alive
ETag: "5db23be8-2019"
Accept-Ranges: bytes
[root@nginx conf]# curl -I 127.0.0.1/a.html    # request a page that does not exist
HTTP/1.1 404 Not Found
Server: IIS/6.6                                # the error-page header is also changed, OK
Date: Fri, 25 Oct 2019 00:11:07 GMT
Content-Type: text/html
Content-Length: 164
Connection: keep-alive
II. Optimization of Nginx configuration items
Nginx is a master/worker structure: a master process that generates one or more worker processes. The schematic diagram is as follows:
The master-worker design pattern has two components: the master maintains the worker queue and dispatches requests to the workers, while the workers perform the actual processing and return results to the master.
The advantage of this process model is isolation: the processes are independent, so they do not affect one another. If one process exits, the others keep working and the service is not interrupted while the master quickly starts a new worker process. An exception in a worker only fails the requests currently on that worker, without affecting the other worker processes.
All of the following configuration parameters are written in the global section of the Nginx configuration file.
1. Adjust the number of working processes running in Nginx
[root@nginx ~]# vim /usr/local/nginx1.14/conf/nginx.conf   # edit the main configuration file
worker_processes  4;     # generally set to the number of CPU cores, or cores x 2
[root@nginx ~]# nginx -s reload    # reload the Nginx service
[root@nginx ~]# ps -ef | grep nginx | grep worker   # view the worker processes now running
www   128282   6761  0 11:41 ?   00:00:00 nginx: worker process
www   128283   6761  0 11:41 ?   00:00:00 nginx: worker process
www   128284   6761  0 11:41 ?   00:00:00 nginx: worker process
www   128285   6761  0 11:41 ?   00:00:00 nginx: worker process
2. Nginx CPU affinity
[root@nginx ~]# vim /usr/local/nginx1.14/conf/nginx.conf
# with worker_processes 4:
worker_cpu_affinity 0001 0010 0100 1000;
# with worker_processes 8:
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
worker_rlimit_nofile 65535;     # maximum number of files a worker process may open
[root@nginx ~]# ulimit -n       # the default limit on open file descriptors
1024
[root@nginx ~]# vim /etc/security/limits.conf   # raise the limit
*    soft    nofile    65535
*    hard    nofile    65535
[root@nginx conf]# su -         # log in again for the change to take effect
[root@nginx ~]# ulimit -n
65535
[root@nginx ~]# nginx -s reload
worker_processes tops out at 8: enabling more than 8 brings no performance gain and reduces stability, so 8 processes are enough.
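The sizing rule above (one worker per core, capped at 8) can be sketched in shell; `nproc` reports the core count on Linux:

```shell
cores=$(nproc)          # number of CPU cores on this machine
workers=$cores
[ "$workers" -gt 8 ] && workers=8    # cap at 8, per the advice above
echo "worker_processes ${workers};"
```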
Since the following steps add configuration items to the configuration file, each item is written out directly and then explained.
3. Nginx event handling model
# the events block of the configuration file is modified as follows:
events {
    use epoll;
    worker_connections 65535;
    multi_accept on;
}
Nginx uses the epoll event model, which is highly efficient. worker_connections is the maximum number of connections a single worker process may hold; it is generally set according to server performance and memory. The actual maximum concurrency of the Nginx server is the number of worker processes multiplied by worker_connections, so with 65535 here it can in theory reach 65535 x 4 = 262140 concurrent connections -- far more concurrency than most websites ever see.
Multi_accept tells nginx to accept as many connections as possible after receiving a new connection notification.
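The concurrency ceiling quoted above is simply the product of the two directives:

```shell
# The theoretical connection ceiling claimed above, as arithmetic:
worker_processes=4
worker_connections=65535
max_clients=$(( worker_processes * worker_connections ))
echo "max clients: ${max_clients}"
```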
Note: all subsequent configurations will be written in the http {} module.
4. Turn on efficient transmission mode
http {
    include mime.types;
    default_type application/octet-stream;
    ......                  # part of the content is omitted
    sendfile on;            # this line is enabled by default
    tcp_nopush on;          # remove the comment symbol at the beginning of this line
The above configuration is explained as follows:
include mime.types: media types; include is just an instruction that pulls the contents of another file into the current file.
default_type application/octet-stream: the default media type; the default is sufficient.
sendfile on: enables efficient file transfer mode. The sendfile directive specifies whether nginx calls the sendfile function to output files. For common applications set it to on; for disk-I/O-heavy applications such as downloads it can be set to off, to balance disk and network I/O processing speed and reduce system load. Note: if images do not display properly, change this to off.
tcp_nopush on: only effective in sendfile mode; prevents network congestion by actively reducing the number of network segments (it tells nginx to send all header data in one packet instead of one piece after another).
5. Connection timeout
The main purpose of the following changes and additions is to protect server resources (CPU and memory) by controlling the number of connections, because establishing a connection also consumes resources.
# add the following to the http {} field:
keepalive_timeout 65;
tcp_nodelay on;
client_header_buffer_size 4k;
open_file_cache max=102400 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 1;
client_header_timeout 15;
client_body_timeout 15;
reset_timedout_connection on;
send_timeout 15;
server_tokens off;
client_max_body_size 10m;
The directives are explained as follows:
keepalive_timeout: how long an idle client keep-alive session persists; after this timeout the server closes the connection.
tcp_nodelay: also helps against network congestion, but is only effective together with keep-alive connections.
client_header_buffer_size 4k: the buffer size for client request headers, which can be set according to your system's page size. A single request header rarely exceeds 1k, but since the system page size is generally at least 1k, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.
open_file_cache max=102400 inactive=20s: enables the cache for open files (off by default). max is the number of cache entries, recommended to match the number of open files; inactive is how long a file may go unrequested before its cache entry is deleted.
open_file_cache_valid 30s: how often to re-check the validity of cached entries.
open_file_cache_min_uses 1: the minimum number of times a file must be used within the inactive window of the open_file_cache directive for its descriptor to stay open in the cache. For example, if a file is not used even once within the inactive time, it is removed.
client_header_timeout 15: the timeout for reading the request header; this can be set lower. If the client sends no data within this time, nginx returns a "request timeout" error.
client_body_timeout 15: the timeout for reading the request body; this can also be set lower, with the same error behavior as above.
reset_timedout_connection on: tells nginx to close unresponsive client connections, freeing the memory they occupy.
send_timeout 15: the timeout for responding to the client, measured between two write operations; if the client accepts nothing within this time, nginx closes the connection.
server_tokens off: does not make nginx faster, but hides the nginx version number on error pages, which is good for security.
client_max_body_size 10m: the upload file size limit.
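As noted above, client_header_buffer_size is usually set to the system page size; a quick way to see it on Linux and turn it into a directive (a sketch, assuming a Linux host with getconf):

```shell
page=$(getconf PAGESIZE)    # system memory page size in bytes, typically 4096
echo "page size: ${page} bytes -> client_header_buffer_size $(( page / 1024 ))k;"
```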
6. Fastcgi optimization
FastCGI (Fast Common Gateway Interface) is the interface between the static web server and dynamic services. In the directives below, "cache" refers to the write cache and "buffer" to the read cache.
In production, whether to enable caching of dynamic pages depends on your own company's website: after all, dynamic pages change relatively quickly, and in most cases caching for dynamic pages is left off.
# the following configurations are written in the http {} field:
fastcgi_connect_timeout 600;
fastcgi_send_timeout 600;
fastcgi_read_timeout 600;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
fastcgi_temp_path /usr/local/nginx1.14/nginx_tmp;
fastcgi_intercept_errors on;
fastcgi_cache_path /usr/local/nginx1.14/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;
The above configuration items are explained as follows:
fastcgi_connect_timeout: the timeout for connecting to the backend FastCGI server.
fastcgi_send_timeout: the timeout for sending a request to FastCGI.
fastcgi_read_timeout 600: the timeout for receiving the FastCGI reply.
fastcgi_buffer_size 64k: how large a buffer is used to read the first part of the FastCGI reply. The default buffer size equals the size of one block in the fastcgi_buffers directive; it can be set smaller.
fastcgi_buffers 4 64k: how many buffers of what size are used locally to buffer FastCGI replies. If a php script generates a 256KB page, 4 buffers of 64KB are allocated to cache it; if the page is larger than 256KB, the part above 256KB is cached in the path specified by fastcgi_temp_path, which is not ideal because memory is faster than disk. Generally this value should be around the middle of the page sizes generated by the site's php scripts; if most pages are around 256KB you can set "8 32k", "4 64k", and so on.
fastcgi_busy_buffers_size 128k: recommended to be twice fastcgi_buffers; this is the buffer used at busy times.
fastcgi_temp_file_write_size 128k: how large the data blocks are when writing to fastcgi_temp_path; the default is twice fastcgi_buffers. Under load, too small a value may lead to 502 Bad Gateway.
fastcgi_temp_path: the temporary cache directory.
fastcgi_intercept_errors on: specifies whether 4xx and 5xx messages are passed to the client, or nginx handles them with error_page. Note: if a static file does not exist a 404 page is returned, but a missing php page returns a blank page.
fastcgi_cache_path /usr/local/nginx1.14/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g: the fastcgi_cache directory. The directory hierarchy can be set: levels=1:2 generates 16*256 subdirectories. cache_fastcgi is the name of this cache zone, and 128m is how much memory it uses (hot content is kept in memory to speed up access). inactive is the default expiry time: cached data not accessed within it is deleted. max_size is the maximum amount of disk space used.
# Note: the following directives are not part of the configuration above; they are optional:
fastcgi_cache cache_fastcgi: enables FastCGI caching and names the zone to use. Caching can effectively reduce CPU load and prevent 502 errors; cache_fastcgi is the zone name created by the fastcgi_cache_path directive.
fastcgi_cache_valid 200 302 1h: the caching time per response code; here 200 and 302 responses are cached for one hour. Used together with fastcgi_cache.
fastcgi_cache_valid 301 1d: 301 responses are cached for one day.
fastcgi_cache_valid any 1m: all other responses are cached for 1 minute.
fastcgi_cache_min_uses 1: how many requests for the same URL are needed before it is cached.
fastcgi_cache_key http://$host$request_uri: the key for the web cache; nginx stores entries hashed by the md5 of this key. It is usually assembled from variables such as $host (domain name) and $request_uri (request path).
fastcgi_pass: the listening address and port of the FastCGI server, which can be local or remote.
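To illustrate the md5-hashed key mentioned above: nginx stores each cached reply under a file name derived from the md5 of the cache key, and with levels=1:2 the last hex character of the hash becomes the first-level directory and the two characters before it the second level. A sketch (the key value is just an example):

```shell
key='http://www.test.com/index.html'    # example cache key, $host + $request_uri
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')   # 32 hex chars

# levels=1:2 -> first-level dir = last hex char, second-level = the 2 chars before it
l1=$(printf '%s' "$hash" | cut -c32)
l2=$(printf '%s' "$hash" | cut -c30-31)
echo "cached as: fastcgi_cache/${l1}/${l2}/${hash}"
```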
Summary:
The caching capabilities of nginx are as follows: proxy_cache caches content from the backend server, which can be anything, static or dynamic; fastcgi_cache caches content generated by FastCGI, in many cases dynamic content produced by php. proxy_cache reduces the number of round trips between nginx and the backend, saving transmission time and backend bandwidth; fastcgi_cache reduces the number of round trips between nginx and php, and lowers the pressure on php and the database (mysql).
7. Gzip tuning
# the following configuration items are written in the http {} field:
gzip on;
gzip_min_length 2k;
gzip_buffers 4 32k;
gzip_http_version 1.1;
gzip_comp_level 6;
gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
gzip_vary on;
gzip_proxied any;
gzip on: enable compression.
gzip_min_length: the minimum page size allowed to be compressed (2k above); the page size is taken from the Content-Length header. The default is 0 (compress everything); it is recommended to set it above 1k, since pages smaller than that may actually grow after compression.
gzip_buffers 4 32k: the compression buffers: request 4 units of 32K of memory to hold the compression result stream. The default is to request memory equal to the original data size to store the gzip result.
gzip_http_version 1.1: the HTTP protocol version to recognize; the default is 1.1. Most browsers already support gzip decompression, so the default is fine.
gzip_comp_level 6: the compression level, 1-9: 1 compresses the least and is fastest; 9 compresses the most and transfers fastest, but processes slowest and consumes the most CPU.
gzip_types: the MIME types to compress; the 'text/html' type is always compressed. The default is gzip_types text/html (js/css files are not compressed by default). Wildcards such as text/* cannot be used; which text types can be compressed is defined in conf/mime.types.
gzip_vary on: adds the Vary header, which lets front-end cache servers (for example Squid) cache gzip-compressed pages.
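The level/size trade-off above can be seen directly with the gzip command-line tool; compression ratios depend entirely on the input, so this sketch uses a deliberately repetitive sample:

```shell
# Create a repetitive text sample and compare compression levels 1 and 9.
sample=$(mktemp)
for i in $(seq 1 500); do
    echo "the quick brown fox jumps over the lazy dog" >> "$sample"
done
orig=$(wc -c < "$sample")
l1=$(gzip -1 -c "$sample" | wc -c)   # fastest, least compression
l9=$(gzip -9 -c "$sample" | wc -c)   # slowest, most compression
echo "original: ${orig}B  level1: ${l1}B  level9: ${l9}B"
rm -f "$sample"
```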
8. Expires cache tuning
Caching here mainly targets elements that rarely change, such as images, css and js. Images in particular consume a lot of bandwidth, so they can be cached in the browser for a long time -- for example 365 days for images, and more than 10 days for js and html. The first visit then loads slowly, but subsequent visits are fast. To cache, list the extensions of the objects that need caching; the expires cache is configured in the server {} field.
server {
    listen 80;
    server_name localhost;
    ......                    # part of the content is omitted
    location ~* \.(ico|jpe?g|gif|png|bmp|flv)$ {
        expires 30d;
        log_not_found off;
        access_log off;
    }
    location ~* \.(js|css)$ {
        expires 7d;
        log_not_found off;
        access_log off;
    }
    ......                    # part of the content is omitted
}
# Note: log_not_found off means "file not found" errors are not recorded in error_log. The default is on.
The advantages of expire features are as follows:
(1) expires reduces the bandwidth the website needs to purchase, saving costs while improving the user's access experience.
(2) relieving pressure on the service and saving server cost is a very important capability of a web service.
The disadvantages of expire features are as follows:
(1) when a cached page or file is updated, users may still see the old content, which hurts the user experience. Solutions: first, shorten the cache time (for example to 1 day) -- though this does not solve the problem completely unless the update frequency is longer than 1 day; second, rename the cached object.
(2) content the website does not want cached: website traffic statistics tools, and frequently updated files (such as the site logo on web pages).
9. Configure hotlink protection
Hotlink protection prevents other sites from referencing resources such as images directly from your website, which consumes your resources and network traffic while bringing you no benefit at all.
There are the following solutions:
1: watermark the images for brand promotion, if your bandwidth and servers are sufficient;
2: block at the firewall, provided you know the source IPs;
3: a hotlink protection rule in nginx, as below.
The following method either returns a 404 error directly or redirects to a designated notice page.
# the following configuration is written in the server {} field:
location ~* ^.+\.(jpg|gif|swf|flv|wma|wmv|asf|mp3|mmf|zip|rar)$ {
    valid_referers none blocked www.test.com test.com;    # this field specifies the domain names allowed to reference these files
    if ($invalid_referer) {
        # return 302 http://www.test.com/img/nolink.png;  # the commented-out line would redirect to a designated file
        return 404;        # here status code 404 is returned to the client directly
        break;
    }
    access_log off;        # disable the access log
}
location / {               # the hotlink protection location must be written before all other location fields
    root html;
    index index.html index.htm;
}
# When redirecting to another file, take special care that its suffix does not match the hotlink rule,
# otherwise the client is prompted with "too many redirects".
# I choose to return status code 404 first.
The complete configuration file after the above modification is as follows:
# user nobody;
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;

# error_log logs/error.log;
# error_log logs/error.log notice;
# error_log logs/error.log info;

# pid logs/nginx.pid;

events {
    use epoll;
    worker_connections 65535;
    multi_accept on;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                 '$status $body_bytes_sent "$http_referer" '
    #                 '"$http_user_agent" "$http_x_forwarded_for"';

    # access_log logs/access.log main;

    sendfile on;
    tcp_nopush on;

    # keepalive_timeout 0;
    keepalive_timeout 65;
    tcp_nodelay on;
    client_header_buffer_size 4k;
    open_file_cache max=102400 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 1;
    client_header_timeout 15;
    client_body_timeout 15;
    reset_timedout_connection on;
    send_timeout 15;
    server_tokens off;
    client_max_body_size 10m;

    fastcgi_connect_timeout 600;
    fastcgi_send_timeout 600;
    fastcgi_read_timeout 600;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    fastcgi_temp_path /usr/local/nginx1.14/nginx_tmp;
    fastcgi_intercept_errors on;
    fastcgi_cache_path /usr/local/nginx1.14/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;

    gzip on;
    gzip_min_length 2k;
    gzip_buffers 4 32k;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
    gzip_vary on;
    gzip_proxied any;

    server {
        listen 80;
        server_name localhost;

        # charset koi8-r;
        # access_log logs/host.access.log main;

        location ~* ^.+\.(jpg|gif|png|swf|flv|wma|wmv|asf|mp3|mmf|zip|rar)$ {
            valid_referers none blocked 192.168.20.5 www.test.com;
            if ($invalid_referer) {
                return 302 http://192.168.20.5/img/nolink.png;
                # return 404;
                break;
            }
            access_log off;
        }

        location / {
            root html;
            index index.html index.htm;
        }

        location ~* \.(ico|jpe?g|gif|png|bmp|flv)$ {
            expires 30d;
            log_not_found off;
            access_log off;
        }

        location ~* \.(js|css)$ {
            expires 7d;
            log_not_found off;
            access_log off;
        }
......                  # the remaining comments are omitted
10. Kernel parameter optimization
[root@nginx conf]# vim /etc/sysctl.conf     # edit this configuration file
# write the following configuration items at the end:
fs.file-max = 999999
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 40960
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
[root@nginx conf]# sysctl -p    # refresh so the kernel changes take effect; if everything is normal, all configuration items are echoed back
# after refreshing, each item is also visible as a file under a kernel directory; you can locate it with the find command.
# for example, to find the file corresponding to "net.core.netdev_max_backlog", run "find / -name netdev_max_backlog".
The parameters are explained as follows:
fs.file-max = 999999: the maximum number of handles a process (such as a worker process) can open at the same time. This parameter directly limits the maximum number of concurrent connections and must be configured according to the actual situation.
net.ipv4.tcp_max_tw_buckets = 6000: the maximum number of TIME_WAIT sockets the operating system allows. Beyond this number, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; too many TIME_WAIT sockets slow down a web server. Note: the side that actively closes a connection ends up with connections in the TIME_WAIT state.
net.ipv4.ip_local_port_range = 1024 65000: the range of local ports the system is allowed to open.
net.ipv4.tcp_tw_recycle = 1: enable fast recycling of TIME_WAIT sockets.
net.ipv4.tcp_tw_reuse = 1: enable reuse, allowing TIME_WAIT sockets to be reused for new TCP connections. This makes sense for a server, which always has a large number of TIME_WAIT connections.
net.ipv4.tcp_keepalive_time = 30: how often TCP sends keepalive probes when keepalive is enabled. The default is 2 hours; a smaller value cleans up dead connections more quickly.
net.ipv4.tcp_syncookies = 1: enable SYN cookies: when the SYN waiting queue overflows, cookies are used to handle the excess. This is unrelated to performance; it defends against TCP SYN flood attacks.
net.core.somaxconn = 40960: the backlog of the listen function in a web application defaults to this kernel parameter, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs adjusting. Note: for a TCP connection, server and client perform a three-way handshake; when it succeeds, the connection moves from LISTEN to ESTABLISHED and data can flow over the link. Every port in the LISTEN state has its own listen queue, whose length is related to the somaxconn parameter and to the backlog passed to listen() by the program using the port. somaxconn defines the maximum listen-queue length per port system-wide; it is a global parameter defaulting to 128, which is too small for a high-load web environment that constantly handles new connections. For most environments it is recommended to raise it to 1024 or more; a large listen queue also helps against DoS attacks.
net.core.netdev_max_backlog = 262144: the maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.
net.ipv4.tcp_max_syn_backlog = 262144: the maximum queue length for SYN requests accepted during the handshake phase; the default is 1024. Setting it larger prevents Linux from dropping connection attempts from clients when Nginx is too busy to accept new connections.
net.ipv4.tcp_rmem = 10240 87380 12582912: the minimum, default, and maximum of the TCP receive buffer (for the TCP receive sliding window).
net.ipv4.tcp_wmem = 10240 87380 12582912: the minimum, default, and maximum of the TCP send buffer (for the TCP send sliding window).
net.core.rmem_default = 6291456: the default size of the kernel socket receive buffer.
net.core.wmem_default = 6291456: the default size of the kernel socket send buffer.
net.core.rmem_max = 12582912: the maximum size of the kernel socket receive buffer.
net.core.wmem_max = 12582912: the maximum size of the kernel socket send buffer.
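Before running sysctl -p, a quick format check can catch stray characters introduced by copy-pasting a block this long. A sketch, demonstrated on an inline sample rather than the real /etc/sysctl.conf:

```shell
# Validate sysctl.conf-style lines: each non-comment line must be "key = value".
conf='fs.file-max = 999999
net.core.somaxconn = 40960
# a comment line
net.ipv4.tcp_rmem = 10240 87380 12582912'

# count lines that are neither comments/blank nor well-formed "key = numbers"
bad=$(printf '%s\n' "$conf" \
      | grep -vE '^[[:space:]]*(#|$)' \
      | grep -cvE '^[a-z0-9_.-]+[[:space:]]*=[[:space:]]*[0-9 ]+$' || true)
echo "malformed lines: ${bad}"
```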
11. Optimization of the number of system connections
In Linux, the default value of open files is 1024, meaning the server may open at most 1024 files at the same time. Use the command ulimit -a to view all current system limits, and ulimit -n to view the current maximum number of open files.
A freshly installed Linux system defaults to only 1024, and on a heavily loaded server it is easy to hit "error: too many open files", so the limit needs to be raised.
[root@nginx core]# vim /etc/security/limits.conf   # modify this configuration file
# add the following fields at the end of the file:
*    soft    nofile    65535
*    hard    nofile    65535
*    soft    nproc     65535
*    hard    nproc     65535
# users need to log in again for the changes to take effect
[root@nginx ~]# bash
[root@nginx ~]# ulimit -a    # check whether it took effect
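A quick check that the new limit is active in the current shell (a sketch; run it after logging in again):

```shell
nofile=$(ulimit -n)    # current per-process open-file limit
if [ "$nofile" != "unlimited" ] && [ "$nofile" -lt 65535 ]; then
    echo "warning: open-file limit is ${nofile}; expected 65535 after the limits.conf change"
else
    echo "open-file limit OK: ${nofile}"
fi
```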
III. Verify the above optimization and stress-test the Nginx server
1. Test hotlink protection:
Prepare two servers: one is the Nginx server just optimized; the other can be anything that provides web functionality.
The IP address of the optimized Nginx server is 192.168.20.5, and the IP address of the other server is 192.168.20.2.
1) the Nginx server is configured as follows:
[root@nginx html]# ls
50x.html  index.html  test.png
[root@nginx html]# vim ../conf/nginx.conf
......                  # part of the content is omitted
location ~* ^.+\.(jpg|gif|png|swf|flv|wma|wmv|asf|mp3|zip|rar)$ {
    valid_referers none blocked 192.168.20.5 www.test.com;
    if ($invalid_referer) {
        # return 302 http://192.168.20.5/img/nolink.png;
        return 404;
        break;
    }
    access_log off;
}
......                  # part of the content is omitted
[root@nginx html]# nginx -s reload    # reload the Nginx server so the hotlink protection configuration takes effect
2) the web page file of the second server is as follows:
[root@daolian html]# cat index.html    # its hyperlink address points to the test.png image on the first Nginx server
<a href="http://192.168.20.5/test.png">lianjie</a>
3) if the client accesses the Nginx server directly, you will see the test.png page as follows:
4) the client accesses the second server test:
After clicking the hyperlink, you will see the following page:
5) now change to another hotlink protection rule and test it again:
[root@nginx html]# vim ../conf/nginx.conf   # edit its configuration file
location ~* ^.+\.(jpg|gif|png|swf|flv|wma|wmv|asf|mp3|mmf|zip|rar)$ {
    valid_referers none blocked 192.168.20.5 www.test.com;
    if ($invalid_referer) {
        return 302 http://192.168.20.5/img/nolink.png;   # redirect to the nolink.png image in the img directory
        # return 404;       # the line returning the 404 status code is now commented out
        break;
    }
    access_log off;
}
[root@nginx html]# nginx -s reload    # reload to make the configuration effective
[root@nginx img]# ls /usr/local/nginx1.14/html/img    # the redirect target directory contains the following file:
nolink.png
6) when the client clicks the hyperlink to test again, you will see the following page:
2. Stress test
For stress testing of both static and php dynamic pages of a web page, you need to install and deploy LNMP architecture. Here, a single deployment is used. For the deployment process, please refer to the document: https://blog.51cto.com/14227204/2435795
If it is only used for stress testing, the MySQL database does not need to be installed.
The process of deploying the LNMP architecture is omitted here.
After the deployment is complete, you need to change the configuration file:
[root@nginx html]# vim ../conf/nginx.conf   # edit the main configuration file
# add the following to the server {} field:
location ~ \.php$ {
    root html;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    include fastcgi.conf;
}
[root@nginx html]# pwd
/usr/local/nginx1.14/html
[root@nginx html]# cat test.php    # the contents of the test.php file under the web root are as follows:
When you are done, you need to make sure that when you access 192.168.20.5/test.php, you can see the following page:
1) start testing:
[root@nginx ~]# ab -c 50000 -n 50000 127.0.0.1/index.html   # test static page performance: 50000 requests in total
......
Requests per second: 2407.58 [#/sec] (mean)    # the throughput of the Nginx server
......
# we mainly care about the line above, i.e. the throughput of the Nginx server:
# the better the server performs, the higher the throughput
[root@nginx html]# ab -c 100 -n 1000 127.0.0.1/test.php    # for the dynamic page, choose values appropriate to the server's performance
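When comparing runs before and after tuning, it helps to pull the throughput line out of ab's report automatically. A sketch, demonstrated on an inline sample of the output above:

```shell
# Extract the requests-per-second figure from an ab report (sample text inline).
report='Concurrency Level:      50000
Time taken for tests:   20.768 seconds
Requests per second:    2407.58 [#/sec] (mean)'

rps=$(printf '%s\n' "$report" \
      | awk -F': *' '/Requests per second/ {split($2, a, " "); print a[1]}')
echo "throughput: ${rps} req/s"
```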