2025-02-24 Update From: SLTechnology News&Howtos
Blogger QQ:819594300
Blog address: http://zpf666.blog.51cto.com/
If you have any questions, feel free to contact the blogger, who will help you answer them. Thank you for your support!
Nginx (pronounced "engine X") is a very lightweight, high-performance HTTP and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server. It was written by the Russian developer Igor Sysoev for Rambler.ru, Russia's second-most-visited site.
Nginx is written in an event-driven (epoll) style, so it has very good performance, and it is also a very efficient reverse proxy and load balancer. However, Nginx does not support CGI mode (which avoids some classes of program vulnerabilities), so PHP programs must be executed through FastCGI.
Because Nginx is lightweight, open source, and easy to use, more and more companies use it as their web application server. This article covers in detail installing nginx from source and optimizing its configuration.
I. Optimization of Nginx
1. Optimization before compilation and installation
Pre-compilation optimization mainly means modifying the source code before building, in order to hide the software name and version number.
1) install zlib-devel, pcre-devel and other dependent packages
Expand knowledge:
zlib-devel package: provides data compression algorithms
pcre-devel package: provides regular expression support
openssl-devel package: provides SSL/TLS support for nginx's secure communication
2) download the source code package of nginx: http://nginx.org/download
① decompress the source code package:
② hide the software name and version number (edit src/core/nginx.h):
Modify it to the following:
The explanation is as follows:
Line 13 sets the version string you want to display.
Line 14 sets the software name you want to display.
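As a sketch, the two macros in src/core/nginx.h might be changed like this; "1.1" and "WebServer" are arbitrary example values, not required names:

```c
/* src/core/nginx.h -- example disguised values */
#define NGINX_VERSION      "1.1"                        /* line 13: fake version */
#define NGINX_VER          "WebServer/" NGINX_VERSION   /* line 14: fake name    */
```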
③ modify the Server field in the HTTP response header so that the specific version number is not echoed (edit src/http/ngx_http_header_filter_module.c):
Description: general HTTP headers are those shared by request and response messages, including Cache-Control, Connection, Date, Pragma, Transfer-Encoding, Upgrade, and Via. Extending them requires both sides of the communication to support the extension; an unsupported general header is usually treated as an entity header. In other words, some devices or software can still read this information while others cannot, so it must be hidden thoroughly!
Modify it to the following:
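A sketch of the change in src/http/ngx_http_header_filter_module.c, replacing the literal "nginx" in the Server strings with the disguised name (here the example "WebServer"):

```c
/* src/http/ngx_http_header_filter_module.c -- disguised Server strings */
static u_char ngx_http_server_string[] = "Server: WebServer" CRLF;
static u_char ngx_http_server_full_string[] = "Server: " NGINX_VER CRLF;
```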
④ define what is returned for HTTP error codes (edit src/http/ngx_http_special_response.c).
Description: when our page program errors out, Nginx returns the corresponding error page on our behalf, which echoes the nginx name and version number; we hide it.
Modify it to the following:
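A sketch of the error-page footer in src/http/ngx_http_special_response.c, with "nginx" replaced by the disguised name (the example "WebServer"):

```c
/* src/http/ngx_http_special_response.c -- error-page footer */
static u_char ngx_http_error_tail[] =
"<hr><center>WebServer</center>" CRLF
"</body>" CRLF
"</html>" CRLF
;
```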
2. Install nginx
① add a www group, create the nginx run account www in the www group, and forbid the www user from logging into the system directly.
② officially starts installation of nginx
The configure command is as follows:
./configure --prefix=/usr/local/nginx1.10 --with-http_dav_module --with-http_stub_status_module --with-http_addition_module --with-http_sub_module --with-http_flv_module --with-http_mp4_module --with-pcre --with-http_ssl_module --with-http_gzip_static_module --user=www --group=www && make && make install
③ create a symbolic link and test the configuration file
④ starts nginx
Expand knowledge:
nginx -s stop // stop the nginx service
nginx // start the nginx service
nginx -s reload // reload the nginx configuration
⑤ test whether the version and software name are hidden
⑥ nginx has many options; view the help information:
The explanation is as follows:
-?, -h : show this help
-v : show version and exit
-V : show version and configure options, then exit
-t : test configuration and exit
-T : test configuration, dump it, and exit
-q : suppress non-error messages during configuration testing
-s signal : send a signal to the master process: stop, quit, reopen, reload
-p prefix : set the prefix path (default: /usr/local/nginx1.10/)
-c filename : set the configuration file (default: conf/nginx.conf)
-g directives : set global directives outside of the configuration file
Note: nginx -V also shows all the ./configure options used when this nginx was built.
3. Nginx configuration item optimization
Viewing the nginx processes, we can see that the worker processes run as the nginx program user (www), while the master process still runs as root. The master is the monitoring process, also called the main process; the workers handle requests, alongside some cache-related processes.
The relationship is shown in the figure:
Think of the master as the administrator, while the worker processes serve the users!
Flow: the master process itself handles no requests. Its role is to manage the worker processes: it listens for client requests and hands each request to a worker process (only one worker process by default), and that worker then handles the client's request. The master only listens and dispatches.
① the number of worker processes Nginx runs. Generally we set this to the number of CPU cores, or cores × 2.
If you don't know the number of cpu cores, press 1 after running the top command, or check the /proc/cpuinfo file:
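For example, a quick way to count cores from the shell (a sketch; works on any Linux host):

```shell
# Count logical CPU cores from /proc/cpuinfo
cores=$(grep -c '^processor' /proc/cpuinfo)
echo "cpu cores: $cores"
```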
Now start to modify the configuration items of the nginx main configuration file to optimize:
Global configuration section:
1) worker_processes // number of worker processes
Description: the default is 1 worker process.
Suggestion: set the number of worker processes equal to the number of cpu cores, or twice that.
Verify the viewing process:
2) worker_cpu_affinity / / nginx runs cpu affinity
Description: this configuration item is not available by default and needs to be added manually
For example, four cores and four processes:
For example, the setting of eight cores and eight processes:
For example, the setting of four cores and eight processes:
Explanation: the first and fifth processes use the first core (0001), the second and sixth processes use the second core (0010), and so on.
Note 1: for N cores, each mask is an N-digit binary number, and there are N distinct masks.
Note 2: enable at most 8 worker_processes; beyond 8 there is no further performance gain and stability drops, so 8 processes are sufficient.
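The three examples above can be sketched in the global section of nginx.conf like this (the masks are the standard binary CPU masks; uncomment the variant that matches your hardware):

```nginx
# 4 cores, 4 processes
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;

# 8 cores, 8 processes
# worker_processes 8;
# worker_cpu_affinity 00000001 00000010 00000100 00001000
#                     00010000 00100000 01000000 10000000;

# 4 cores, 8 processes (two workers per core)
# worker_processes 8;
# worker_cpu_affinity 0001 0010 0100 1000 0001 0010 0100 1000;
```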
3) worker_rlimit_nofile / / maximum number of files that can be opened by a worker process
Description: this configuration item is not available by default and needs to be added manually.
Recommendation: set this to the result of the ulimit -n command divided by the number of nginx worker processes. The general recommendation is 65535; if we set it to 65535, we also need to raise the value of ulimit -n (default 1024).
File-descriptor limits can be set in /etc/security/limits.conf, for individual users such as root, or with * for all users.
Note: 262140 = 65535 × 4.
After modifying the above settings, you also need to raise ulimit -n:
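A quick sketch for checking the current per-process open-file limit from the shell (raising it with ulimit -n 65535 applies only to the current shell and may require root):

```shell
# Show the current per-process open-file limit for this shell
nofile=$(ulimit -n)
echo "open files: $nofile"
# To raise it for the current shell: ulimit -n 65535
```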
Nginx event handling model (that is, event settings):
1) use epoll; / / uses the epoll event model because of its high processing efficiency
Description: this configuration item is not available by default and needs to be added manually.
2) worker_connections 65535; / / maximum number of connections per worker process (default is 1024)
Suggestion: base this value on server performance and memory. Here we use 65535, which is large enough; it is recommended not to exceed 65535. Generally, a site that needs 65535 is already a very large one.
Note: if we set 65535 for this configuration item, then the maximum number of connections for the entire server of this nginx server is:
65535 × 4 = 262140 (i.e. maximum connections per worker process × total number of worker processes)
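The capacity arithmetic above can be checked with a one-liner:

```shell
# Overall connection capacity = per-worker connections x worker count
workers=4
worker_connections=65535
total=$((workers * worker_connections))
echo "max connections: $total"   # 262140
```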
3) multi_accept on; / / this configuration item tells nginx to accept as many connections as possible after receiving a new connection notification.
Note: the default is off. With multi_accept on, a worker process accepts all pending new connections at once when it is notified of a new connection; with it off, a worker accepts only one new connection at a time.
Suggestion: when the server has few connections, turning this parameter on can reduce load to a certain extent; when server throughput is very large, you can weigh turning it off for efficiency.
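Putting the event settings above together, the events block looks like this:

```nginx
events {
    use epoll;                 # high-efficiency event model on Linux
    worker_connections 65535;  # maximum connections per worker process
    multi_accept on;           # accept as many new connections as possible
}
```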
Turn on efficient transfer mode (http configuration)
The above configuration items are existing by default.
include mime.types; // media types; include is just an instruction to include the contents of another file in the current file
default_type application/octet-stream; // the default media type is sufficient
1) sendfile on; / / enable efficient file transfer mode
Note: the sendfile directive specifies whether nginx should call the sendfile() function to output files. For ordinary applications it should be set to on; for disk-I/O-heavy applications such as downloads it can be set to off, to balance disk and network I/O and reduce system load.
Note: if images do not display properly, change this to off.
This configuration item exists by default, set to on.
2) tcp_nopush on; / / prevent network congestion
Note: this only takes effect in sendfile mode. It prevents network congestion by actively reducing the number of network segments (the response header is sent together with the beginning of the body instead of one after another).
This line configuration item also exists by default, but it is annotated, and we just need to remove the comment "#" to enable it.
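The two directives above sit inside the http block:

```nginx
# inside the http { } block
sendfile on;    # efficient (zero-copy) file transfer via sendfile()
tcp_nopush on;  # send headers and body start together; requires sendfile on
```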
Connection timeout (also http configuration)
Description: the main purpose is to protect server resources, CPU, memory, control the number of connections, because the establishment of connections also need to consume resources.
The following is an explanation of each configuration item:
1) keepalive_timeout: the timeout for a client keep-alive session; after it expires, the server closes the connection.
2) tcp_nodelay; is also used to prevent network congestion, but it only takes effect on keep-alive connections.
3) client_header_buffer_size 4k
The buffer size for the client request header; it can be set according to your system's page size. A request header is generally under 1k, but since the system page size is usually at least 1k, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.
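For example, checking the page size (commonly 4096 bytes on x86_64):

```shell
# Query the system page size used to size client_header_buffer_size
pagesize=$(getconf PAGESIZE)
echo "page size: $pagesize bytes"
```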
4) open_file_cache max=262140 inactive=20s
This enables a cache for open file handles, which is off by default. max specifies the number of cached entries; it is recommended to match the total number of files nginx can open (i.e. consistent with ulimit -n). inactive specifies how long after a file is last requested its cache entry is removed.
5) open_file_cache_valid 30s
This refers to how often the valid information in the cache is checked.
6) open_file_cache_min_uses 1
The minimum number of times a file must be used within the inactive time of the open_file_cache directive for its descriptor to stay open in the cache. As in the example above, if a file is not used even once within the inactive time, it will be removed.
7) client_header_timeout sets the timeout of the request header. We can also set this lower. If no data is sent beyond this time, nginx will return an error of request time out.
8) client_body_timeout sets the timeout of the request body. We can also lower this setting and send no data beyond this time, with the same error prompt as above.
9) reset_timedout_connection tells nginx to close timed-out client connections. This frees the memory space that client occupied.
10) send_timeout responds to the client timeout, which is limited to the time between two activities. If the client has no activity beyond this time, nginx closes the connection.
11) server_tokens does not make nginx execute faster, but it can turn off the nginx version number in the error page, which is good for security.
12) client_max_body_size: the size limit for files uploaded by the client
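The timeout items above can be sketched in the http block as follows; the values follow the text (and the complete nginx.conf shown later in this article):

```nginx
# connection-timeout settings (http block)
keepalive_timeout 65;
tcp_nodelay on;
client_header_buffer_size 4k;
open_file_cache max=262140 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 1;
client_header_timeout 15;
client_body_timeout 15;
reset_timedout_connection on;
send_timeout 15;
server_tokens off;
client_max_body_size 10m;
```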
Fastcgi tuning (also http configuration)
The following is an explanation of each configuration item:
1) fastcgi_connect_timeout 600; # specifies the timeout for connecting to the back-end FastCGI server. If it is exceeded, the client receives a 502 error code.
2) fastcgi_send_timeout 600; # timeout for sending the request to FastCGI.
3) fastcgi_read_timeout 600; # specifies the timeout for receiving FastCGI replies.
4) fastcgi_buffer_size 64k; # specifies the buffer size for reading the first part of the FastCGI reply. The default equals the size of one block in the fastcgi_buffers directive; it can be set smaller.
5) fastcgi_buffers 4 64k; # specifies how many local buffers of what size to use for buffering FastCGI replies. If a php script produces a 256KB page, 4 buffers of 64KB are allocated for it; anything beyond 256KB is cached in the path given by fastcgi_temp_path, which is not ideal, since memory is faster than disk. Generally this value should be around the median page size produced by the site's php scripts; if most pages are around 256KB, you could set it to "8 32k", "4 64k", and so on.
6) fastcgi_busy_buffers_size 128k; # buffers used during busy periods; recommended to be twice fastcgi_buffers, i.e. twice the value in 5).
7) fastcgi_temp_file_write_size 128k;
# how much data is written to fastcgi_temp_path at a time; the default is twice fastcgi_buffers. If this value is set too small, 502 Bad Gateway may be returned under load.
8) fastcgi_temp_path # cache temporary directory
9) the fastcgi_intercept_errors on;# directive specifies whether to pass 4xx and 5xx error messages to the client, or to allow nginx to use error_page to handle error messages.
Note: if the static file does not exist, it will return 404 pages, but the php page will return blank pages!
10) fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g
# the fastcgi_cache cache directory. levels sets the directory hierarchy; for example, 1:2 generates 16 × 256 subdirectories. cache_fastcgi is the name of this cache zone, and 128m is the memory used for it (nginx keeps popular content directly in memory to improve access speed). inactive is the default expiry time: cached data not accessed within it is deleted. max_size is the maximum disk space used.
The following configuration items are not available in this screenshot:
fastcgi_cache cache_fastcgi; # turns on FastCGI caching and assigns it a name. Caching is very useful: it can effectively reduce CPU load and prevent 502 errors. cache_fastcgi is the cache-zone name created by the fastcgi_cache_path directive.
fastcgi_cache_valid 200 302 1h; # specifies the caching time per reply code; here, 200 and 302 responses are cached for one hour. Used together with fastcgi_cache.
fastcgi_cache_valid 301 1d; # cache 301 responses for one day
fastcgi_cache_valid any 1m; # cache all other replies for 1 minute
fastcgi_cache_min_uses 1; # sets how many requests for the same URL are needed before it is cached.
fastcgi_cache_key http://$host$request_uri; # sets the key for the web cache; nginx stores entries by an md5 hash of this key. It is generally built from variables such as $host (domain name) and $request_uri (request path).
fastcgi_pass # specifies the listening address and port of the FastCGI server, which may be local or remote
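The FastCGI tuning items above can be sketched in the http block like this; the paths and zone name match the text:

```nginx
# FastCGI tuning (http block)
fastcgi_connect_timeout 600;
fastcgi_send_timeout 600;
fastcgi_read_timeout 600;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
fastcgi_temp_path /usr/local/nginx1.10/nginx_tmp;
fastcgi_intercept_errors on;
fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2
                   keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;
```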
Summary:
The caching functions of nginx are: proxy_cache/ fastcgi_cache
The role of proxy_cache is to cache the content of the back-end server, which can be anything, both static and dynamic.
The role of fastcgi_cache is to cache the content generated by fastcgi, in many cases dynamic content generated by php.
Proxy_cache caching reduces the number of times nginx communicates with the back end, saving transfer time and back-end bandwidth.
Fastcgi_cache caching reduces the number of communications between nginx and php, and reduces the pressure on php and database (mysql).
Gzip tuning
Note: gzip compression saves bandwidth, speeds up transfers, improves the user experience, and saves cost, so this is a key point.
Nginx requires the ngx_http_gzip_module module to enable compression; apache uses mod_deflate.
Generally we compress text, js, html, and css; images, video, and flash are not compressed. Also note that using gzip costs CPU!
The following is an explanation of each configuration item:
1) gzip on; # enables compression
2) gzip_min_length 2k; # sets the minimum page size eligible for compression, taken from the Content-Length response header. The default is 0 (compress regardless of size); it is recommended to set it above 1K, since compressing pages smaller than 1K can increase the load.
3) gzip_buffers 4 32k; # compression buffer size, which means applying for 4 units of 32K memory as the compression result stream cache. The default value is to apply for memory space of the same size as the original data to store gzip compression results.
4) gzip_http_version 1.1; # sets the HTTP protocol version for compression. The default is 1.1; most browsers now support gzip decompression, so the default is fine.
5) gzip_comp_level 6; # the GZIP compression level. 1 gives the smallest compression ratio and the fastest processing; 9 gives the largest ratio and fastest transfer, but the slowest processing and the highest CPU consumption.
6) gzip_types text/css text/xml application/javascript; # is used to specify the type of compression, and the 'text/html' type is always compressed.
Default: gzip_types text/html (js/css files are not compressed by default)
# Compression type, matching MIME type for compression
# cannot use the wildcard character text/*
# (whether specified or not) text/html has been compressed by default
# set which compressed text file can refer to conf/mime.types
7) gzip_vary on; # Vary header support. This lets front-end cache servers cache both gzip-compressed and uncompressed versions of a page, e.g. Squid caching data compressed by nginx.
8) gzip_proxied any; # when Nginx acts as a reverse proxy, this decides whether to gzip responses to proxied requests, based on the request and response; whether to compress depends on the "Via" field of the request header. Several parameters can be given in the directive at once:
expired - enable compression if the headers contain "Expires"
no-cache - enable compression if the headers contain "Cache-Control: no-cache"
no-store - enable compression if the headers contain "Cache-Control: no-store"
private - enable compression if the headers contain "Cache-Control: private"
no_last_modified - enable compression if the headers do not contain "Last-Modified"
no_etag - enable compression if the headers do not contain "ETag"
auth - enable compression if the headers contain "Authorization"
any - unconditionally enable compression
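Putting the gzip items above together as a sketch for the http block:

```nginx
# gzip tuning (http block)
gzip on;
gzip_min_length 2k;
gzip_buffers 4 32k;
gzip_http_version 1.1;
gzip_comp_level 6;
gzip_types text/css text/xml application/javascript;  # text/html is always compressed
gzip_vary on;
gzip_proxied any;
```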
Expires cache tuning
Description: caching is mainly for elements that change rarely, such as images, css, and js. Images in particular consume a lot of bandwidth; we can have the browser cache them locally for 365 days, and cache js and css for 10 days or more. The first visit then loads a little slower, but subsequent visits are very fast! When caching, we need to list the file extensions to be cached; the Expires cache is configured in the server block.
Note: log_not_found on|off controls whether 404 errors are recorded in error_log. The default is on.
Summary:
Advantages of expire features:
(1) expires can reduce the bandwidth purchased by the website and save cost.
(2) improve user access experience at the same time
(3) it is a very important function of web service to reduce the pressure of service and save server cost.
Expire functional drawbacks:
When the cached page or data is updated, the user may still see the old content, which will affect the user experience.
Solution: first, shorten the cache time, e.g. to 1 day; this does not fully solve the problem unless content updates less often than that. Second, rename the changed object.
Content a site should not cache: 1) site traffic statistics tools 2) frequently updated files (e.g. google's logo)
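A sketch of the Expires configuration in the server block; the extensions and cache times follow the text (365 days for images, 10 days for js/css) and are examples to adjust:

```nginx
# browser-cache (expires) settings, server block
location ~* \.(jpg|gif|png)$ {
    expires 365d;      # images: cache in the browser for a year
    access_log off;
}
location ~* \.(js|css)$ {
    expires 10d;       # scripts and styles: cache for 10 days
    log_not_found off;
}
```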
Hotlink protection
Note: to prevent others from quoting pictures and other links directly from your website, consuming your resources and network traffic, then our solutions are as follows:
1: watermark the images for brand promotion, if your bandwidth and servers are sufficient
2: use the firewall for direct control, provided you know the source IPs
3: the hotlink-protection strategy below, which simply returns a 404 error.
The following is an explanation of each configuration item:
1) none matches requests with no Referer header at all (direct access, such as opening an image directly in the browser)
2) blocked matches a Referer header that exists but has been stripped or disguised by a firewall or proxy, e.g. "Referer: XXXXXXX"
3) server_names is a list of one or more server names; since version 0.5.33, the "*" wildcard may be used in the names.
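A sketch of the hotlink-protection location in the server block; www.benet.com is the example domain used later in this article:

```nginx
# hotlink protection (server block)
location ~* ^.+\.(jpg|gif|png|swf|flv|mp3|zip|rar)$ {
    valid_referers none blocked www.benet.com benet.com;
    if ($invalid_referer) {
        return 404;   # or: return 302 http://www.benet.com/img/nolink.jpg;
    }
    access_log off;
}
```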
Kernel parameter optimization
The settings added to /etc/sysctl.conf are as follows:
fs.file-max = 999999
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 40960
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
The following is an explanation of each configuration item:
1) fs.file-max = 999999: the maximum number of file handles the system can open at the same time. This directly limits the maximum number of concurrent connections and needs to be configured according to the actual situation.
2) net.ipv4.tcp_max_tw_buckets = 6000 # the maximum number of TIME_WAIT sockets the operating system allows. Beyond this number, TIME_WAIT sockets are cleared immediately and a warning message is printed. The default is 180000; too many TIME_WAIT sockets slow a web server down.
Note: actively closing the server side of the connection will result in a connection in TIME_WAIT state
3) net.ipv4.ip_local_port_range = 1024 65000 # the range of ports that the system is allowed to open.
4) net.ipv4.tcp_tw_recycle = 1: enables fast recycling of TIME_WAIT sockets.
5) net.ipv4.tcp_tw_reuse = 1: enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections. This makes sense for a server, which always has a large number of connections in the TIME_WAIT state.
6) net.ipv4.tcp_keepalive_time = 30: how often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; a smaller value cleans up dead connections more quickly.
7) net.ipv4.tcp_syncookies = 1: enables SYN cookies, used to handle SYN-queue overflow.
8) net.core.somaxconn = 40960 # limits the backlog of the listen() call in web applications. The kernel default for net.core.somaxconn is 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
Note: for a TCP connection, Server and Client need a three-way handshake to establish a network connection. When the three-way handshake is successful, we can see that the state of the port changes from LISTEN to ESTABLISHED, and then the data can be transmitted over the link. Each port in the Listen state has its own listening queue. The length of the listening queue is related to the somaxconn parameter and the listen () function in the program that uses the port.
somaxconn parameter: defines the maximum listen-queue length per port in the system. This is a global parameter with a default value of 128. For a high-load web environment that frequently handles new connections, 128 is too small; for most environments it is recommended to increase it to 1024 or more. A large listen queue also helps mitigate DoS denial-of-service attacks.
9) net.core.netdev_max_backlog = 262144 # the maximum number of packets allowed to be sent to the queue when each network interface receives packets faster than the kernel processes them.
10) net.ipv4.tcp_max_syn_backlog = 262144 # the maximum queue length for SYN requests accepted during the TCP three-way handshake. The default is 1024; setting it larger keeps Linux from dropping client connection requests when Nginx is too busy to accept new connections.
11) net.ipv4.tcp_rmem = 10240 87380 12582912: defines the minimum, default, and maximum values of the TCP receive buffer (used for the TCP receive sliding window).
12) net.ipv4.tcp_wmem = 10240 87380 12582912: defines the minimum, default, and maximum values of the TCP send buffer (used for the TCP send sliding window).
13) net.core.rmem_default = 8388608: the default size of the kernel socket receive buffer.
14) net.core.wmem_default = 8388608: the default size of the kernel socket send buffer.
15) net.core.rmem_max = 16777216: the maximum size of the kernel socket receive buffer.
16) net.core.wmem_max = 16777216: the maximum size of the kernel socket send buffer.
17) net.ipv4.tcp_syncookies = 1: unrelated to performance; used to defend against TCP SYN flood attacks.
After adding the configuration items, execute sysctl -p to make the kernel changes take effect:
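After sysctl -p, a value can be read back from /proc to confirm it took effect; net.core.somaxconn is used here as an example:

```shell
# Read a kernel parameter back to verify the change
somaxconn=$(cat /proc/sys/net/core/somaxconn)
echo "net.core.somaxconn = $somaxconn"
```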
On the Optimization of the number of system connections
Description: the default open-files limit on linux is 1024
# ulimit -n
1024
This means the server only allows 1024 files to be opened at the same time.
Use ulimit -a to view all limits for the current system, and ulimit -n to view the current maximum number of open files.
A fresh linux install defaults to only 1024, so a heavily loaded server easily hits "error: too many open files". The limit therefore needs to be raised.
At the end of /etc/security/limits.conf, add:
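The lines typically appended look like this; 65535 matches the value used earlier, and * applies the limit to all users:

```
*  soft  nofile  65535
*  hard  nofile  65535
```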
II. Deploy LNMP
1. Install php
(1) solve the dependency relationship
Install libmcrypt:
(2) compile and install php
The configure command is as follows:
./configure --prefix=/usr/local/php5.6 --with-mysql=mysqlnd --with-pdo-mysql=mysqlnd --with-mysqli=mysqlnd --with-openssl --enable-fpm --enable-sockets --enable-sysvshm --enable-mbstring --with-freetype-dir --with-jpeg-dir --with-png-dir --with-zlib --with-libxml-dir=/usr --enable-xml --with-mhash --with-mcrypt=/usr/local/libmcrypt --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --with-bz2 --enable-maintainer-zts && make && make install
(3) provide php configuration file
(4) provide scripts for php-fpm
(5) provide php-fpm configuration file and edit
Start the php-fpm service:
Firewall open port 9000 exception:
Create a php site file directory:
2. Add the following to the server of the nginx.conf file to support php
Here is a complete configuration file for nginx.conf:
user www www;
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
worker_rlimit_nofile 65535;
error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
    use epoll;
    worker_connections 65535;
    multi_accept on;
}
http {
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;
    sendfile on;
    tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    tcp_nodelay on;
    client_header_buffer_size 4k;
    open_file_cache max=262140 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 1;
    client_header_timeout 15;
    client_body_timeout 15;
    reset_timedout_connection on;
    send_timeout 15;
    server_tokens off;
    client_max_body_size 10m;
    # fastcgi tuning
    fastcgi_connect_timeout 600;
    fastcgi_send_timeout 600;
    fastcgi_read_timeout 600;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    fastcgi_temp_path /usr/local/nginx1.10/nginx_tmp;
    fastcgi_intercept_errors on;
    fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;
    # gzip tuning
    gzip on;
    gzip_min_length 2k;
    gzip_buffers 4 32k;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
    gzip_vary on;
    gzip_proxied any;
    server {
        listen 80;
        server_name www.benet.com;
        #charset koi8-r;
        #access_log logs/host.access.log main;
        location ~* ^.+\.(jpg|gif|png|swf|flv|wma|asf|mp3|mmf|zip|rar)$ {
            valid_referers none blocked www.benet.com benet.com;
            if ($invalid_referer) {
                #return 302 http://www.benet.com/img/nolink.jpg;
                return 404;
            }
            access_log off;
        }
        location / {
            root html;
            index index.php index.html index.htm;
        }
        location ~* \.(ico|jpe?g|gif|png|bmp|swf|flv)$ {
            expires 30d;
            #log_not_found off;
            access_log off;
        }
        location ~* \.(js|css)$ {
            expires 7d;
            log_not_found off;
            access_log off;
        }
        location ~* ^/(favicon\.ico|robots\.txt)$ {
            access_log off;
            log_not_found off;
        }
        location /status {
            stub_status on;
        }
        location ~ .*\.(php|php5)?$ {
            root /var/www/html/webphp;
            fastcgi_pass 192.168.1.9:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
            fastcgi_cache cache_fastcgi;
            fastcgi_cache_valid 200 302 1h;
            fastcgi_cache_valid 301 1d;
            fastcgi_cache_valid any 1m;
            fastcgi_cache_min_uses 1;
            fastcgi_cache_use_stale error timeout invalid_header http_500;
            fastcgi_cache_key http://$host$request_uri;
        }
        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root html;
        #    fastcgi_pass 127.0.0.1:9000;
        #    fastcgi_index index.php;
        #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        #    include fastcgi_params;
        #}
        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny all;
        #}
    }
    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen 8000;
    #    listen somename:8080;
    #    server_name somename alias another.alias;
    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}
    # HTTPS server
    #
    #server {
    #    listen 443 ssl;
    #    server_name localhost;
    #    ssl_certificate cert.pem;
    #    ssl_certificate_key cert.key;
    #    ssl_session_cache shared:SSL:1m;
    #    ssl_session_timeout 5m;
    #    ssl_ciphers HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers on;
    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}
}
Reload the nginx service:
Open port 80 in the firewall:
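These two steps can be done from the shell. The commands below are a sketch that assumes nginx was installed under /usr/local/nginx1.10 (as in the configuration above) on a CentOS-style system using firewalld; adjust the firewall commands if your system uses iptables:

```shell
# check the configuration for syntax errors before reloading
/usr/local/nginx1.10/sbin/nginx -t

# reload nginx without dropping active connections
/usr/local/nginx1.10/sbin/nginx -s reload

# open TCP port 80 (firewalld; use iptables rules on older systems)
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload
```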
III. Verification and stress testing
(1) Verify hotlink protection
Use Apache to host a test site with the domain name www.test.com. On the test page, create a hyperlink pointing to an image on the nginx site.
Click the link on the page:
From the picture above, you can see that the hotlink protection settings are in effect.
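Hotlink protection can also be checked from the command line with curl by setting the Referer header explicitly. This is a sketch; the image name test.jpg is a hypothetical example and must exist under the protected site:

```shell
# request with a foreign Referer: should be rejected (404 per the config)
curl -s -o /dev/null -w "%{http_code}\n" -e "http://www.test.com/" http://www.benet.com/test.jpg

# request with an allowed Referer: should return 200
curl -s -o /dev/null -w "%{http_code}\n" -e "http://www.benet.com/" http://www.benet.com/test.jpg
```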
(2) Verify the gzip function
Use the Chrome browser to test access, as shown in the figure below (tip: press F12 to open the developer tools before loading the test page):
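Alternatively, check the response headers with curl: a "Content-Encoding: gzip" line in the output confirms that compression is active for the page.

```shell
# send an Accept-Encoding header and inspect the response headers
curl -s -I -H "Accept-Encoding: gzip" http://www.benet.com/index.html | grep -i "content-encoding"
```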
(3) Stress test
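The output below reports a concurrency level of 500 and 50000 total requests against /index.html, so the first test was presumably run with an ab command along these lines:

```shell
ab -c 500 -n 50000 http://www.benet.com/index.html
```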
This is ApacheBench, Version 2.3
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking www.benet.com (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Completed 50000 requests
Finished 50000 requests
Server Software:        IIS
Server Hostname:        www.benet.com
Server Port:            80
Document Path:          /index.html
Document Length:        612 bytes
Concurrency Level:      500
Time taken for tests:   5.734 seconds
Complete requests:      50000
Failed requests:        0
Write errors:           0
Total transferred:      41800000 bytes
HTML transferred:       30600000 bytes
Requests per second:    8719.82 [#/sec] (mean)
Time per request:       57.341 [ms] (mean)
Time per request:       0.115 [ms] (mean, across all concurrent requests)
Transfer rate:          7118.92 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1   25   4.2     25      38
Processing:     7   32   5.5     31      47
Waiting:        4   24   6.8     21      39
Total:         40   57   3.9     57      71
Percentage of the requests served within a certain time (ms)
50% 57
66% 59
75% 59
80% 60
90% 61
95% 62
98% 63
99% 64
100% 71 (longest request)
Run a second stress test and compare the results with the first.
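The output below reports a concurrency level of 1000 and 100000 total requests against the same page, so this second run was presumably invoked like this:

```shell
ab -c 1000 -n 100000 http://www.benet.com/index.html
```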
This is ApacheBench, Version 2.3
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking www.benet.com (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software:        IIS
Server Hostname:        www.benet.com
Server Port:            80
Document Path:          /index.html
Document Length:        612 bytes
Concurrency Level:      1000
Time taken for tests:   12.010 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      83600000 bytes
HTML transferred:       61200000 bytes
Requests per second:    8326.49 [#/sec] (mean)
Time per request:       120.099 [ms] (mean)
Time per request:       0.120 [ms] (mean, across all concurrent requests)
Transfer rate:          6797.80 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1   53   8.9     53      82
Processing:    17   67  11.4     66      98
Waiting:        0   49  14.3     43      84
Total:         70  119   6.5    120     140
Percentage of the requests served within a certain time (ms)
50% 120
66% 122
75% 123
80% 124
90% 126
95% 128
98% 129
99% 130
100% 140 (longest request)
(4) Use xcache to accelerate php
1) wget http://xcache.lighttpd.net/pub/Releases/3.2.0/xcache-3.2.0.tar.gz # download
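The compile-and-install commands are not shown in the original. A typical build sketch for a PHP extension like this, assuming PHP was installed under /usr/local/php5.6 (as the extension path below suggests), would be:

```shell
tar zxf xcache-3.2.0.tar.gz
cd xcache-3.2.0
/usr/local/php5.6/bin/phpize        # generate the configure script for this PHP build
./configure --enable-xcache --with-php-config=/usr/local/php5.6/bin/php-config
make && make install                # installs xcache.so into the PHP extension directory
```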
After the installation completes, output like the following appears. Note the path below; it will be needed later:
/usr/local/php5.6/lib/php/extensions/no-debug-non-zts-20131226/
2) Create the xcache cache file
# touch /tmp/xcache
# chmod 777 /tmp/xcache
3) Copy the xcache web administration program into the website root directory
[root@www xcache-3.2.0]# cp -r htdocs/ /usr/local/nginx1.10/html/xcache
4) Configure php to support xcache
vi /etc/php.ini    # edit the configuration file, appending the following at the end
[xcache-common]
extension = /usr/local/php5.6/lib/php/extensions/no-debug-non-zts-20131226/xcache.so
[xcache.admin]
xcache.admin.enable_auth = Off
[xcache]
xcache.shm_scheme = "mmap"
xcache.size = 60M
xcache.count = 1
xcache.slots = 8K
xcache.ttl = 0
xcache.gc_interval = 0
xcache.var_size = 64M
xcache.var_count = 1
xcache.var_slots = 8K
xcache.var_ttl = 0
xcache.var_maxttl = 0
xcache.var_gc_interval = 300
xcache.test = Off
xcache.readonly_protection = Off
xcache.mmap_path = "/tmp/xcache"
xcache.coredump_directory = ""
xcache.cacher = On
xcache.stat = On
xcache.optimizer = Off
[xcache.coverager]
xcache.coverager = On
xcache.coveragedump_directory = ""
5) Test
service php-fpm restart    # restart php-fpm
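To confirm that the extension actually loaded after the restart, you can query PHP's module list from the command line (installation path assumed as above):

```shell
/usr/local/php5.6/bin/php -m | grep -i xcache    # prints the module name if it loaded
```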
Open the xcache directory under the website root in a browser:
http://www.benet.com/xcache and you will see the following page:
Stress-test a php dynamic page:
[root@www ~]# ab -c 1000 -n 100000 http://www.benet.com/test.php
This is ApacheBench, Version 2.3
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking www.benet.com (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software:        IIS
Server Hostname:        www.benet.com
Server Port:            80
Document Path:          /test.php
Document Length:        85102 bytes
Concurrency Level:      1000
Time taken for tests:   13.686 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      8527900000 bytes
HTML transferred:       8510200000 bytes
Requests per second:    7306.71 [#/sec] (mean)
Time per request:       136.861 [ms] (mean)
Time per request:       0.137 [ms] (mean, across all concurrent requests)
Transfer rate:          608504.46 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   17   5.5     17      81
Processing:    21  119  10.8    121     140
Waiting:        1   17   6.7     16      68
Total:         50  136   8.1    137     151
Percentage of the requests served within a certain time (ms)
50% 137
66% 139
75% 140
80% 141
90% 143
95% 144
98% 146
99% 148
100% 151 (longest request)