Blogger QQ:819594300
Blog address: http://zpf666.blog.51cto.com/
If you have any questions, feel free to contact the blogger, who will be happy to answer them. Thank you for your support!
Proxy service can be simply divided into forward proxy and reverse proxy:
Forward proxy: used to proxy connection requests from an internal network to the Internet (such as a VPN or NAT gateway). The client specifies the proxy server and sends HTTP requests that would otherwise go directly to the target Web server to the proxy first; the proxy then accesses the Web server and relays the Web server's response back to the client.
Reverse proxy: the opposite of a forward proxy. If a local area network provides resources to the Internet and lets Internet users access those resources, a proxy server can be set up, and the service it provides is a reverse proxy. The reverse proxy server accepts connections from the Internet, forwards the requests to servers on the internal network, and sends the responses back to the Internet clients that requested them.
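The difference is easiest to see from the client side. A quick sketch with curl (the addresses are illustrative; 3128 is a typical forward-proxy port):

# forward proxy: the client explicitly points at the proxy,
# which fetches an arbitrary Internet site on its behalf
curl -x http://10.0.0.5:3128 http://www.example.com/

# reverse proxy: the client simply requests the site; the proxy's address
# IS the site's public address, and the choice of backend stays invisible
curl http://www.example.com/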
I. Nginx reverse proxy: the scheduler for Web servers
1. A reverse proxy (Reverse Proxy) means that a proxy server accepts connection requests from clients, forwards each request to a Web server on the internal network (which may be Apache, Nginx, Tomcat, IIS, etc.), and returns the result obtained from that Web server to the client that requested the connection. To the outside, the proxy server itself appears to be the server.
The reverse proxy server receives HTTP requests on behalf of the site's Web servers and forwards them. Acting as a reverse proxy, nginx can forward requests to different back-end Web servers according to the content of each request (for example, separating static from dynamic content), and multiple virtual hosts can be created on nginx, so that entering different domain names (URLs) in the browser reaches different back-end Web servers or Web clusters.
2. The role of a reverse proxy
① Protects the website: every request from the Internet must first pass through the proxy server.
② Accelerates Web requests through caching: static resources from the real Web servers can be cached, reducing their load.
③ Implements load balancing: acts as a load-balancing server, distributing requests evenly and balancing the load across the servers in the cluster.
II. What is nginx?
1. Introduction to nginx
Nginx is a lightweight Web server, reverse proxy, and e-mail proxy server. It is known for its stability, rich feature set, simple configuration files, and low consumption of system resources. Nginx (pronounced "engine x") was developed by the Russian programmer Igor Sysoev and was first used by Rambler, a large Russian portal and search engine. The software is distributed under a BSD-like license and runs on operating systems such as UNIX, GNU/Linux, BSD, Mac OS X, Solaris, and Microsoft Windows.
Application status of Nginx
Nginx is already running on Rambler Media (www.rambler.ru), Russia's largest portal, and more than 20 percent of Russia's virtual hosting platforms use Nginx as a reverse proxy server.
In China, many websites such as Taobao, Sina Blog, Sina Podcast, NetEase News, Six Rooms, 56.com, Discuz!, Shuimu Community, Douban, YUPOO, Haini, and Xunlei Online use Nginx as a Web server or reverse proxy server.
2. The core features of Nginx.
(1) Cross-platform: Nginx can be compiled and run on most operating systems, and a Windows build also exists.
(2) Extremely simple configuration: it is very easy to use.
(3) Non-blocking, highly concurrent connections: official tests show support for 50,000 concurrent connections, and 20,000 to 30,000 concurrent connections are reached in real production environments. (This is thanks to Nginx using the epoll model.)
Note:
For a Web server, consider the basic flow of a request: establish the connection, receive data, send data. At the system level, these steps are low-level read and write events.
With blocking calls, if a read/write event is not ready, the call can only wait: the current thread is suspended until the event becomes ready.
With non-blocking calls, the call returns immediately to say the event is not ready yet: check back later. In the meantime the thread can do other work, then poll the event again until it is ready. Nothing blocks anymore, but the event's status must be checked repeatedly; more gets done, yet the polling cost is not small. In short, a non-blocking call returns immediately even when the result is not yet available, so the current thread is never suspended.
(4) event-driven: the communication mechanism adopts epoll model to support larger concurrent connections.
Polling event status in non-blocking mode brings a lot of overhead, hence the asynchronous, non-blocking event-handling mechanism. It allows monitoring multiple events at once; the monitoring call itself does not block, but a timeout can be set, and within that timeout the call returns as soon as some event is ready. This mechanism solves the problems of both blocking calls and plain non-blocking calls.
Take the epoll model as an example: when an event is not ready, it is registered with epoll and waits there; as soon as an event is ready, it is handled. This way a large number of concurrent requests can be handled; "concurrent" here means outstanding requests. There is only one thread, so only one request is actually processed at any instant; the thread simply keeps switching between requests, and each switch happens voluntarily because an asynchronous event is not ready. The switching costs almost nothing: think of it as looping over the events that are ready and processing them.
Compared with multithreading, this kind of event handling has great advantages: no threads need to be created, each request takes very little memory, there is no context switching, and event handling is extremely lightweight. However high the concurrency, no resources are wasted on unnecessary context switches. An Apache server, by contrast, dedicates a worker thread to each request; with thousands of concurrent requests there are thousands of threads running at once, which is no small challenge for the operating system: threads consume a lot of memory and thread context switches cost a lot of CPU, so performance naturally stalls and degrades badly under high concurrency.
Summary: with the asynchronous, non-blocking event-handling mechanism, a single Nginx process loops over the events that are ready, achieving high concurrency with a light footprint.
(5) Master/Worker structure: a master process that generates one or more worker processes.
Note: the Master-Worker design pattern consists of two components, Master and Worker. The Master maintains the Worker queue and hands requests to multiple Workers for parallel execution; the Workers perform the actual computation and return the results to the Master.
What does this process model buy nginx? Using independent processes means they cannot affect one another: after one process exits, the others keep working and service is not interrupted, while the Master process quickly starts a new Worker. An abnormal Worker exit necessarily means a bug in the program; it causes all requests on that Worker to fail, but not all requests overall, so the risk is contained.
(6) Small memory footprint: very little memory is consumed even for large numbers of concurrent requests. With 30,000 concurrent connections, 10 Nginx processes consume only about 150 MB of memory (15 MB × 10 = 150 MB).
(7) Built-in health checks: if a back-end Web server behind the Nginx proxy goes down, front-end access is not affected.
(8) Bandwidth saving: GZIP compression is supported, and headers for browser-local caching can be added.
(9) High stability: as a reverse proxy, the probability of downtime is minimal.
III. Building a load-balanced Web server cluster with Nginx + Apache
Configuring nginx as a reverse proxy
Configure nginx as a reverse proxy and load balancer, use its caching function to keep static pages in nginx so as to reduce the number of connections to the back-end servers, and check the health of the back-end Web servers.
1. Install nginx
Environment:
OS: CentOS 7.2
Nginx: 192.168.1.6
Apache1: 192.168.1.7
Apache2: 192.168.1.8
Install dependent packages such as zlib-devel, pcre-devel, etc.
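On CentOS 7.2 these can be installed with yum; a typical package set (assumed from the stock CentOS repositories) is:

yum -y install gcc gcc-c++ make pcre-devel zlib-devel openssl-devel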
Note:
Implement back-end Web load balancing with the proxy and upstream modules
Use the proxy module to cache static files
Combine nginx's default ngx_http_proxy_module and ngx_http_upstream_module to health-check the back-end servers, or use the third-party module nginx_upstream_check_module
Use the nginx-sticky-module extension to implement cookie-based session stickiness (session persistence)
Use ngx_cache_purge for more powerful cache purging
The two modules mentioned above (nginx-sticky-module and ngx_cache_purge) are third-party extensions; they must be downloaded in advance and compiled in with --add-module=src_path at build time.
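Both third-party modules are unpacked next to the nginx source tree before running configure. The download locations below are the commonly used upstream URLs, given here as an assumption rather than taken from the original article:

cd /usr/src
wget http://labs.frickle.com/files/ngx_cache_purge-2.3.tar.gz
tar zxf ngx_cache_purge-2.3.tar.gz
wget -O sticky.tar.gz https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/get/08a395c66e42.tar.gz
tar zxf sticky.tar.gz    # extracts to nginx-goodies-nginx-sticky-module-ng-08a395c66e42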
Install nginx
The full configure command is as follows:
./configure --prefix=/usr/local/nginx1.10 --user=www --group=www --with-http_stub_status_module --with-http_realip_module --with-http_ssl_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fcgi --with-pcre --add-module=../ngx_cache_purge-2.3 --with-http_flv_module --add-module=../nginx-goodies-nginx-sticky-module-ng-08a395c66e42 && make && make install
Note: all modules of nginx must be added at compile time and can no longer be dynamically loaded at run time.
Explanation:
--add-module: adds a third-party module
--with-http_gzip_static_module: adds the gzip static module
--http-client-body-temp-path=/var/tmp/nginx/client: sets a temp/cache directory, which must be created manually
Optimize the execution path of nginx programs
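Before compiling, create the run-as user and the temp directory referenced by the configure options; the symlink (the "execution path" optimization above) can be made once the binary exists:

useradd -M -s /sbin/nologin www
mkdir -p /var/tmp/nginx/client
ln -s /usr/local/nginx1.10/sbin/nginx /usr/local/sbin/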
2. Write nginx service script:
#!/bin/bash
# chkconfig: 2345 99 20
# description: Nginx Service Control Script
PROG="/usr/local/nginx1.10/sbin/nginx"
PIDF="/usr/local/nginx1.10/logs/nginx.pid"
case "$1" in
start)
    netstat -anplt | grep ":80" &> /dev/null && pgrep "nginx" &> /dev/null
    if [ $? -eq 0 ]
    then
        echo "Nginx service already running."
    else
        $PROG -t &> /dev/null
        if [ $? -eq 0 ]; then
            $PROG
            echo "Nginx service start success."
        else
            $PROG -t
        fi
    fi
    ;;
stop)
    netstat -anplt | grep ":80" &> /dev/null && pgrep "nginx" &> /dev/null
    if [ $? -eq 0 ]
    then
        kill -s QUIT $(cat $PIDF)
        echo "Nginx service stop success."
    else
        echo "Nginx service already stop"
    fi
    ;;
restart)
    $0 stop
    $0 start
    ;;
status)
    netstat -anplt | grep ":80" &> /dev/null && pgrep "nginx" &> /dev/null
    if [ $? -eq 0 ]
    then
        echo "Nginx service is running."
    else
        echo "Nginx is stop."
    fi
    ;;
reload)
    netstat -anplt | grep ":80" &> /dev/null && pgrep "nginx" &> /dev/null
    if [ $? -eq 0 ]
    then
        $PROG -t &> /dev/null
        if [ $? -eq 0 ]; then
            kill -s HUP $(cat $PIDF)
            echo "reload Nginx config success."
        else
            $PROG -t
        fi
    else
        echo "Nginx service is not run."
    fi
    ;;
*)
    echo "Usage: $0 {start | stop | restart | reload | status}"
    exit 1
esac
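Save the script as /etc/init.d/nginx, then register it the usual SysV way:

chmod +x /etc/init.d/nginx
chkconfig --add nginx
chkconfig nginx on
service nginx start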
Note: to add a third-party module to an already-installed nginx, you still need to recompile, but to avoid overwriting your existing installation, do not run make install; copy the executable by hand instead. The procedure is as follows:
nginx -V    # view the modules compiled into the current nginx
[root@www nginx-1.10.2]# ./configure --add-module=...    # the existing options plus your third-party module
[root@www nginx-1.10.2]# make    # do NOT run make install; copy manually, backing up first
[root@www nginx-1.10.2]# cp /usr/local/nginx1.10/sbin/nginx /usr/local/nginx1.10/sbin/nginx.bak
[root@www nginx-1.10.2]# cp objs/nginx /usr/local/nginx1.10/sbin/nginx
Configure the nginx reverse proxy: reverse proxy + load balancing + health checks
View the modules loaded by nginx:
Again: all modules of nginx must be added at compile time and can no longer be dynamically loaded at run time.
3. Nginx-sticky-module module:
The function of this module is to send requests from the same client (browser) to the same back-end server by sticking a cookie. To some extent this solves the problem of session synchronization across multiple back-end servers, because synchronization is no longer needed; with plain round-robin (RR) scheduling, operators would have to arrange session synchronization themselves.
The built-in ip_hash can also distribute requests by client IP, but it easily causes load imbalance: if nginx sits behind a CDN, or clients arrive from the same local area network, nginx sees the same client IP for many users. The cookie set by nginx-sticky-module expires, by default, when the browser is closed.
This module does not work for browsers that do not support cookies or have cookies disabled manually; in that case sticky falls back to RR. It cannot be used together with ip_hash.
Description: configuration is very easy; generally a single sticky directive is enough.
For more information, please see the official document https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng.
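A minimal upstream block using the module, matching the server pool used later in this article, looks like this:

upstream backend {
    sticky;
    server 192.168.1.7:80;
    server 192.168.1.8:80;
}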
4. Other load-balancing scheduling schemes:
Along the way, here are the other scheduling algorithms supported by nginx's load-balancing module:
Round robin (default): requests are assigned to the back-end servers one by one in order of arrival. If a back-end server goes down, it is automatically removed so that user access is unaffected. weight specifies the polling weight: the higher the value, the higher the probability of being selected; it is mainly used when back-end servers have uneven performance.
ip_hash: each request is assigned according to a hash of the client IP, so visitors from the same IP consistently reach the same back-end server, which effectively addresses session sharing for dynamic pages. Of course, if that node becomes unavailable the request goes to the next node, and without session synchronization the user is logged out.
least_conn: the request is sent to the realserver with the fewest active connections; the weight value is taken into account.
url_hash: distributes requests according to a hash of the requested URL, directing each URL to the same back-end server, which further improves the efficiency of back-end cache servers. Nginx itself does not support url_hash; to use this algorithm you must install Nginx's hash package, nginx_upstream_hash.
fair: a smarter load-balancing algorithm than the above. It balances load intelligently according to page size and load time; that is, requests are assigned according to back-end response time, with shorter response times given priority. Nginx itself does not support fair; to use this algorithm you must download Nginx's upstream_fair module.
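For comparison, this is how the built-in alternatives would look in an upstream block (a sketch using the same two backends as this article; only one scheduling method may be used per upstream, so pick one variant):

# variant 1: ip_hash — pin each client IP to one backend
upstream backend {
    ip_hash;
    server 192.168.1.7:80;
    server 192.168.1.8:80;
}

# variant 2: least_conn — fewest active connections wins; weight is honored
upstream backend {
    least_conn;
    server 192.168.1.7:80 weight=2;
    server 192.168.1.8:80 weight=1;
}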
5. Load balancing and health check:
Strictly speaking, nginx has no built-in active health checking for back-end load-balanced nodes; instead, directives in the default ngx_http_proxy_module and ngx_http_upstream_module automatically switch to the next node when a back-end node fails.
weight: the polling weight, also usable with ip_hash. The default is 1.
max_fails: the number of failed requests allowed; the default is 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: has two meanings. With max_fails=2 fail_timeout=10s, at most 2 failures are allowed within 10 seconds, and after 2 failures no requests are assigned to that server for 10 seconds.
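Putting weight, max_fails, and fail_timeout together gives the passive health checking described above; this is the exact server line used in the full configuration later in this article:

server 192.168.1.7:80 weight=1 max_fails=2 fail_timeout=10s;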
6. Using nginx's proxy cache:
Caching means storing static files such as js, css, and images from the back-end servers in the cache directory specified by nginx, which both reduces the load on the back-end servers and speeds up access. A problem, however, is purging the cache in time, and this is where the ngx_cache_purge module comes in: it purges the cache manually before the expiry time.
The commonly used instructions in proxy module are proxy_pass and proxy_cache.
The web caching function of nginx is mainly provided by the proxy_cache and fastcgi_cache directive sets and related directives. proxy_cache handles reverse-proxy caching of static back-end content; fastcgi_cache is mainly used for caching FastCGI dynamic content.
Description of related options:
proxy_buffering on; — enables or disables buffering of back-end server responses while proxying.
When buffering is enabled, nginx receives the response from the proxied server as quickly as possible and stores it in buffers.
proxy_temp_path: the temporary directory for the cache. The back-end response is not returned directly to the client; it is first written to a temporary file, which is then renamed into proxy_cache_path as a cache entry. Since version 0.8.9, temp and cache may reside on different file systems (partitions), but keeping them on a single file system is recommended to avoid a performance penalty.
proxy_cache_path: sets the cache directory; cached file names are the MD5 of the cache key.
levels=1:2 keys_zone=my-cache:100m describes a two-level directory structure: the first-level directory name has one character and the second-level name has two characters (set by levels=1:2). The cache zone is named my-cache with 100 MB of shared memory; this zone can be referenced many times. A cached file on disk looks similar to
/usr/local/nginx1.10/proxy_cache/c/29/b7f54b2df7773722d382f4809d65029c
inactive=600m max_size=2g means content not accessed for 600 minutes is removed automatically, and the on-disk cache may use at most 2 GB; beyond that size, the least recently used data is evicted.
By default, nginx does not cache back-end responses whose headers contain Set-Cookie (varnish behaves similarly: a request carrying a Cookie header bypasses the cache and goes straight to the backend). In nginx these headers can be ignored via proxy_ignore_headers, as follows:
Solution:
proxy_ignore_headers Set-Cookie;
proxy_hide_header Set-Cookie;
proxy_cache: references the my-cache zone defined earlier.
proxy_cache_key: defines how the cache key is generated; nginx stores cache entries hashed by the MD5 of this key.
proxy_cache_valid: sets different cache times for different response status codes; for example, normal results such as 200 and 302 are cached longer, while 404 and 500 are cached briefly. When the time is up the entry expires, whether or not it was just accessed.
The add_header directive sets a response header. Syntax: add_header name value;
The variable $upstream_cache_status reports the cache status; we can add an HTTP header in the configuration to expose it.
$upstream_cache_status takes the following states:
MISS — not in the cache; the request was sent to the backend
HIT — cache hit
EXPIRED — the cached entry expired; the request was passed to the backend
UPDATING — the cache is being refreshed; the old (stale) reply is served
STALE — an expired (stale) reply was served because the backend could not provide a fresh one
expires: sets Expires: or Cache-Control: max-age in the response header, telling the browser how long to cache the response.
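Taken together, the cache-related directives of this section form a fragment like the following (the values match the full configuration below):

proxy_cache my-cache;
proxy_cache_key $host$uri$is_args$args;
proxy_cache_valid 200 304 301 302 8h;
proxy_cache_valid 404 1m;
add_header Nginx-Cache $upstream_cache_status;
expires 30d;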
The following nginx.conf is a complete example of nginx acting as a front-end reverse proxy server, handling static files such as js and png itself and forwarding dynamic requests such as jsp/php to other (tomcat/apache) servers.
user www www;
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
worker_rlimit_nofile 10240;
pid logs/nginx.pid;
events {
    use epoll;
    worker_connections 4096;
}
http {
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    '"$upstream_cache_status"';
    access_log logs/access.log main;
    server_tokens off;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #Compression Settings
    gzip on;
    gzip_comp_level 6;
    gzip_http_version 1.1;
    gzip_proxied any;
    gzip_min_length 1k;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
    gzip_vary on;
    #end gzip
    #http_proxy Settings
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 75;
    proxy_send_timeout 75;
    proxy_read_timeout 75;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    proxy_buffering on;
    proxy_temp_path /usr/local/nginx1.10/proxy_temp;
    proxy_cache_path /usr/local/nginx1.10/proxy_cache levels=1:2 keys_zone=my-cache:100m inactive=600m max_size=2g;
    #load balance Settings
    upstream backend {
        sticky;
        server 192.168.1.7:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.1.8:80 weight=1 max_fails=2 fail_timeout=10s;
    }
    #virtual host Settings
    server {
        listen 80;
        server_name localhost;
        charset utf-8;
        location ~ /purge(/.*) {
            allow 127.0.0.1;
            allow 192.168.1.0/24;
            deny all;
            proxy_cache_purge my-cache $host$1$is_args$args;
        }
        location / {
            index index.php index.html index.htm;
            proxy_pass http://backend;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_ignore_headers Set-Cookie;
            proxy_hide_header Set-Cookie;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        }
        location ~ .*\.(gif|jpg|png|html|htm|css|js|ico|swf|pdf)(.*) {
            proxy_pass http://backend;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_cache my-cache;
            add_header Nginx-Cache $upstream_cache_status;
            proxy_cache_valid 200 304 301 302 8h;
            proxy_cache_valid 404 1m;
            proxy_cache_valid any 1d;
            proxy_cache_key $host$uri$is_args$args;
            expires 30d;
        }
        location /nginx_status {
            stub_status on;
            access_log off;
            allow 192.168.1.0/24;
            deny all;
        }
    }
}
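After saving nginx.conf, a quick syntax check before restarting avoids surprises (standard nginx usage):

/usr/local/nginx1.10/sbin/nginx -t
service nginx restart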
Note: the nginx proxy server uses server_name localhost; here, while each back-end web server must be configured with ServerName www.benet.com.
Description of common instructions:
Main global configuration:
worker_processes 4;
In the top-level (main) section of the configuration file: the number of worker processes; the master process receives requests and assigns them to workers. This value can simply be set to the number of CPU cores (grep ^processor /proc/cpuinfo | wc -l), and auto is also accepted. If ssl and gzip are enabled it may be set equal to, or even twice, the number of logical CPUs, which can reduce I/O operations. If the nginx server runs other services, consider lowering it appropriately.
worker_cpu_affinity
Also written in the main section. Under high concurrency, pinning workers to CPUs reduces the performance loss from rebuilding register state and other context when processes migrate between CPU cores. For example: worker_cpu_affinity 0001 0010 0100 1000; (quad-core).
Attached:
CPU working status: (after entering top, press 1 to view)
The configuration above means: a 4-core CPU starting 4 processes. 0001 enables the first CPU core, 0010 the second, and so on; the mask has as many bits as there are cores, with 1 enabling a core and 0 disabling it.
For example:
1. 2-core CPU, starting 2 processes:
worker_processes 2;
worker_cpu_affinity 01 10;
2. 2-core CPU, starting 4 processes:
worker_processes 4;
worker_cpu_affinity 01 10 01 10;
3. 2-core CPU, starting 8 processes:
worker_processes 8;
worker_cpu_affinity 01 10 01 10 01 10 01 10;
4. 8-core CPU, starting 2 processes:
worker_processes 2;
worker_cpu_affinity 10101010 01010101;
Description: 10101010 enables cores 2, 4, 6 and 8; 01010101 enables cores 1, 3, 5 and 7.
Use apache's ab benchmark to watch nginx's CPU usage:
If the load is spread fairly evenly across the CPU cores, nginx is successfully using the multicore CPU.
When the test ends, the load on all the CPU cores should drop at the same time.
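A typical ab invocation for this kind of check might look like the following (the request count and concurrency are illustrative, not the original article's figures):

ab -n 100000 -c 1000 http://192.168.1.6/index.html

While it runs, open top and press 1 to watch the per-core load.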
worker_connections 4096;
Written in the events section: the maximum number of connections each worker process can handle concurrently (including all connections to clients and to proxied back-end servers).
worker_rlimit_nofile 10240;
Written in the main section: the limit on the maximum number of open files per worker process. If unset, the value is the operating system's limit (ulimit -n), which can go up to the OS maximum of 65535. Set this value high so that nginx does not run into "too many open files" problems.
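The operating-system side of the limit can be inspected and raised with ulimit; this only affects the current shell, so for a permanent change edit /etc/security/limits.conf:

ulimit -n          # show the current open-file limit
ulimit -n 65535    # raise it for the current shell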
use epoll;
Written in the events section. On Linux, nginx uses the epoll event model by default, which is a large part of why nginx is so efficient on Linux. On OpenBSD or FreeBSD, Nginx uses kqueue, an efficient event model similar to epoll.
Http server:
Configuration parameters related to providing the HTTP service, for example: whether to use keepalive, whether to compress with gzip, and so on.
sendfile on;
Turns on efficient file transfer mode.
keepalive_timeout 65; — the keep-alive timeout in seconds. When a long-lived connection requests many small files, it reduces the cost of re-establishing connections; but if the timeout is too long and there are many users, held-open connections consume a lot of resources.
client_max_body_size 10m;
The maximum number of bytes allowed in a single client request body; raise this limit if large files are uploaded.
client_body_buffer_size 128k;
The buffer size used to buffer the client request body.
server_tokens off;
Hides the nginx version number.
Module http_proxy:
This module implements the function of nginx as a reverse proxy server, including caching
proxy_connect_timeout
The timeout for nginx to establish a connection with the back-end server (proxy connection timeout).
proxy_read_timeout
Defines the timeout for reading a response from the back-end server. This timeout is the maximum interval between two adjacent read operations, not the maximum time for the whole response to be transmitted. If the back-end server transmits no data within the timeout, the connection is closed.
proxy_send_timeout
Defines the timeout for transmitting a request to the back-end server. This timeout is the maximum interval between two adjacent write operations, not the maximum time for the whole request to be transmitted. If the back-end server receives no data within the timeout, the connection is closed.
proxy_buffer_size 4k;
Sets the size of the buffer used when nginx reads the first part of the response from the proxied server; this part usually contains a small response header. By default this buffer is the size of one buffer set by the proxy_buffers directive, but it can be made smaller.
proxy_buffers 8 4k;
Syntax: proxy_buffers the_number is_size;
Sets the number of buffers per connection to number and the size of each buffer to size. These buffers hold the response read from the proxied server. By default each buffer equals one memory page, 4K or 8K depending on the platform.
Attached: viewing the Linux memory page size:
[root@www ~]# getconf PAGESIZE
4096
or
[root@www ~]# getconf PAGE_SIZE
4096
proxy_busy_buffers_size 64k;
The buffer size under high load (by default, twice the single-buffer size set by the proxy_buffers directive).
proxy_max_temp_file_size
When proxy_buffers cannot hold the whole back-end response, part of it is saved to a temporary file on disk. This value sets the maximum temporary file size; the default is 1024 MB.
proxy_temp_file_write_size 64k;
Limits how much data is written to the temporary file at a time when the proxy buffers a response to a temporary file.
Module http_gzip:
gzip on; — enables gzip compression of output, reducing network transfer.
gzip_min_length 1k; — sets the minimum page size, in bytes, eligible for compression, taken from the Content-Length response header. Setting it above 1k is recommended; below 1k, compression may add overhead.
gzip_buffers 4 16k; — sets how many buffers, in what unit size, are allocated to hold the gzip-compressed result stream. 4 16k means memory is requested in 16k units, up to 4 times the original data size. If unset, the default requests the same amount of memory as the original data to store the gzip output.
gzip_http_version 1.1; — identifies the HTTP protocol version. Early browsers did not support gzip compression and users would see garbage, so this option was added to support older clients. If you use Nginx as a reverse proxy and expect gzip compression to work, set it to 1.1, since end-to-end communication is http/1.1.
gzip_comp_level 6; — the gzip compression level: 1 compresses least but is fastest; 9 compresses most but is slowest (fast transfer but CPU-hungry).
gzip_types — the MIME types to compress; the "text/html" type is always compressed, whether listed or not.
Default: gzip_types text/html (js/css files are not compressed by default)
# compression type: MIME types to match for compression
# the wildcard text/* cannot be used
# text/html is compressed by default (whether specified or not)
# see conf/mime.types for the text file types that can be set
gzip_proxied any; — when Nginx is used as a reverse proxy, decides whether to compress the responses to proxied requests, depending on the request and the reply. Whether the request was proxied is determined by the "Via" field in the request header. Several different parameters may be given at once:
off — disables compression for all proxied responses
expired — enables compression if the response header contains "Expires"
no-cache — enables compression if the response header contains "Cache-Control: no-cache"
no-store — enables compression if the response header contains "Cache-Control: no-store"
private — enables compression if the response header contains "Cache-Control: private"
no_last_modified — enables compression if the response header does not contain "Last-Modified"
no_etag — enables compression if the response header does not contain "ETag"
auth — enables compression if the request header contains "Authorization"
any — unconditionally enables compression
gzip_vary on; — related to the HTTP headers: adds a Vary header for proxy servers. Some browsers support compression and some do not, so to avoid wasting effort on clients that cannot decompress, the decision is made from the client's HTTP headers.
Module http_upstream:
This module implements load balancing from the client to the back-end servers with a simple scheduling algorithm. upstream is followed by the name of the load-balancer pool, and the back-end realservers are listed inside { } as host:port options;. If only one backend is proxied, it can also be written directly in proxy_pass.
location:
root /var/www/html;
Defines the server's default site root. If the location URL matches a subdirectory or file, root is not used; it is usually placed inside the server directive or at its top level.
index index.jsp index.html index.htm;
Defines the file names served by default under the path; usually follows root.
proxy_pass http://backend;
Sends the request to the list of servers defined by backend, i.e. the reverse proxy, corresponding to the upstream load balancer. You can also write proxy_pass http://ip:port.
proxy_redirect off;
Specifies whether to modify the Location and Refresh header values in responses returned by the proxied server.
For example:
Sets the replacement text for the back-end server's "Location" and "Refresh" response headers. Suppose the back-end server returns the response header "Location: http://localhost:8000/two/some/uri/"; then the directive
proxy_redirect http://localhost:8000/two/ http://frontend/one/;
will rewrite that string to
"Location: http://frontend/one/some/uri/".
proxy_set_header Host $host;
Allows redefining or adding request headers that are sent to the back-end server.
Host is the hostname of the request. When the nginx reverse proxy sends the request to the real back-end server, the Host field in the request header is, by default, rewritten to the server named in the proxy_pass directive. Because nginx acts as a reverse proxy, if the real back-end server does hotlink protection, routing, or any decision based on the Host field of the HTTP request header, requests will fail unless the nginx reverse-proxy layer overrides the Host field in the request header.
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
The back-end Web server can obtain the user's real IP through X-Forwarded-For.
The X-Forwarded-For field records who originated the HTTP request. If the reverse proxy does not rewrite the request header, the real back-end server sees all requests as coming from the reverse proxy server, and if the backend has an anti-proxy policy, the machine will be blocked. It is therefore common to add two settings to a reverse-proxying nginx configuration to modify the HTTP request headers:
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
Adds failover: if the back-end server returns errors such as 502 or 504, or times out, the request is automatically forwarded to another server in the upstream load-balancing pool.
proxy_set_header X-Real-IP $remote_addr;
Lets the web server obtain the user's real IP; in fact, the real client IP can also be obtained through X-Forwarded-For.
7. Verification: the caching, load-balancing, and health-check functions of the nginx reverse proxy
Description:
1) First, test the caching function.
If a cached static file needs to be updated before its cache time expires, the cache must be cleared manually.
Instructions for using the ngx_cache_purge cache-purging module
When testing with the Chrome browser, press F12 to open the developer tools, select the Network tab, and inspect the Response Headers to see whether the request was served from the cache.
Description: the first visit is a MISS; refreshing the page then yields a HIT.
The server accessed was 192.168.1.6 and the cache was hit.
You can also check the cache directory or the nginx access log.
Clear the cache:
The proxy_cache_purge directive configured above makes purging the cache easy, but it requires the third-party ngx_cache_purge module.
Use the ngx_cache_purge module to purge the cache (deleting the files in the cache directory directly also works):
Request the URL with GET, matching the location ~ /purge(/.*) defined in the configuration file;
for example, browsing to http://192.168.1.6/purge/your/may/path purges the cache for /your/may/path.
Cache cleared successfully.
Note:
(1) purge is a directive provided by the ngx_cache_purge module
(2) /your/may/path is the URL path of the cached file to purge
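For example, assuming a cached object at the illustrative path /your/may/path, the check-then-purge sequence with curl would be:

curl -I http://192.168.1.6/your/may/path         # response carries Nginx-Cache: HIT once cached
curl -I http://192.168.1.6/purge/your/may/path   # purge it (must come from an allowed IP)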
2) With only one client available to verify load balancing and health checking, first turn off the caching function and session stickiness.
Test:
Verify the health check:
First, shut down the web service of a backend web server:
Start validation:
Access continues without interruption, but all requests now land on apache2's web page.
Restart the web service on the downed apache1:
Verify again:
Access is normal and alternates between the two servers again.
View the access log on the back-end server:
As you can see, the client recorded in the access log is the IP of the nginx reverse proxy server. How do you make the log record the client's real IP instead of the nginx proxy's?
The solution is as follows:
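The usual fix is on the Apache side: log the X-Forwarded-For header that nginx adds rather than the connecting IP. A sketch for the backend's httpd.conf (the original screenshot is unavailable, so this shows the standard approach rather than the article's exact lines):

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined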
Check again:
Note: 192.168.1.4 is the IP of my client.