This article presents a sample analysis of Nginx reverse proxy and load balancing. It is meant to be easy to understand and well organized; I hope it helps clear up your doubts as we study the topic together.
Reverse proxy
With a reverse proxy, the user's access request is received by the proxy server, the proxy re-initiates the request to an internal server on the user's behalf, and finally returns the internal server's response to the user. To the outside, the proxy server presents itself as the website; to the internal server, the client making the request is the proxy server rather than the real visitor.
Why use reverse proxy
It helps protect the security of the website, because every request from the Internet must first pass through the proxy server.
It speeds up web requests by caching static resources (see the sketch after this list).
It makes load balancing possible.
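To illustrate the caching point above, here is a minimal, hedged sketch of a reverse proxy that caches static resources. The cache path, the zone name static_cache, and the numeric values are assumptions for illustration only and are not part of this article's environment.

# Hypothetical sketch: caching static resources at the reverse proxy.
# proxy_cache_path must sit in the http block; path, zone name and values are assumed.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m max_size=1g inactive=60m;

server {
    listen 80;
    server_name blog.syushin.org;

    location ~* \.(css|js|png|jpg|gif)$ {
        proxy_pass http://192.168.30.7;                     # back end that actually holds the files
        proxy_cache static_cache;                           # use the cache zone defined above
        proxy_cache_valid 200 60m;                          # keep successful responses for 60 minutes
        add_header X-Cache-Status $upstream_cache_status;   # expose HIT/MISS for debugging
    }
}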
Reverse proxy example
Environment description
Suppose there are two servers, A and B. Server A provides the web resources and is reachable only from the intranet. Server B has two network cards: one is on the same internal network as server A, and the other faces the external network. In this situation, user C cannot access server A directly; instead, server B can proxy user C's request and fetch the content from server A.
Hostname   NIC     IP               Description
moli-04    ens33   192.168.30.6     intranet IP, proxy server
moli-04    ens37   192.168.93.129   extranet IP, proxy server
moli-05    ens33   192.168.30.7     intranet server
Both machines have nginx installed
The moli-05 server hosts a WordPress blog with the domain name blog.syushin.org.
In this virtual machine test environment, the firewall is turned off.
Configure virtual host
Edit the virtual host configuration file on the moli-04 machine as follows:
[root@moli-04 extra]$ cat blog.syushin.org.conf
server {
    listen 80;
    server_name blog.syushin.org;
    location / {
        proxy_pass http://192.168.30.7;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Change the hosts file
Modify the hosts file on Windows and add the following entry:
192.168.93.129 blog.syushin.org
Browser testing
In the browser, visit blog.syushin.org (which now resolves to 192.168.93.129); the page served by the moli-05 machine appears, so the configuration is successful.
Load balancing
Load balancing function
Schedules and manages users' access requests
Spreads the pressure of users' access requests across multiple servers
When a load balancing cluster is running, client access requests are usually sent to a group of back-end servers through one or more front-end load balancers.
Nginx load balancing
Strictly speaking, Nginx here acts only as a reverse proxy (Nginx Proxy), but because this reverse proxy function produces the effect of a load-balancing device, Nginx load balancing can be regarded as a special kind of reverse proxy.
The main components to implement Nginx load balancing:
Nginx module                 Description
ngx_http_proxy_module        Proxy module, used to send requests to a server node or an upstream server pool.
ngx_http_upstream_module     Load balancing module, which implements the website's load balancing function and health checks of the nodes.
Introduction of upstream module
The proxy methods supported by the ngx_http_upstream_module module include proxy_pass, fastcgi_pass, and so on; proxy_pass is the one mainly used.
The upstream module lets nginx define one or more groups of node servers; when in use, requests to the website are forwarded via the proxy_pass directive to the corresponding node group defined earlier.
Example: create a node server pool
upstream blog {
    server 192.168.30.5:80 weight=5;
    server 192.168.30.6:81 weight=10;
    server 192.168.30.7:82 weight=15;
}
upstream: the keyword that creates a node server group; it is required.
blog: the name of the node server group; it is required and the name can be customized.
server: keyword, followed by an IP address, a domain name, or IP:port. The port defaults to 80 if not specified.
weight: the weight. The higher the value, the more requests are allocated. The default is 1.
Besides weight, the status values that can be set on a node server include the following (a combined sketch follows this list):
max_fails: the number of failed requests allowed; the default is 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: the time to pause the node after max_fails failures.
down: indicates that the current node server does not participate in the load, i.e. this machine never receives requests; it can be used together with ip_hash.
backup: requests are sent to this machine only when all the other non-backup machines are down or busy, so it is under the least pressure.
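Putting these status values together, here is a hedged sketch of a pool that uses them; the pool name, addresses, and numeric values are illustrative assumptions rather than part of this article's environment.

# Hypothetical sketch combining the status values described above.
upstream blog_backup_pool {
    server 192.168.30.5:80 weight=5 max_fails=2 fail_timeout=30s;  # paused for 30s after 2 failures
    server 192.168.30.6:80 down;      # never participates in the load
    server 192.168.30.7:80 backup;    # used only when the others are down or busy
}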
Use the upstream of the domain name
upstream blog2 {
    server www.syushin.com weight=5;
    server blog.syushin.org down;
    server blog.syushin.cc backup;
}
Scheduling algorithm
rr polling (round robin; the default scheduling algorithm, a static scheduling algorithm)
Client requests are assigned to different back-end node servers one by one, in the order in which they arrive (a minimal sketch follows).
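A minimal sketch of a default round-robin pool, assuming two illustrative addresses; with no weights set, the servers receive requests in turn.

# Hypothetical sketch: default round-robin pool, requests alternate between the two servers.
upstream rr_pool {
    server 192.168.30.5:80;
    server 192.168.30.6:80;
}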
wrr (weighted round robin, a static scheduling algorithm)
Adds weights on top of rr polling. With this algorithm, the weight is proportional to the share of requests: the higher the weight value, the more requests are forwarded to that server.
For example, suppose there are 30 requests and 2 servers, A (10.0.0.1) and B (10.0.0.2). If you want A to handle 10 requests and B to handle 20, you can define the pool as follows:
upstream pools {
    server 10.0.0.1 weight=1;
    server 10.0.0.2 weight=2;
}
ip_hash (a static scheduling algorithm)
Each request is allocated according to the hash of the client IP. When a new request arrives, the client IP is hashed; as long as subsequent requests produce the same hash value, they are assigned to the same back-end server.
upstream blog_pool {
    ip_hash;
    server 192.168.30.5:80;
    server 192.168.30.6:8090;
}
Note: when using ip_hash, you cannot have weight and backup.
least_conn algorithm
The least_conn algorithm distributes requests according to the number of connections on each back-end server: the server with the fewest connections receives more of the new requests.
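A hedged sketch of a least_conn pool, with illustrative addresses; the least_conn directive makes nginx prefer the server that currently has the fewest active connections.

# Hypothetical sketch: new requests go to the server with the fewest active connections.
upstream least_conn_pool {
    least_conn;
    server 192.168.30.5:80;
    server 192.168.30.6:80;
}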
There are many scheduling algorithms besides the ones listed above, so we will not enumerate them one by one.
Http_proxy_module module
http_proxy_module can forward requests to another server. In a reverse proxy, a location block matches the specified URI, and requests that match that URI are handed to the defined upstream node pool via proxy_pass.
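As a hedged illustration of that flow, the sketch below reuses the blog upstream pool defined earlier in this article and forwards matching requests to it; the listen port and header settings are assumptions.

# Sketch: requests matched by the location are handed to the "blog" upstream pool defined earlier.
server {
    listen 80;
    server_name blog.syushin.org;

    location / {
        proxy_pass http://blog;                   # forward to the upstream node pool
        proxy_set_header Host $host;              # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;  # pass the real client IP to the back end
    }
}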
Http_proxy module parameters
Parameter descriptions (a combined sketch follows this list):
proxy_set_header: sets the HTTP request header fields passed to the back-end server node; for example, it can let the proxied back-end server node obtain the real IP address of the client.
client_body_buffer_size: specifies the buffer size for the client request body.
proxy_connect_timeout: the timeout for connecting to the back-end node server, i.e. the time allowed for initiating the handshake and waiting for a response.
proxy_send_timeout: the data return time of the back-end server; the back-end server must transmit all of the data within this time, otherwise nginx disconnects the connection.
proxy_read_timeout: the time nginx waits for a response from the back-end server after the connection is established, i.e. the time the request spends queued at the back end waiting to be processed.
proxy_buffer_size: sets the buffer size; by default it is equal to the size set by proxy_buffers.
proxy_buffers: sets the number and size of the buffers; the response nginx obtains from the back-end server is placed in these buffers.
proxy_busy_buffers_size: sets how much of proxy_buffers can be used while the system is busy; the officially recommended size is proxy_buffers * 2.
proxy_temp_file_write_size: specifies how much data is written to the temporary proxy cache files at a time.
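As a hedged sketch only, the block below shows where these parameters typically sit inside a proxied location; the numeric values are illustrative assumptions, not recommendations made by this article.

# Sketch: common http_proxy module parameters in a proxied location (values are illustrative).
location / {
    proxy_pass http://blog;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    client_body_buffer_size 128k;
    proxy_connect_timeout 30s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
    proxy_buffer_size 4k;
    proxy_buffers 8 4k;
    proxy_busy_buffers_size 8k;        # the article suggests proxy_buffers * 2
    proxy_temp_file_write_size 64k;
}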
Proxy_pass usage
Format: proxy_pass URL
Examples are as follows:
proxy_pass http://blog.syushin.com/;
proxy_pass http://192.168.30.7:8080/uri;
proxy_pass http://unix:/tmp/www.sock;
The URL can be a domain name, an IP address (with an optional port), or a UNIX socket file.
There are a few things to note about the configuration of proxy_pass:
Example 1
location /upload/ {
    proxy_pass http://192.168.30.7;
}
Example 2
location /upload/ {
    proxy_pass http://192.168.30.7/;   # note the extra trailing slash
}
Example 3
location /upload/ {
    proxy_pass http://192.168.30.7/blog/;
}
Example 4
location /upload/ {
    proxy_pass http://192.168.30.7/blog;
}
If server_name is blog.syushin.com, then when http://blog.syushin.com/upload/index.html is requested, the URLs actually requested from the back end by examples 1-4 above are:
Example 1: http://192.168.30.7/upload/index.html
Example 2: http://192.168.30.7/index.html
Example 3: http://192.168.30.7/blog/index.html
Example 4: http://192.168.30.7/blogindex.html
That is the whole of "Sample Analysis of Nginx Reverse Proxy and Load Balancing". Thank you for reading! I hope the content shared here helps you.