2025-04-04 Update From: SLTechnology News & Howtos, Shulou (Shulou.com), 05/31 report
This article explains how to configure Nginx as a forward and reverse proxy. The methods introduced here are simple, fast, and practical; let's walk through them step by step.
1. Forward proxy
A forward proxy acts on behalf of the client: the client sends its requests to the proxy server, which accesses the target server for it.
1.1 Hands-on example 1
Goal:
Enter www.google.com in the browser; the request passes through the Nginx forward proxy and reaches www.google.com.
Specific configuration:
server {
    resolver 8.8.8.8;    # DNS resolver used to look up the target host
    listen 80;
    location / {
        proxy_pass http://$http_host$request_uri;
    }
}
Then, on each client that needs to reach the public network, do one of the following:

1. Method 1 (temporary, recommended):
    export http_proxy=http://<your-forward-proxy-address>:<proxy-port>
2. Method 2 (persistent):
    vim ~/.bashrc
    # add the line below, then run: source ~/.bashrc
    export http_proxy=http://<your-forward-proxy-address>:<proxy-port>
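Once the variable is exported, command-line HTTP clients pick up the proxy automatically. A minimal sketch (192.168.4.32:80 stands in for your own forward-proxy address):

```shell
# Point command-line HTTP clients at the forward proxy
# (192.168.4.32:80 is a placeholder for your own proxy address).
export http_proxy=http://192.168.4.32:80
# Tools such as curl and wget honor http_proxy automatically.
echo "HTTP requests will be routed via: $http_proxy"
```

Running curl -v against any external URL afterwards would show the request going to the proxy rather than directly to the target host.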
2. Reverse proxy
A reverse proxy acts on behalf of the server: an intermediary server receives client requests and forwards them to back-end servers, which stay hidden from the client.
2.1 Hands-on example 1
Goal:
Enter www.abc.com in the browser; the Nginx server forwards the request to the Tomcat home page on the Linux system.
Specific configuration:
server {
    listen 80;
    server_name 192.168.4.32;             # listening address
    location / {
        root html;                        # html directory
        proxy_pass http://127.0.0.1:8080; # forward the request to Tomcat
        index index.html index.htm;       # default pages
    }
}
2.2 Hands-on example 2
Goal:
Jump to services on different ports depending on the path entered in the browser.
Specific configuration:
server {
    listen 9000;
    server_name 192.168.4.32;    # listening address
    location ~ /example1/ {
        proxy_pass http://127.0.0.1:5000;
    }
    location ~ /example2/ {
        proxy_pass http://127.0.0.1:8080;
    }
}
Notes on the location directive:
~  : the uri is matched as a regular expression, case-sensitively.
~* : the uri is matched as a regular expression, case-insensitively.
=  : the uri is matched literally, with no regular expression; an exact match is required.
3. Load balancing
3.1 Hands-on example 1
Goal:
Enter http://192.168.4.32/example/a.html in the browser address bar; requests are distributed evenly between ports 5000 and 8080, achieving load balancing.
Specific configuration:
upstream myserver {
    server 192.168.4.32:5000;
    server 192.168.4.32:8080;
}
server {
    listen 80;                      # listening port
    server_name 192.168.4.32;       # listening address
    location / {
        root html;                  # html directory
        index index.html index.htm; # default pages
        proxy_pass http://myserver; # forward to the server list defined by myserver
    }
}
Nginx server-allocation policies

Round-robin (default)
Requests are distributed to the back-end servers one by one in order of arrival; if a server goes down, it is removed from rotation automatically.

Weight
The higher a server's weight, the more requests it receives; the default weight is 1. For example:
upstream myserver {
    server 192.168.4.32:5000 weight=10;
    server 192.168.4.32:8080 weight=5;
}

ip_hash
Requests are distributed according to a hash of the client's ip address, so each client consistently reaches the same back-end server. For example:
upstream myserver {
    ip_hash;
    server 192.168.4.32:5000;
    server 192.168.4.32:8080;
}

fair (third-party module)
Requests are distributed according to the response time of the back-end servers; servers with shorter response times are preferred. For example:
upstream myserver {
    fair;
    server 192.168.4.32:5000;
    server 192.168.4.32:8080;
}
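To make the default round-robin policy concrete, here is a small shell sketch that simulates how four consecutive requests are spread across two back ends (illustration only; Nginx performs this selection internally, and the addresses are placeholders):

```shell
#!/bin/bash
# Simulate round-robin dispatch over two back-end addresses.
backends=("192.168.4.32:5000" "192.168.4.32:8080")
for i in 0 1 2 3; do
    # Each request goes to the next server in the list, wrapping around.
    echo "request $i -> ${backends[$((i % 2))]}"
done
```

With equal weights, every server receives the same share of traffic; the weight policy simply repeats a server in this rotation in proportion to its weight.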
4. Nginx cache
4.1 Hands-on example 1
Goal:
Within 3 days, accessing http://192.168.4.32/a.jpg through the browser's address bar does not fetch the resource from the origin server; after 3 days (expiration) it is downloaded from the server again.
Specific configuration:
# add a cache-zone configuration in the http block
proxy_cache_path /tmp/nginx_proxy_cache levels=1 keys_zone=cache_one:512m inactive=60s max_size=1000m;
# add a cache configuration in the server block
location ~ \.(gif|jpg|png|htm|html|css|js)(.*) {
    proxy_pass http://192.168.4.32:5000;   # forward the request on a cache miss
    proxy_redirect off;
    proxy_cache cache_one;
    # set different caching times for different HTTP status codes
    proxy_cache_valid 200 1h;
    proxy_cache_valid 500 1d;
    proxy_cache_valid any 1m;
    expires 3d;
}
expires sets an expiration time for a resource. Before the expiration time, the browser serves the content from its own cache, reducing requests and traffic to the server. In other words, the browser does not need to revalidate with the server; it only checks locally whether the resource has expired, so no extra traffic is generated. This approach is well suited to resources that do not change frequently.
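As an illustration of what expires 3d produces, the sketch below computes an Expires header three days ahead of a fixed base time (epoch 0 is used only so the output is reproducible; GNU date is assumed):

```shell
# Compute an HTTP Expires header 3 days after a fixed base time.
base=0                           # epoch seconds; fixed for reproducibility
three_days=$((3 * 24 * 3600))    # 259200 seconds
date -u -d "@$((base + three_days))" +"Expires: %a, %d %b %Y %T GMT"
# prints: Expires: Sun, 04 Jan 1970 00:00:00 GMT
```

In a real response, the base time is the moment Nginx serves the resource, so the header always points 3 days into the future.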
5. Dynamic and static separation
5.1 Hands-on example 1
Goal:
Accessing www.abc.com/a.html through the browser address bar returns static content from the static-resource server; accessing www.abc.com/a.jsp returns dynamic content from the dynamic-resource server.
Specific configuration:
upstream static {
    server 192.168.4.31;
}
upstream dynamic {
    server 192.168.4.32:8080;
}
server {
    listen 80;                  # listening port
    server_name www.abc.com;    # listening address
    # intercept dynamic resources
    location ~ .*\.(php|jsp)$ {
        proxy_pass http://dynamic;
    }
    # intercept static resources
    location ~ .*\.(jpg|png|htm|html|css|js)$ {
        root /data/;               # html directory
        proxy_pass http://static;
        autoindex on;              # enable the automatic file listing
    }
}
6. High availability
Normally, clients access the back-end target service cluster through the Nginx main server. When the main server goes down, traffic automatically switches to the backup server, which then acts as the main server and forwards requests to the back-end target servers.
6.1 Hands-on example 1
Goal:
Prepare two Nginx servers. Access the virtual ip address through the browser address bar, then stop Nginx on the main server; accessing the virtual ip address still works.
Specific configuration:
(1) install keepalived on two nginx servers.
Keepalived acts like a router: it uses a script to detect whether the current server is still alive. If it is, traffic continues to flow to it; otherwise, traffic switches to the backup server.
# install keepalived
yum install keepalived -y
# check the version
rpm -qa keepalived
keepalived-1.3.5-16.el7.x86_64
(2) Modify the /etc/keepalived/keepalived.conf configuration file on both the master and backup servers (it can simply be replaced) to complete the high-availability master/backup configuration.
Keepalived binds the nginx server to a virtual ip, and the nginx high availability cluster exposes the virtual ip externally. Clients access the nginx server by accessing the virtual ip.
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.4.32
    smtp_connect_timeout 30
    router_id LVS_DEVEL      # configured in /etc/hosts; the host can be reached through it
}
vrrp_script chk_http_port {
    script "/usr/local/src/nginx_check.sh"
    interval 2               # interval (seconds) between detection-script executions
    weight 2                 # priority adjustment (+2)
}
vrrp_instance VI_1 {
    state MASTER             # change MASTER to BACKUP on the backup server
    interface ens7f0         # network card; modify to match your machine
    virtual_router_id 51     # must be identical on master and backup
    priority 100             # master and backup take different priorities: higher on the master, lower on the backup
    advert_int 1             # heartbeat interval (default 1s) to check the server is still alive
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100        # VRRP virtual address; multiple addresses can be bound
    }
}
Field description
router_id: configured in the /etc/hosts file; the host can be reached through it.
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1   LVS_DEVEL
interval: the interval between detection-script executions.
weight: the value by which the priority is adjusted (can be negative) when the script fails, that is, when keepalived or nginx fails.
interface: run the ifconfig command to see the name of the current network card.
ens7f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.32  netmask 255.255.252.0  broadcast 192.168.7.255
        inet6 fe80::e273:9c3c:e675:7c60  prefixlen 64  scopeid 0x20<link>
        ...
(3) add the detection script nginx_check.sh under the / usr/local/src directory.
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ]; then
    /usr/local/nginx/sbin/nginx    # try to restart nginx
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
        killall keepalived         # nginx could not be restarted; let the backup server take over
    fi
fi
(4) start nginx and keepalived of the two servers.
# start nginx
./nginx
# start keepalived
systemctl start keepalived.service
(5) Check the virtual ip address with ip a. Stop nginx and keepalived on the main server 192.168.4.32, then visit the virtual ip again to see the high-availability effect.
7. Principle analysis
After Nginx starts, there are two kinds of processes on the Linux system: a master and workers. The master acts as an administrator: it does none of the actual work and is only responsible for assigning tasks to the workers (there are usually several).
ps -ef | grep nginx
root     20473     1  0  2019 ?  00:00:00 nginx: master process /usr/sbin/nginx
nginx     4628 20473  0 Jan06 ?  00:00:00 nginx: worker process
nginx     4629 20473  0 Jan06 ?  00:00:00 nginx: worker process
How do the workers work?
When a client request arrives, the workers compete for it; the master itself does not handle requests. The worker that wins the connection processes the request, which may mean forwarding it to Tomcat as a reverse proxy, accessing a database, and so on (nginx itself does not directly support Java).
What are the benefits of one master and multiple workers?
You can use nginx -s reload for hot deployment: old workers finish their current requests while new workers start with the new configuration.
Each worker is an independent process. If one worker has a problem, the others keep running independently and continue to compete for tasks, so client requests are still served without interruption.
How many workers should be set?
Like redis, Nginx uses an io-multiplexing mechanism. Each worker is an independent process with a single main thread, handling requests asynchronously and without blocking, so each worker can drive one cpu to its full capacity. Therefore, setting the number of workers equal to the number of cpus on the server is most appropriate.
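In practice you can let Nginx choose this automatically with worker_processes auto; in nginx.conf. On Linux, the cpu count that setting resolves to can be checked from the shell:

```shell
# Number of available CPUs -- the value "worker_processes auto;" resolves to.
nproc
```

Pinning one worker per cpu avoids the context-switching cost of running more workers than cores.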
Think about:
(1) How many connections does a worker use to handle one request?
(2) With one master and four workers, where each worker supports a maximum of 1024 connections, what is the maximum concurrency the system supports?
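For reference, the commonly cited answers: serving static content occupies 2 of a worker's connections per request (one from the client, one back to it), while reverse-proxying occupies 4 (an extra pair to the back end). So with 4 workers at 1024 connections each:

```shell
workers=4
conn_per_worker=1024
# Static content: each request uses 2 connections.
echo "max concurrency (static):  $((workers * conn_per_worker / 2))"
# Reverse proxy: each request uses 4 connections.
echo "max concurrency (proxied): $((workers * conn_per_worker / 4))"
```

That gives 2048 concurrent static requests or 1024 concurrent proxied requests as rough upper bounds.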
At this point, you should have a deeper understanding of how to configure Nginx forward and reverse proxies. Try it out in practice, and keep learning with us!