This article explains in detail how to build a highly available Nginx cluster with Keepalived+Lvs+Nginx. It is meant as a practical reference; I hope you get something useful out of it.
Nginx is an excellent reverse proxy that supports request distribution, load balancing, caching, and other very useful features. For request processing, nginx uses the epoll event-driven model, which gives it very efficient request handling and lets a single machine sustain a very large number of concurrent connections. Requests received by nginx can be distributed to the application servers behind it according to a load balancing strategy, and those servers are generally deployed as a cluster, so when performance falls short, the application tier can absorb more traffic simply by adding machines. At that point, for very large websites, the performance bottleneck moves to nginx itself: the concurrency of a single nginx instance is limited, and nginx has no built-in cluster mode, so scaling nginx horizontally becomes especially important.
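To make the request-distribution role concrete, here is a minimal sketch of an nginx configuration that proxies requests to a cluster of application servers; the upstream name and backend addresses are hypothetical, chosen only for illustration:

events {}
http {
    upstream app_servers {
        server 10.0.0.11:8080;    # hypothetical application server 1
        server 10.0.0.12:8080;    # hypothetical application server 2
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;    # distributes across the upstream, round-robin by default
        }
    }
}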
Keepalived is a tool for server health checking and failover. In its configuration file you can define a master and a backup server, together with the health check request to run against the monitored service. While the service is running, keepalived periodically sends the configured request to the specified server; if the request returns status code 200, the server is considered healthy. If it is not healthy, keepalived takes that server offline and brings the backup server online in its place.
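As an illustration of the status-code check just described, here is a minimal sketch of a keepalived HTTP_GET health check; this is a fragment that would sit inside a virtual_server block, the path and timeout values are hypothetical, and the cluster built later in this article uses the simpler TCP_CHECK instead:

real_server 172.16.28.132 80 {
    HTTP_GET {
        url {
            path /                 # hypothetical health check path
            status_code 200        # consider the server healthy only on a 200 response
        }
        connect_timeout 3          # seconds to wait for the connection
        nb_get_retry 3             # retries before marking the server down
        delay_before_retry 3       # seconds between retries
    }
}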
Lvs is a tool for layer-4 load balancing. "Layer 4" here refers to the seven-layer OSI network model: common protocols such as HTTP sit at layer 7, while lvs operates at layer 4, the transport layer, below which lie the network, data link, and physical layers. The main transport layer protocols are TCP and UDP, which is to say lvs primarily load-balances TCP and UDP traffic. Precisely because lvs works at layer 4, its request handling capacity is far higher than that of ordinary application-level servers: nginx, for instance, processes requests at layer 7, and the load balancing capacity of lvs is commonly cited as more than ten times that of nginx.
From the introduction above, we can see that in very large websites the application servers can be scaled horizontally while nginx cannot, so nginx becomes the performance bottleneck. Lvs, however, is a load balancing tool: if we combine lvs and nginx, deploying several nginx servers and letting lvs spread requests evenly across them, each nginx server can in turn distribute its share to the application servers, and nginx has effectively been scaled horizontally. Because nginx is ultimately just another server process, it can also go down, so keepalived is added here to handle failure detection and service switchover. In other words, keepalived+lvs+nginx gives us a high-availability cluster mode for nginx.
One thing to notice in the introduction above: although keepalived+lvs+nginx gives nginx a cluster mode, each nginx instance still has its own ip and port (the default listening ports are 80 and 443), so how does lvs distribute requests to nginx servers with different ips and ports? Through a virtual ip: a single public ip is exposed to the outside, and when lvs receives a client request addressed to the virtual ip, it selects a target nginx server according to the configured scheduler and load balancing policy and forwards the request to that server. Lvs thus involves two concepts, the scheduler and the load balancing policy. The scheduler determines how lvs handles request and response traffic. There are three main scheduler modes (a hands-on ipvsadm sketch follows the list):
Virtual Server via Network Address Translation (VS/NAT): after the user sends a request to the virtual ip, lvs selects a target server according to the load balancing algorithm, rewrites the destination ip in the request packet to that server's ip, and forwards the packet. For the response, the scheduler rewrites the source address in the data returned by the target server back to the virtual ip. To the client, it therefore appears to be talking to a single server. The drawback is that all response traffic must pass back through the scheduler, so if the request volume is large, the scheduler becomes the bottleneck of the whole system.
Virtual Server via IP Tunneling (VS/TUN): this mode mainly solves VS/NAT's problem of response data passing through the scheduler. As in VS/NAT, the scheduler still receives the request and rewrites the destination ip in the packet to the ip of the target server, but after the target server processes the request it rewrites the source ip of the response to the virtual ip itself and sends the response directly to the client. The response no longer returns through the scheduler, which greatly improves system throughput; and since request packets are generally much smaller than response packets, the scheduler only has to handle requests, while the overall load is spread across the servers.
Virtual Server via Direct Routing (VS/DR): the main difference from VS/TUN is that VS/TUN rewrites the destination ip of the request packet to the target server's ip, whereas VS/DR rewrites the destination MAC address in the request frame to the target server's MAC address directly. This is more efficient, because in VS/TUN the ip address must ultimately be resolved to a MAC address anyway before the data can be sent.
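As promised above, here is a minimal sketch of what these concepts look like as manual ipvsadm commands, building a DR-mode virtual server by hand. Keepalived, used later in this article, generates equivalent rules from its configuration file; the addresses are the ones used in the rest of this article:

# add a virtual service on the virtual ip, using the round-robin (rr) scheduler
sudo ipvsadm -A -t 172.16.28.120:80 -s rr
# attach two real servers in direct routing mode (-g) with equal weight
sudo ipvsadm -a -t 172.16.28.120:80 -r 172.16.28.132:80 -g -w 1
sudo ipvsadm -a -t 172.16.28.120:80 -r 172.16.28.133:80 -g -w 1
# list the resulting rules
sudo ipvsadm -ln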
1. Environment preparation
VMware
4 CentOS 7 virtual hosts: 172.16.28.130, 172.16.28.131, 172.16.28.132, 172.16.28.133
System services: LVS, Keepalived
Web server: nginx
Cluster mode: LVS DR
2. Software installation
On the four virtual machines, we set up the cluster as follows:
172.16.28.130  lvs+keepalived
172.16.28.131  lvs+keepalived
172.16.28.132  nginx
172.16.28.133  nginx
Here, 172.16.28.130 and 172.16.28.131 serve as the lvs+keepalived working machines: they handle load balancing, failure detection, and taking failed servers offline. 172.16.28.132 and 172.16.28.133 are the application servers that actually serve requests. These four servers together form the back-end cluster, and the virtual ip exposed to the outside is 172.16.28.120. Note that keepalived here monitors the two lvs servers, one acting as master and the other as backup, and the two are configured identically as far as load balancing is concerned. Under normal circumstances, when a client requests the virtual ip, the request reaches the lvs master, which selects an application server according to the configured load balancing policy and forwards the request to it for processing. If at some point the lvs master goes down, keepalived detects the failure, takes it offline, and brings the backup machine online to provide service in its place, thus achieving failover.
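The topology just described can be sketched roughly as follows:

                     client
                       |
                       v
            virtual ip 172.16.28.120
                       |
  [172.16.28.130]  <-VRRP->  [172.16.28.131]
  lvs+keepalived             lvs+keepalived
     (master)                   (backup)
       |  DR forwarding
       +---> [172.16.28.132] nginx
       +---> [172.16.28.133] nginx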
2.1 lvs+keepalived installation
Install ipvsadm (the ipvs administration tool) and keepalived on 172.16.28.130 and 172.16.28.131:
# install ipvsadm
sudo yum install ipvsadm
# install keepalived
sudo yum install keepalived
Install nginx on 172.16.28.132 and 172.16.28.133:
# install nginx
sudo yum install nginx
Note that the firewall must be turned off on the two nginx servers; otherwise the two lvs+keepalived machines will not be able to send requests to them:
# stop the running firewall
sudo systemctl stop firewalld.service
# and keep it from starting at boot
sudo systemctl disable firewalld.service
Check whether the two load balancer machines support lvs:
sudo lsmod | grep ip_vs
# if you see output like the following, ip_vs is supported:
[zhangxufeng@localhost ~]$ sudo lsmod | grep ip_vs
ip_vs                 145497  0
nf_conntrack          137239  1 ip_vs
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
If the command above prints nothing, run sudo ipvsadm once to load the ip_vs kernel module, then check again. Once ipvs is available, we can edit the keepalived.conf file in the /etc/keepalived/ directory. We use the 172.16.28.130 machine as the master, and the master node is configured as follows:
# Global Configuration
global_defs {
    lvs_id director1               # id of this lvs director
}

# VRRP Configuration
vrrp_instance LVS {
    state MASTER                   # this node is the master
    interface ens33                # ens33 is the NIC name; check yours with ifconfig or ip addr
    virtual_router_id 51           # virtual router id; master and backup must use the same value
    priority 151                   # priority of this node; higher wins, master higher than backup
    advert_int 1                   # interval between VRRP advertisements, in seconds
    authentication {
        auth_type PASS             # authentication type
        auth_pass 123456           # authentication password
    }
    virtual_ipaddress {
        172.16.28.120              # the virtual ip
    }
}

# Virtual Server Configuration - for www server
# back-end real hosts
virtual_server 172.16.28.120 80 {
    delay_loop 1                   # health check interval
    lb_algo rr                     # load balancing algorithm; rr is round robin
    lb_kind DR                     # scheduler type; DR here
    persistence_timeout 1          # how long requests from one client stick to the same real host
    protocol TCP                   # protocol used to reach the back-end real hosts

    # Real Server 1 configuration: ip and port of real host 1
    real_server 172.16.28.132 80 {
        weight 1                   # weight of this host
        TCP_CHECK {
            connection_timeout 10  # timeout for the health check
            nb_get_retry 3         # number of retries after a timeout
            delay_before_retry 3   # delay before each retry
        }
    }

    # Real Server 2 configuration
    real_server 172.16.28.133 80 {
        weight 1                   # weight of this host
        TCP_CHECK {
            connection_timeout 10  # timeout for the health check
            nb_get_retry 3         # number of retries after a timeout
            delay_before_retry 3   # delay before each retry
        }
    }
}
The above is the keepalived configuration on the master node. The backup node's configuration is almost identical; only its state and priority parameters differ. The complete configuration of the backup node is as follows:
# Global Configuration
global_defs {
    lvs_id director2               # id of this lvs director
}

# VRRP Configuration
vrrp_instance LVS {
    state BACKUP                   # this node is the backup
    interface ens33                # ens33 is the NIC name; check yours with ifconfig or ip addr
    virtual_router_id 51           # virtual router id; master and backup must use the same value
    priority 150                   # priority of this node; lower than the master's
    advert_int 1                   # interval between VRRP advertisements, in seconds
    authentication {
        auth_type PASS             # authentication type
        auth_pass 123456           # authentication password
    }
    virtual_ipaddress {
        172.16.28.120              # the virtual ip
    }
}

# Virtual Server Configuration - for www server
# back-end real hosts
virtual_server 172.16.28.120 80 {
    delay_loop 1                   # health check interval
    lb_algo rr                     # load balancing algorithm; rr is round robin
    lb_kind DR                     # scheduler type; DR here
    persistence_timeout 1          # how long requests from one client stick to the same real host
    protocol TCP                   # protocol used to reach the back-end real hosts

    # Real Server 1 configuration: ip and port of real host 1
    real_server 172.16.28.132 80 {
        weight 1                   # weight of this host
        TCP_CHECK {
            connection_timeout 10  # timeout for the health check
            nb_get_retry 3         # number of retries after a timeout
            delay_before_retry 3   # delay before each retry
        }
    }

    # Real Server 2 configuration
    real_server 172.16.28.133 80 {
        weight 1                   # weight of this host
        TCP_CHECK {
            connection_timeout 10  # timeout for the health check
            nb_get_retry 3         # number of retries after a timeout
            delay_before_retry 3   # delay before each retry
        }
    }
}
Master and backup are configured identically in every other respect so that when the master goes down, the backup can take over seamlessly with exactly the same load balancing behavior.
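To summarize, these are the only lines that differ between the two keepalived.conf files:

# master (172.16.28.130)        # backup (172.16.28.131)
lvs_id director1                lvs_id director2
state MASTER                    state BACKUP
priority 151                    priority 150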
With the lvs+keepalived machines configured, let's configure nginx on the two application servers. Here nginx acts as the application server itself: in its configuration file we simply return status code 200 together with the ip of the current host, as follows:
# nginx configuration on 172.16.28.132
worker_processes auto;
# pid /run/nginx.pid;
events {
    worker_connections 786;
}
http {
    server {
        listen 80;
        # directly return a 200 status code and a piece of text
        location / {
            default_type text/html;
            return 200 "Hello, Nginx! Server zhangxufeng@172.16.28.132\n";
        }
    }
}

# nginx configuration on 172.16.28.133
worker_processes auto;
# pid /run/nginx.pid;
events {
    worker_connections 786;
}
http {
    server {
        listen 80;
        # directly return a 200 status code and a piece of text
        location / {
            default_type text/html;
            return 200 "Hello, Nginx! Server zhangxufeng@172.16.28.133\n";
        }
    }
}
As you can see, the two machines return different host ips in their response text. Once the nginx configuration is complete, start it with the following command:
sudo nginx
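Before wiring in lvs, it is worth a quick sanity check, from any host that can reach them, that each nginx answers directly; the expected output follows from the configurations above:

curl http://172.16.28.132
# expected: Hello, Nginx! Server zhangxufeng@172.16.28.132
curl http://172.16.28.133
# expected: Hello, Nginx! Server zhangxufeng@172.16.28.133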
After starting nginx, we need to configure the virtual ip on the real servers, because our lvs scheduler runs in DR mode. As mentioned earlier, in this mode the response is returned to the client directly by the real server, and the real server must set the source ip of the response packet to the virtual ip; the virtual ip configured here serves exactly that purpose. The arp_ignore and arp_announce settings in the script below additionally stop the real servers from answering or advertising ARP for the virtual ip, so that on the network only the lvs director responds to it. Edit the /etc/init.d/lvsrs file and write the following:
#!/bin/bash
ifconfig lo:0 172.16.28.120 netmask 255.255.255.255 broadcast 172.16.28.120 up
route add -host 172.16.28.120 dev lo:0
echo "0" > /proc/sys/net/ipv4/ip_forward
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
exit 0
lo:0: an alias on the current host's loopback interface, used here to hold the virtual ip
172.16.28.120: the virtual ip
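After saving the script, run it on both nginx servers; a minimal sketch, using the path from above:

sudo chmod +x /etc/init.d/lvsrs
sudo /etc/init.d/lvsrs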
With the script executed on both real servers, start the keepalived service on the two lvs+keepalived machines:
sudo service keepalived start
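To confirm that the master actually holds the virtual ip, you can check the ens33 interface on 172.16.28.130; a quick sanity check, assuming the interface name from the configuration above:

ip addr show ens33 | grep 172.16.28.120
# the virtual ip should be listed on the master and absent on the backup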
Finally, you can inspect the lvs rules that keepalived has configured with the following command:
[zhangxufeng@localhost keepalived]$ sudo ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.28.120:80 rr
  -> 172.16.28.132:80             Route   1      0          0
2.2 Cluster testing
With the steps above complete, we have a working lvs+keepalived+nginx cluster. In a browser, visit http://172.16.28.120 and you will see a response like the following:
Hello, Nginx! Server zhangxufeng@172.16.28.132
Refresh the browser a few times and you will see the displayed text alternate as follows, which is lvs's load balancing policy at work:
Hello, Nginx! Server zhangxufeng@172.16.28.133
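As a final check of the high-availability side, you can simulate a master failure and watch the backup take over; a simple sketch using the hosts above:

# on the master (172.16.28.130): stop keepalived to simulate a failure
sudo service keepalived stop
# on the backup (172.16.28.131): the virtual ip should move here within a few seconds
ip addr show ens33 | grep 172.16.28.120
# requests to http://172.16.28.120 should keep working through the backup director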
This is the end of the article on how to build an Nginx high availability cluster with Keepalived+Lvs+Nginx. I hope the content above was helpful; if you think the article is good, please share it so more people can see it.