Build: LVS+Keepalived highly available Web service cluster environment

This setup involves several technologies. For background on the related techniques, refer to the following articles:

Detailed explanation of CentOS 7 load balancing configuration based on DR (Direct Routing) mode

Detailed explanation of CentOS 7 load balancing configuration based on NAT (Address Translation) mode

Detailed explanation of LVS load balancing clusters

CentOS 7 keepalived dual hot-standby theory and configuration file explained

Combining the techniques from the posts above, we can build a highly available web cluster with keepalived in either DR or NAT mode. This post uses keepalived+DR to build a highly available web service cluster.

The configuration below follows a production-style layout and can be reproduced as follows:

I. Environmental analysis:

1. The two schedulers and the two web nodes use addresses on the same network segment and can reach the external network directly. For the security of the shared storage, the web nodes and the storage server are normally planned into an internal network, so each web node needs two or more network interfaces.

2. Resources here are limited, and for ease of configuration there are only two schedulers and two web nodes. That is enough when web traffic is light, but with heavy traffic at least three schedulers and three web nodes should be deployed; with only two web nodes under heavy load, once one goes down the remaining one is bound to be overwhelmed by the surge of requests.

3. Prepare the system image in order to install related services.

4. Configure the firewall policy and the IP addresses other than the VIP (here the firewall is simply turned off).

5. Keepalived loads the ip_vs module automatically, so there is no need to load it manually (a quick check is sketched below).
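As a quick check (once keepalived has been started in the steps below), you can confirm that the ip_vs module really was loaded automatically; a minimal sketch using standard commands:

# Run on a scheduler after keepalived has been started.
lsmod | grep ip_vs          # the ip_vs module (and the ip_vs_rr scheduler) should be listed
cat /proc/net/ip_vs         # the kernel's IPVS table; confirms the module is active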

II. The final effect:

1. The client visits the cluster's VIP repeatedly and always gets the same web page.

2. After the master scheduler goes down, the cluster's VIP automatically drifts to the slave (backup) scheduler, which then handles all scheduling. When the master scheduler comes back, the VIP automatically moves back to it, the master resumes its work, and the slave scheduler returns to the backup state.

3. When a web node goes down, the keepalived health check detects it and removes it from the web node pool; when the node comes back, it is automatically added to the pool again.

III. Start building:

1. Configure the master scheduler (LVS1):

[root@LVS1 ~]# yum -y install keepalived ipvsadm       # install the required tools
[root@LVS1 ~]# vim /etc/sysctl.conf                    # adjust kernel parameters; add the following three lines
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@LVS1 ~]# sysctl -p                               # apply the changes
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@LVS1 ~]# cd /etc/keepalived/
[root@LVS1 keepalived]# cp keepalived.conf keepalived.conf.bak   # back up the configuration file
[root@LVS1 keepalived]# vim keepalived.conf            # edit the keepalived configuration file
# Only the items commented below need to be changed; keep the defaults for everything else.
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc                  # recipient addresses (change only if needed)
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc   # sender name and address (no change needed)
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL1                    # name of this server; must be unique among all schedulers in the cluster
}

vrrp_instance VI_1 {
    state MASTER                           # this node is the master scheduler
    interface ens33                        # physical NIC that carries the VIP address
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        200.0.0.100                        # drift IP address (VIP); more than one may be listed
    }
}

virtual_server 200.0.0.100 80 {            # change to the VIP address and the required port
    delay_loop 6
    lb_algo rr                             # load-scheduling algorithm; rr means round robin
    lb_kind DR                             # operating mode: DR (Direct Routing)
    ! persistence_timeout 50               # comment this line out with a leading "!" for now so the test result is visible
    protocol TCP

    real_server 200.0.0.4 80 {             # configuration of one web node; copy one real_server { } block per node
        weight 1                           # and change only the node's IP address
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 200.0.0.3 80 {             # second web node; when copying blocks, paste them above this one
        weight 1                           # so that the closing braces are not lost
        TCP_CHECK {
            connect_port 80                # add this line to set the health-check port
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
# The many sample configuration items below this point (about 98 lines here) can all be deleted.
# If they are left in place, restarting the service may fail.
[root@LVS1 ~]# systemctl restart keepalived            # restart the service
[root@LVS1 ~]# systemctl enable keepalived             # enable start on boot

At this point, the master scheduler is configured.
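Before moving on, a quick sanity check can confirm that keepalived started cleanly and created the LVS virtual server; a minimal sketch using standard commands:

[root@LVS1 ~]# systemctl is-active keepalived              # should print "active"
[root@LVS1 ~]# ipvsadm -ln                                 # the 200.0.0.100:80 virtual server and both real servers should be listed
[root@LVS1 ~]# ip addr show dev ens33 | grep 200.0.0.100   # the VIP should be bound to ens33 on the master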

2. Configure the slave scheduler (LVS2):

[root@LVS2 ~]# yum -y install ipvsadm keepalived       # install the required tools
[root@LVS2 ~]# scp root@200.0.0.1:/etc/sysctl.conf /etc/   # copy the kernel parameter file from the master scheduler
root@200.0.0.1's password:                             # enter the master scheduler's root password
sysctl.conf                                            205.8KB/s   00:00
[root@LVS2 ~]# sysctl -p                               # apply the changes
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@LVS2 ~]# scp root@200.0.0.1:/etc/keepalived/keepalived.conf /etc/keepalived/
                                                       # copy the master scheduler's keepalived config; only a few changes are needed
root@200.0.0.1's password:                             # enter the master scheduler's root password
keepalived.conf                           1053         2.5MB/s   00:00
[root@LVS2 ~]# vim /etc/keepalived/keepalived.conf     # edit the copied configuration file
# If both servers use the ens33 NIC, only the following three items need to change (keep everything else at its defaults):
router_id LVS_DEVEL2      # router_id must be unique; change it to something different from the master's
state BACKUP              # change the state to BACKUP (note the case)
priority 90               # lower than the master's priority; must not conflict with other backup schedulers
[root@LVS2 ~]# systemctl restart keepalived            # start the service
[root@LVS2 ~]# systemctl enable keepalived             # enable start on boot

At this point, the slave scheduler is also configured. If you need to deploy multiple slave schedulers, follow the above slave (backup) scheduler configuration.

3. Configure the web1 node:

[root@Web1 ~]# cd /etc/sysconfig/network-scripts/
[root@Web1 network-scripts]# cp ifcfg-lo ifcfg-lo:0    # copy a configuration file for the loopback address
[root@Web1 network-scripts]# vim ifcfg-lo:0            # edit the loopback interface that will host the cluster's VIP
DEVICE=lo:0                  # change the interface name
IPADDR=200.0.0.100           # the cluster's VIP
NETMASK=255.255.255.255      # the subnet mask must be four 255s
ONBOOT=yes
# Keep only the four configuration items above and delete the rest.
[root@Web1 network-scripts]# ifup lo:0                 # bring up the loopback interface
[root@Web1 ~]# ifconfig lo:0                           # check that the VIP is configured correctly
lo:0: flags=73  mtu 65536
        inet 200.0.0.100  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@Web1 ~]# route add -host 200.0.0.100 dev lo:0    # add a local route for the VIP
[root@Web1 ~]# vim /etc/rc.local                       # make the route persistent across reboots by adding this line
/sbin/route add -host 200.0.0.100 dev lo:0
[root@Web1 ~]# vim /etc/sysctl.conf                    # adjust the /proc ARP parameters; add the following six lines
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@Web1 ~]# sysctl -p                               # apply the changes
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@Web1 ~]# yum -y install httpd                    # install the http service
[root@Web1 ~]# echo 111111111111 > /var/www/html/index.html   # prepare a test web page
[root@Web1 ~]# systemctl start httpd                   # start the http service
[root@Web1 ~]# systemctl enable httpd                  # enable start on boot

At this point, the first web node has been configured.
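Optionally, the node can be verified locally before continuing; a minimal sketch (assumes curl is installed on the node):

[root@Web1 ~]# ifconfig lo:0 | grep 200.0.0.100        # the VIP should be bound to lo:0
[root@Web1 ~]# sysctl net.ipv4.conf.lo.arp_ignore      # should print 1
[root@Web1 ~]# curl -s http://127.0.0.1/               # should return the test page (111111111111)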

4. Configure the web2 node:

[root@Web2 ~]# scp root@200.0.0.3:/etc/sysconfig/network-scripts/ifcfg-lo:0 /etc/sysconfig/network-scripts/
                                                       # copy the lo:0 profile from the web1 node
The authenticity of host '200.0.0.3 (200.0.0.3)' can't be established.
ECDSA key fingerprint is b8:ca:d6:89:a2:42:90:97:02:0a:54:c1:4c:1e:c2:77.
Are you sure you want to continue connecting (yes/no)? yes          # enter yes
Warning: Permanently added '200.0.0.3' (ECDSA) to the list of known hosts.
root@200.0.0.3's password:                             # enter the web1 node's root password
ifcfg-lo:0                                 100%   66     0.1KB/s   00:00
[root@Web2 ~]# ifup lo:0                               # bring up lo:0
[root@Web2 ~]# ifconfig lo:0                           # confirm that the VIP is correct
lo:0: flags=73  mtu 65536
        inet 200.0.0.100  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@Web2 ~]# scp root@200.0.0.3:/etc/sysctl.conf /etc/   # copy the kernel parameter file
root@200.0.0.3's password:                             # enter the web1 node's root password
sysctl.conf                                100%  659     0.6KB/s   00:00
[root@Web2 ~]# sysctl -p                               # apply the changes
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@Web2 ~]# route add -host 200.0.0.100 dev lo:0    # add a local route for the VIP
[root@Web2 ~]# vim /etc/rc.local                       # make the route persistent across reboots by adding this line
/sbin/route add -host 200.0.0.100 dev lo:0
[root@Web2 ~]# yum -y install httpd                    # install the http service
[root@Web2 ~]# echo 2222222222222 > /var/www/html/index.html   # prepare a different test web page
[root@Web2 ~]# systemctl start httpd                   # start the http service
[root@Web2 ~]# systemctl enable httpd                  # enable start on boot

Now that web2 is configured, use a client to test whether the cluster is working:

5. Client access test:

If you keep seeing the same page and configuration errors have been ruled out, open several pages or refresh again after a moment: a connection may be held open for a while, so the alternation between nodes can appear delayed.
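From a Linux client the round-robin effect can also be checked on the command line, which avoids browser keep-alive; a minimal sketch (assumes curl is available on the client):

# Each request opens a new connection, so the responses should alternate
# between the two test pages (111111111111 and 2222222222222).
for i in 1 2 3 4 5 6; do
    curl -s http://200.0.0.100/
done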

For this test, a different page file was prepared on each web node so that the load-balancing effect is visible. Now that the effect is confirmed, a shared storage server is needed: all web nodes will read their page files from it, so that clients always receive the same content.

The following builds a simple shared storage server. If you need a highly available storage server, follow my blog (warrent); how to build one will be covered in a future post.

6. Configure the NFS shared storage server:

[root@NFS /]# yum -y install nfs-utils rpcbind         # install the related packages
[root@NFS /]# systemctl enable nfs                     # enable start on boot
[root@NFS /]# systemctl enable rpcbind                 # enable start on boot
[root@NFS /]# mkdir -p /opt/wwwroot                    # prepare the shared directory
[root@NFS /]# echo www.baidu.com > /opt/wwwroot/index.html   # create a new web page file
[root@NFS /]# vim /etc/exports                         # define the shared directory (the file is empty by default)
/opt/wwwroot 192.168.1.0/24(rw,sync,no_root_squash)    # add this line
[root@NFS /]# systemctl restart rpcbind                # restart the related services; note the start order
[root@NFS /]# systemctl restart nfs
[root@NFS /]# showmount -e                             # view the local shared directory
Export list for NFS:
/opt/wwwroot 192.168.1.0/24
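Before mounting from the web nodes, it may be worth confirming the export on the NFS server itself; a minimal sketch using standard NFS utilities:

[root@NFS /]# exportfs -v                 # lists /opt/wwwroot with its options and the allowed network
[root@NFS /]# showmount -e localhost      # the export should also be visible through rpcbind
[root@NFS /]# rpcinfo -p | grep nfs       # confirms the NFS service is registered with rpcbind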

7. Mount the shared storage server on all web nodes:

1) Web1 node server configuration:

[root@Web1 ~]# showmount -e 192.168.1.5                # view the storage server's shared directory
Export list for 192.168.1.5:
/opt/wwwroot 192.168.1.0/24
[root@Web1 ~]# mount 192.168.1.5:/opt/wwwroot /var/www/html/   # mount it on the web document root
[root@Web1 ~]# df -hT /var/www/html/                   # confirm that the mount succeeded
Filesystem               Type  Size  Used Avail Use% Mounted on
192.168.1.5:/opt/wwwroot nfs4   39G  5.5G   33G  15% /var/www/html
[root@Web1 ~]# vim /etc/fstab                          # make the mount permanent by adding this line
192.168.1.5:/opt/wwwroot  /var/www/html  nfs  defaults,_netdev  0 0

2) Web2 node server configuration:

[root@Web2 ~]# showmount -e 192.168.1.5                # view the storage server's shared directory
Export list for 192.168.1.5:
/opt/wwwroot 192.168.1.0/24
[root@Web2 ~]# mount 192.168.1.5:/opt/wwwroot /var/www/html/   # mount it on the web document root
[root@Web2 ~]# df -hT /var/www/html/                   # confirm that the mount succeeded
Filesystem               Type  Size  Used Avail Use% Mounted on
192.168.1.5:/opt/wwwroot nfs4   39G  5.5G   33G  15% /var/www/html
[root@Web2 ~]# vim /etc/fstab                          # make the mount permanent by adding this line
192.168.1.5:/opt/wwwroot  /var/www/html  nfs  defaults,_netdev  0 0
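A quick way to confirm that both nodes now serve the shared page is to request each real server directly; a minimal sketch (run from any host that can reach the node IPs, with curl available):

# Both nodes should now return the page stored on the NFS server (www.baidu.com),
# not their old local test pages.
curl -s http://200.0.0.3/
curl -s http://200.0.0.4/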

8. Client access test again:

This time, no matter how often the client refreshes, it sees the same web page.

9. Change the keepalived configuration on both the master and slave schedulers to suit the production environment:

1) Master scheduler:

[root@LVS1 ~]# vim /etc/keepalived/keepalived.conf
......                                     # part of the file omitted
virtual_server 200.0.0.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50                 # delete the "!" that was added earlier so connection persistence takes effect
    protocol TCP
......                                     # part of the file omitted
}
[root@LVS1 ~]# systemctl restart keepalived    # restart the service so the change takes effect

2) Slave scheduler:

[root@LVS2 ~]# vim /etc/keepalived/keepalived.conf
......                                     # part of the file omitted
virtual_server 200.0.0.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50                 # delete the "!" that was added earlier so connection persistence takes effect
    protocol TCP
......                                     # part of the file omitted
}
[root@LVS2 ~]# systemctl restart keepalived    # restart the service so the change takes effect
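With persistence_timeout re-enabled, repeated requests from one client within the timeout window should land on the same web node. One way to observe this is the IPVS connection table; a minimal sketch:

[root@LVS1 ~]# ipvsadm -lnc        # list current connection entries; repeated requests from the same client
                                   # should show the same destination (real server) while the persistence
                                   # entry has not yet expired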

10. Some useful viewing commands:

1) To find out which scheduler currently holds the VIP, query the physical interface that carries the VIP address; the VIP will be listed on the active scheduler (it will not appear on the backup scheduler):

[root@LVS1 ~]# ip a show dev ens33                     # query the physical NIC ens33
2: ens33: ...... state UP group default qlen 1000
    link/ether 00:0c:29:77:2c:03 brd ff:ff:ff:ff:ff:ff
    inet 200.0.0.1/24 brd 200.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 200.0.0.100/32 scope global ens33             # the VIP address
       valid_lft forever preferred_lft forever
    inet6 fe80::95f8:eeb7:2ed2:d13c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

2) Query which web nodes are available:

[root@LVS1 ~]# ipvsadm -ln                             # query the web node pool and the VIP
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.100:80 rr
  -> 200.0.0.3:80                 Route   1      0          0
  -> 200.0.0.4:80                 Route   1      0          0

3) Simulate the downtime of the Web2 node and the master scheduler, then query the VIP and the web nodes again on the backup scheduler (one way to trigger these failures is sketched after the output below):

[root@LVS2 ~]# ip a show dev ens33                     # the VIP address has drifted to the backup scheduler
2: ens33: ...... state UP group default qlen 1000
    link/ether 00:0c:29:9a:09:98 brd ff:ff:ff:ff:ff:ff
    inet 200.0.0.2/24 brd 200.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 200.0.0.100/32 scope global ens33             # the VIP
       valid_lft forever preferred_lft forever
    inet6 fe80::3050:1a9b:5956:5297/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@LVS2 ~]# ipvsadm -ln                             # after the Web2 node goes down it no longer appears in the pool
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.100:80 rr
  -> 200.0.0.3:80                 Route   1      0          0
# When the master scheduler or the Web2 node comes back up, it is automatically added back to the cluster and runs normally.
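For reference, one way to simulate these failures is simply to stop the relevant services (a sketch; powering off the machines or disconnecting their network works just as well):

[root@Web2 ~]# systemctl stop httpd         # the health check fails and Web2 is removed from the pool
[root@LVS1 ~]# systemctl stop keepalived    # the master releases the VIP and LVS2 takes over
# To recover, start the services again and the node/scheduler rejoins automatically:
[root@Web2 ~]# systemctl start httpd
[root@LVS1 ~]# systemctl start keepalived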

4) View the log messages when the scheduler fails over:

[root@LVS2 ~]# tail -30 /var/log/messages
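On a busy system /var/log/messages can be noisy; filtering for VRRP entries shows the state transitions directly. A minimal sketch:

[root@LVS2 ~]# grep -i vrrp /var/log/messages | tail -20   # typically shows lines such as "Entering MASTER STATE" / "Entering BACKUP STATE"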

It's done.
