Set up an LVS+HA website service cluster


Keepalived can be combined with LVS in either DR or NAT mode to build a high-availability web cluster. This post uses a keepalived + DR environment to build a highly available web service cluster.

Related technical documents can be viewed at my home page: https://blog.51cto.com/14227204, https://blog.51cto.com/14227204/2438901

The environment is as follows:
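(The original environment diagram is not reproduced here, so the host roles and addresses below are collected from the configuration that follows.)

Primary scheduler (LVS1): 200.0.0.1
Backup scheduler (LVS2): 200.0.0.2
Cluster VIP: 200.0.0.100
Web node 1 (web1): 200.0.0.3
Web node 2 (web2): 200.0.0.4
NFS storage server: 192.168.1.5 (intranet segment 192.168.1.0/24)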

I. Environment analysis:

1. The two schedulers and the two web nodes use addresses in the same network segment and can communicate directly with the external network. For the security of the shared storage, the web nodes and the storage server are normally planned into an intranet segment, so each web node must have two or more network interfaces.

2. My resources here are limited, and for convenience there are only two schedulers and two web nodes. This is enough when the volume of web requests is small; if the volume is large, at least three schedulers and three web nodes should be configured. With only two web nodes under heavy traffic, once one of them goes down, the remaining node is bound to be overwhelmed by the surge of requests.

3. Prepare the system installation image so that the related services can be installed.

4. Configure the firewall policy and the IP addresses other than the VIP (here I simply turn the firewall off).

5. Keepalived loads the ip_vs module automatically, so there is no need to load it manually (a quick check is sketched right after this list).
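If you want to confirm this yourself, a minimal check (my addition, not part of the original steps) is to look for the module once keepalived is running:

[root@lvs1 /]# lsmod | grep ip_vs        # modules such as ip_vs and ip_vs_rr should be listed after keepalived starts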

II. Start building:

Configure the primary scheduler:

[root@lvs1 /]# yum -y install ipvsadm keepalived        # install keepalived and the ipvsadm management tool
[root@lvs1 /]# vim /etc/sysctl.conf                     # adjust kernel parameters to turn off ICMP redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@lvs1 /]# sysctl -p                                # refresh to make the configuration take effect
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@lvs1 /]# cd /etc/keepalived/
[root@lvs1 keepalived]# cp keepalived.conf keepalived.conf.bak   # back up the main keepalived file in case mistakes are made while editing
[root@lvs1 /]# vim /etc/keepalived/keepalived.conf      # edit the main configuration file
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc                               # recipient addresses for failure notification mail; fill in as needed
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc   # sender address (can be left unchanged)
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS1                                       # name of this server; change as needed, must be unique among all schedulers in the cluster
}

vrrp_instance VI_1 {
    state MASTER                                        # this is the primary scheduler
    interface ens33                                     # physical NIC that carries the VIP address
    virtual_router_id 51
    priority 100                                        # priority of the primary scheduler; change as needed
    advert_int 1
    authentication {                                    # master/backup hot-standby authentication information; set according to the actual situation
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {                                 # specify the cluster VIP address
        200.0.0.100
    }
}

virtual_server 200.0.0.100 80 {                         # virtual server address (VIP) and port
    delay_loop 15                                       # health-check interval
    lb_algo rr                                          # round-robin scheduling algorithm
    lb_kind DR                                          # working mode; DR here, can be changed to NAT
!   persistence_timeout 50                              # connection persistence; commented out with "!" so the scheduling effect can be seen in the later test
    protocol TCP

    real_server 200.0.0.3 80 {                          # web node address and port
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 200.0.0.4 80 {                          # the other web node address and port
        weight 1
        TCP_CHECK {
            connect_port 80                             # port used for the health check
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@lvs1 /]# systemctl restart keepalived
[root@lvs1 /]# systemctl enable keepalived

With that, the configuration of the primary scheduler is complete.
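Before moving on, a quick sanity check I find useful here (not shown in the original transcript) is to confirm that keepalived is active and has bound the VIP on ens33:

[root@lvs1 /]# systemctl is-active keepalived                    # should print "active"
[root@lvs1 /]# ip -4 addr show dev ens33 | grep 200.0.0.100      # the VIP should appear on the MASTER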

Configure the backup (slave) scheduler:

[root@localhost /]# yum -y install keepalived ipvsadm
[root@localhost /]# scp root@200.0.0.1:/etc/sysctl.conf /etc/    # the kernel-parameter configuration can simply be copied from the primary scheduler with scp
root@200.0.0.1's password:
sysctl.conf                                                 0.6KB/s   00:00
[root@localhost /]# sysctl -p                                    # refresh to make the configuration take effect
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@localhost /]# vim /etc/keepalived/keepalived.conf
..............................
   router_id LVS2                      # the router_id must be different from the primary scheduler
vrrp_instance VI_1 {
    state BACKUP                       # change the state to BACKUP (preferably uppercase)
    interface ens33                    # NIC; no change needed if it is the same as on the primary scheduler
    virtual_router_id 51
    priority 90                        # the priority must be lower than that of the primary scheduler
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {                # the other settings are the same as on the primary scheduler
        200.0.0.100
    }
}
[root@localhost /]# systemctl enable keepalived
[root@localhost /]# systemctl restart keepalived                 # restart the service to make the configuration take effect

If you need to deploy more backup schedulers, configure them in the same way as the backup scheduler above.

Web1 node configuration:

[root@web1 /]# cd /etc/sysconfig/network-scripts/
[root@web1 network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@web1 network-scripts]# vim ifcfg-lo:0
DEVICE=lo:0
IPADDR=200.0.0.100                     # VIP address
NETMASK=255.255.255.255                # the netmask must be all 1s
ONBOOT=yes
[root@web1 network-scripts]# ifup lo:0                       # bring up the virtual interface
[root@web1 network-scripts]# ifconfig lo:0                   # check whether the configuration took effect
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 200.0.0.100  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@web1 /]# route add -host 200.0.0.100 dev lo:0          # add a local route for the VIP
[root@web1 /]# vim /etc/rc.local                             # add this routing record so it is restored automatically at boot
..............................
/sbin/route add -host 200.0.0.100 dev lo:0
[root@web1 /]# vim /etc/sysctl.conf                          # adjust /proc parameters to suppress ARP responses for the VIP
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@web1 /]# sysctl -p                                     # refresh to make the configuration take effect
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@web1 /]# yum -y install httpd
[root@web1 /]# echo test1.com > /var/www/html/index.html
[root@web1 /]# systemctl start httpd
[root@web1 /]# systemctl enable httpd

The configuration of the web2 node is the same as that of web1, so I omit it here; the only difference is that I write test2.com into the web2 test file so the verification results are easy to tell apart (a condensed sketch follows).
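For completeness, a condensed sketch of the equivalent web2 commands, assuming web2 is allowed to copy the finished files from web1 (200.0.0.3) over scp; adapt as needed:

[root@web2 /]# scp root@200.0.0.3:/etc/sysconfig/network-scripts/ifcfg-lo:0 /etc/sysconfig/network-scripts/
[root@web2 /]# ifup lo:0
[root@web2 /]# route add -host 200.0.0.100 dev lo:0              # also add this line to /etc/rc.local
[root@web2 /]# scp root@200.0.0.3:/etc/sysctl.conf /etc/ && sysctl -p
[root@web2 /]# yum -y install httpd
[root@web2 /]# echo test2.com > /var/www/html/index.html         # a different test page so the nodes can be told apart
[root@web2 /]# systemctl start httpd && systemctl enable httpd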

When testing, if you keep getting the same page even though configuration errors have been ruled out, open the site in several browser windows or refresh again after a while: connection persistence can keep a client on the same node for a time, so the switch between nodes may appear delayed.
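A browser-free way to test this (my sketch, assuming the client can reach the VIP) is to poll 200.0.0.100 with curl; with persistence_timeout commented out, test1.com and test2.com should alternate:

[client ~]$ for i in $(seq 1 6); do curl -s http://200.0.0.100/; done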

III. Build the NFS shared storage service:

[root@nfs /]# mkdir /opt/wwwroot
[root@nfs /]# vim /etc/exports                       # write the export configuration
/opt/wwwroot 192.168.1.0/24(rw,sync,no_root_squash)
[root@nfs /]# systemctl restart nfs                  # restart the service to make the configuration take effect
[root@nfs /]# systemctl restart rpcbind
[root@nfs /]# showmount -e                           # view the directories exported locally
Export list for nfs:
/opt/wwwroot 192.168.1.0/24
[root@nfs /]# echo nfs.test.com > /opt/wwwroot/index.html

All nodes mount the shared storage directory:

[root@web1 /]# showmount -e 192.168.1.5              # view all directories exported by the storage server
Export list for 192.168.1.5:
/opt/wwwroot 192.168.1.0/24
[root@web1 /]# mount 192.168.1.5:/opt/wwwroot/ /var/www/html/    # mount the share locally
[root@web1 /]# vim /etc/fstab                        # set it to mount automatically at boot
..............................
192.168.1.5:/opt/wwwroot  /var/www/html  nfs  defaults,_netdev  0 0

Both web1 and web2 need to mount the share.
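A quick way to confirm the mount on each node (my addition) is to check the mount point and fetch the local page, which should now return the NFS test file:

[root@web1 /]# df -h /var/www/html                   # should show 192.168.1.5:/opt/wwwroot as the source
[root@web1 /]# curl -s http://127.0.0.1/             # should now return nfs.test.com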

IV. Verify the cluster:

1) On the scheduler that currently holds the VIP, query the physical interface that carries the VIP address; the VIP can be seen there (it cannot be found on the backup scheduler):

[root@LVS1 ~]# ip a show dev ens33                   # query the physical NIC ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> state UP group default qlen 1000
    link/ether 00:0c:29:77:2c:03 brd ff:ff:ff:ff:ff:ff
    inet 200.0.0.1/24 brd 200.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 200.0.0.100/32 scope global ens33           # the VIP address
       valid_lft forever preferred_lft forever
    inet6 fe80::95f8:eeb7:2ed2:d13c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

2) Query which web nodes are available:

[root@LVS1 ~]# ipvsadm -ln                           # query the web node pool and the VIP
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.100:80 rr
  -> 200.0.0.3:80                 Route   1      0          0
  -> 200.0.0.4:80                 Route   1      0          0

3) Simulate the failure of the web2 node and the primary scheduler, then query the VIP and web nodes again on the backup scheduler:

[root@LVS2 ~]# ip a show dev ens33                   # the VIP address has been transferred to the backup scheduler
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> state UP group default qlen 1000
    link/ether 00:0c:29:9a:09:98 brd ff:ff:ff:ff:ff:ff
    inet 200.0.0.2/24 brd 200.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 200.0.0.100/32 scope global ens33           # the VIP
       valid_lft forever preferred_lft forever
    inet6 fe80::3050:1a9b:5956:5297/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@LVS2 ~]# ipvsadm -ln                           # after the web2 node goes down it is no longer listed
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.100:80 rr
  -> 200.0.0.3:80                 Route   1      0          0
# once the primary scheduler or the web2 node recovers, it is automatically added back to the cluster and runs normally
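The transcript above does not show how the failures were triggered; one simple way to simulate them, assuming shell access to the machines, is:

[root@web2 /]# systemctl stop httpd                  # web2 now fails its TCP health check and is removed from the pool
[root@LVS1 ~]# systemctl stop keepalived             # the primary scheduler stops advertising, so the VIP fails over to LVS2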

4) View the log messages when the scheduler fails over:

[root@LVS2 ~]# tail -30 /var/log/messages
