For the underlying concepts of LVS load balancing cluster technology, please refer to the blog post "Detailed Explanation of LVS Load Balancing Clusters".
This post focuses on the configuration itself; the commands used below are explained in the post linked above. The environment and the intended result are as follows:
The cluster uses NAT mode: the LVS load scheduler is the gateway through which all nodes access the Internet, and its external address 200.0.0.1 also serves as the VIP for the entire cluster.
The round-robin (rr) scheduling algorithm is used.
Web1 and web2 first set up their web services with different page files, so that a client visiting 200.0.0.1 on the LVS server can confirm it is reaching both web servers in turn.
Once the client test succeeds, web1 and web2 mount the shared directory provided by the NFS server so that they serve identical web page files to clients.
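Because this is NAT mode, each web node must use the scheduler's internal address as its default gateway, or return traffic will bypass the LVS and clients will never see the responses. A minimal sketch follows, assuming the scheduler's internal interface is 192.168.1.1 and the node's interface is named ens33 (both are assumptions; adjust to your environment):
[root@web1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33   # ens33 is assumed; use your interface name
GATEWAY=192.168.1.1                           # assumed internal IP of the LVS scheduler
[root@web1 ~]# systemctl restart network      # reload the network configuration
[root@web1 ~]# ip route | grep default        # verify the default route points at the scheduler
default via 192.168.1.1 dev ens33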
1. Preparatory work:
Configure the network correctly so that hosts on the same network segment can communicate with each other.
Prepare the system image and the related software packages for building the httpd service and installing the required tools. For convenience, the packages from the system image are used directly to build the httpd service.
Allow the related traffic through the firewall; here I simply turn off the firewall and SELinux, as sketched below.
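A minimal sketch of disabling both on a CentOS 7 style system (an assumption; on other distributions the service names differ). In production you would open only the required ports instead:
[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld    # stop the firewall and keep it off after reboot
[root@localhost ~]# setenforce 0                                               # put SELinux in permissive mode for this boot
[root@localhost ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # keep it off after reboot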
2. Configure the load balancer scheduler:
1. Enable route forwarding rules:
[root@localhost ~]# vim /etc/sysctl.conf
 .................                  # omit some contents
net.ipv4.ip_forward = 1
[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
2. Configure the load distribution policy and export it as a backup (for the meaning of the configuration below, see the link at the beginning of this post):
[root@localhost ~]# modprobe ip_vs                          # load the ip_vs module
[root@localhost ~]# yum -y install ipvsadm                  # install the ipvsadm management tool
[root@localhost ~]# ipvsadm -C                              # clear any existing rules
[root@localhost ~]# ipvsadm -A -t 200.0.0.1:80 -s rr        # define the virtual service with round-robin scheduling
[root@localhost ~]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.2:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.3:80 -m -w 1
[root@localhost ~]# hostname LVS               # change the hostname to prevent the VIP from being saved as 127.0.0.1
[root@localhost ~]# bash
[root@LVS ~]# ipvsadm-save                     # display the policy to be saved
-A -t LVS:http -s rr
-a -t LVS:http -r 192.168.1.2:http -m -w 1
-a -t LVS:http -r 192.168.1.3:http -m -w 1
[root@LVS ~]# ipvsadm-save > /etc/sysconfig/ipvsadm.bak     # export the policy as a backup
[root@LVS ~]# cat /etc/sysconfig/ipvsadm.bak                # view the backup policy
-A -t LVS:http -s rr
-a -t LVS:http -r 192.168.1.2:http -m -w 1
-a -t LVS:http -r 192.168.1.3:http -m -w 1
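Before moving on, it is worth confirming that the policy looks right; the output below is what the configuration above should produce:
[root@LVS ~]# ipvsadm -ln                      # list the current policy in numeric form
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.1:80 rr
  -> 192.168.1.2:80               Masq    1      0          0
  -> 192.168.1.3:80               Masq    1      0          0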
3. Configure the web node servers:
The web1 configuration is as follows:
[root@web1 ~]# yum -y install httpd                             # install the httpd service
[root@web1 ~]# echo "web server 1" > /var/www/html/index.html   # prepare the web page file
[root@web1 ~]# systemctl start httpd                            # start the httpd service
[root@web1 ~]# systemctl enable httpd                           # set it to start on boot
The web2 configuration is as follows:
[root@web2 ~]# yum -y install httpd                             # install the httpd service
[root@web2 ~]# echo "web server 2" > /var/www/html/index.html   # prepare the web page file
[root@web2 ~]# systemctl start httpd                            # start the httpd service
[root@web2 ~]# systemctl enable httpd                           # set it to start on boot
4. Test whether the LVS load scheduler works properly (a client visiting 200.0.0.1 several times should get the two different pages in turn):
If you keep getting the same page even though the configuration is correct, open the address in several browser windows or refresh again after a short wait: LVS keeps established connections alive for a while, so the switch to the other real server can be delayed. A command-line check is sketched below.
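A quick way to watch the round-robin behavior from the client is to issue several fresh HTTP requests; curl opens a new connection each time, which avoids the lingering-connection effect mentioned above (a sketch, assuming the client has curl installed):
[root@client ~]# for i in 1 2 3 4; do curl -s http://200.0.0.1/; done   # each request opens a new connection
web server 1
web server 2
web server 1
web server 2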
Once the above works, you can set up the NFS server; the two web servers then mount the same directory shared by the NFS server so that they serve identical web page files:
5. Configure the NFS shared storage server:
[root@NFS /]# yum -y install nfs-utils rpcbind    # install the related packages
[root@NFS /]# systemctl enable nfs                # set to start on boot
[root@NFS /]# systemctl enable rpcbind            # set to start on boot
[root@NFS /]# mkdir -p /opt/wwwroot               # prepare the shared directory
[root@NFS /]# echo www.baidu.com > /opt/wwwroot/index.html    # create a new web page file
[root@NFS /]# vim /etc/exports                    # set up the shared directory (the file is empty by default)
/opt/wwwroot 192.168.2.0/24(rw,sync,no_root_squash)           # write this line
[root@NFS /]# systemctl restart rpcbind           # restart the related services; note the start order (rpcbind first)
[root@NFS /]# systemctl restart nfs
[root@NFS /]# showmount -e                        # view the local shared directories
Export list for NFS:
/opt/wwwroot 192.168.2.0
6. On the two web servers, query the NFS server's shared directory, mount it, and set it to mount automatically at boot:
① Configure the web2 server:
[root@web2 ~]# yum -y install rpcbind nfs-utils   # install the related packages needed to access and query NFS shares
[root@web2 ~]# systemctl enable rpcbind           # set to start on boot
[root@web2 ~]# systemctl start rpcbind            # start the service
[root@web2 ~]# showmount -e 192.168.2.1           # query the directories shared by the NFS host
Export list for 192.168.2.1:
/opt/wwwroot 192.168.2.0
[root@web2 ~]# mount 192.168.2.1:/opt/wwwroot /var/www/html/   # mount the shared directory
[root@web2 ~]# df -hT /var/www/html/              # check whether the mount succeeded
Filesystem                Type  Size  Used  Avail  Use%  Mounted on
192.168.2.1:/opt/wwwroot  nfs4  39G   4.3G  35G    12%   /var/www/html
[root@web2 ~]# vim /etc/fstab                     # set up automatic mounting at boot
192.168.2.1:/opt/wwwroot  /var/www/html  nfs  defaults,_netdev  0 0     # append this line
At this point the web2 server is done; apply the same configuration on the web1 server.
If a new access test still shows pages that are not the ones provided by the NFS shared storage, check whether SELinux is turned off on the web nodes. If it is enforcing, the httpd process will most likely be unable to read the index page file on the shared storage; a sketch of the check follows.
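A minimal sketch of checking and relaxing SELinux on a web node. Alternatively, if you prefer to keep SELinux enforcing, the httpd_use_nfs boolean covers exactly this case; verify the boolean name on your distribution:
[root@web1 ~]# getenforce                        # show the current SELinux mode
Enforcing
[root@web1 ~]# setenforce 0                      # switch to permissive mode for this boot
[root@web1 ~]# setsebool -P httpd_use_nfs on     # or: keep SELinux on and allow httpd to read NFS-mounted content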
At this point, no matter how many times you refresh, you will see the same web page, and the LVS load balancing effect is finally achieved.
The LVS rules are lost when the LVS server reboots, which is why the backup is needed. Note that the hostname at backup time must match the hostname at restore time, and pay attention to the priority of the network cards; otherwise, after the restore you may find that the VIP (the cluster's virtual IP) has become another IP address of the LVS server.
[root@localhost ~]# ipvsadm -ln                   # view the policy (empty again after the reboot)
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@localhost ~]# ipvsadm-restore < /etc/sysconfig/ipvsadm.bak    # restore the policy from the backup
[root@localhost ~]# ipvsadm -ln                   # check whether the policy was restored
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.1:80 rr                              # check whether this is still the original VIP
  -> 192.168.1.2:80               Masq    1      0          0
  -> 192.168.1.3:80               Masq    1      0          0
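To avoid restoring by hand after every reboot, the ipvsadm package on CentOS 7 ships an ipvsadm.service unit that loads rules from /etc/sysconfig/ipvsadm at boot (the unit name and file path are what I have seen on CentOS 7; verify them on your system):
[root@localhost ~]# ipvsadm-save > /etc/sysconfig/ipvsadm   # save the rules where the service expects them
[root@localhost ~]# systemctl enable ipvsadm                # restore the rules automatically at boot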
That completes the LVS/NAT load balancing cluster configuration.