Address Translation Mode (LVS-NAT)


For more information on the basic configuration of LVS, please refer to the blog post: https://blog.51cto.com/14306186/2437030

Case environment:

The implementation goal is as follows:

Using NAT-mode clustering, the LVS load scheduler is the gateway through which all node servers reach the Internet, and its address 200.0.0.1 also serves as the VIP for the whole cluster. The round-robin (rr) scheduling algorithm is used. Web1 and web2 first set up their web services with different test pages, so that when the client accesses 200.0.0.1 on the LVS server we can confirm that requests actually reach both web servers. Once the client test succeeds, web1 and web2 mount the shared directory exported by the NFS server, so that clients are served the same web page files.
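Pieced together from the commands later in this post, the case environment looks roughly like this (the scheduler's inside address and the client's exact address are my assumptions; everything else comes from the configuration below):

LVS load scheduler: outside interface 200.0.0.1 (also the cluster VIP); inside interface on 192.168.1.0/24 (assumed to be 192.168.1.1), acting as the web nodes' default gateway
Web1: 192.168.1.10, runs httpd
Web2: 192.168.1.20, runs httpd
NFS server: 192.168.2.1, exports /opt/wwwroot
Client: on the 200.0.0.0/24 network, accesses http://200.0.0.1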

1. Preparation:

1. First make sure all hosts can reach each other over the network.

2. Prepare the system image and related packages needed to build the httpd service and install the related tools. For convenience, I build the httpd service directly from the packages in the system image.

3. Configure the firewall to allow the traffic (for convenience, I simply stopped the firewall here; see the sketch after this list).
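A minimal sketch of these preparation steps, assuming CentOS 7 with firewalld and SELinux enabled (the addresses pinged are the ones used later in this post; the scheduler's inside address 192.168.1.1 is my assumption):

ping -c 3 192.168.1.1        # from web1/web2: reach the scheduler's inside interface (assumed address)
ping -c 3 192.168.2.1        # from web1/web2: reach the NFS server
systemctl stop firewalld     # stop the firewall (lab convenience only)
systemctl disable firewalld
setenforce 0                 # put SELinux into permissive mode for the test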

2. Configure the load scheduler:

Enable route forwarding:

[root@localhost /]# vim /etc/sysctl.conf
...                                        # part omitted
net.ipv4.ip_forward = 1
[root@localhost /]# sysctl -p
net.ipv4.ip_forward = 1

Configure the load distribution policy:

[root@localhost /]# modprobe ip_vs              # load the ip_vs module
[root@localhost /]# cat /proc/net/ip_vs         # view version information; output like the following shows the module is loaded
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn

ipvsadm is the LVS cluster management tool used on the load scheduler. It calls the ip_vs module to add and remove server nodes and to view the running status of the cluster, and it has to be installed manually.

[root@localhost /]# yum -y install ipvsadm                         # install ipvsadm
[root@localhost /]# ipvsadm -v                                     # view version information
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)
[root@localhost /]# ipvsadm -C                                     # clear the existing policy
[root@localhost /]# ipvsadm -A -t 200.0.0.1:80 -s rr
[root@localhost /]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.10:80 -m -w 1
[root@localhost /]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.20:80 -m -w 1
[root@localhost /]# ipvsadm-save                                   # save the policy
-A -t localhost.localdomain:http -s rr
-a -t localhost.localdomain:http -r 192.168.1.10:http -m -w 1
-a -t localhost.localdomain:http -r 192.168.1.20:http -m -w 1
[root@localhost /]# systemctl enable ipvsadm.service               # start automatically on boot

3. Configure the node servers:
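In NAT mode the real servers send their replies back through the scheduler, so each web node's default gateway must point at the scheduler's inside interface. A minimal sketch; the interface name ens33 and the gateway address 192.168.1.1 are my assumptions, adjust them to your environment:

[root@localhost /]# vim /etc/sysconfig/network-scripts/ifcfg-ens33   # interface name is an assumption
GATEWAY=192.168.1.1                                                  # the LVS scheduler's inside address (assumed)
[root@localhost /]# systemctl restart network                        # apply the change
[root@localhost /]# ip route show default                            # confirm the default route points at the scheduler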

Web1 configuration:

[root@localhost /]# yum -y install httpd                         # install the httpd service
[root@localhost /]# echo test1.com > /var/www/html/index.html    # prepare the test page
[root@localhost /]# systemctl start httpd                        # start the service
[root@localhost /]# systemctl enable httpd                       # start automatically on boot

Web2 configuration:

[root@localhost /]# yum -y install httpd                         # install the httpd service
[root@localhost /]# echo test2.com > /var/www/html/index.html    # prepare the test page
[root@localhost /]# systemctl start httpd
[root@localhost /]# systemctl enable httpd

Of course, in a real production environment both nodes would serve the same content. Here, to make it easy to tell whether the experiment succeeds, I wrote two test pages with different contents.

Test whether LVS is working: from the client, open http://200.0.0.1 and refresh the page. With the rr algorithm, the content should alternate between test1.com and test2.com.
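The same check can be done from a command line; a sketch, assuming the client is a Linux host with curl installed (the prompt and exact alternation order are illustrative):

[root@client ~]# for i in 1 2 3 4; do curl -s http://200.0.0.1/; done
test1.com
test2.com
test1.com
test2.com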

4. NFS shared storage service:

Build an NFS share and have both web servers mount the same directory exported by the NFS server, so that they serve the same web files.

Install nfs-utils and rpcbind packages:

[root@localhost /]# yum -y install nfs-utils rpcbind     # install the packages
[root@localhost /]# systemctl enable nfs                 # start automatically on boot
[root@localhost /]# systemctl enable rpcbind

When I check with rpm -qa on CentOS 7, these two packages are already present, and I am not sure whether they come with the system installation, so I run the install command anyway just in case. If anyone knows for certain, please let me know. (Thank you.)
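For reference, this is the kind of check I mean (a sketch; the exact package versions in the output will differ):

[root@localhost /]# rpm -qa | egrep 'nfs-utils|rpcbind'    # both package names should appear in the output if they are installed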

Set up the shared directory:

[root@localhost /]# mkdir -p /opt/wwwroot                  # create the shared directory
[root@localhost /]# vim /etc/exports                       # edit the NFS main configuration file
/opt/wwwroot 192.168.2.0/24(rw,sync,no_root_squash)
# rw: allow read and write (ro would be read-only)
# sync: synchronous writes
# no_root_squash: keep root privileges when the client accesses the share as root
#   (the default, root_squash, maps root to the unprivileged nfsnobody user)
[root@localhost wwwroot]# echo nfs.test.com > index.html   # prepare the test file
[root@localhost /]# systemctl restart rpcbind              # restart the services so the configuration takes effect
[root@localhost /]# systemctl restart nfs
[root@localhost /]# showmount -e                           # view the directories shared by this machine
Export list for localhost.localdomain:
/opt/wwwroot 192.168.2.0/24
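Optionally, before mounting, each web host can confirm that it sees the export (a sketch; it assumes the web hosts can already reach 192.168.2.1, which the successful mount below implies):

[root@localhost /]# showmount -e 192.168.2.1               # run on web1/web2; should list /opt/wwwroot
Export list for 192.168.2.1:
/opt/wwwroot 192.168.2.0/24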

Mount the NFS shared directory on both web hosts and configure it to mount automatically at boot:

[root@localhost /]# mount 192.168.2.1:/opt/wwwroot /var/www/html/    # do this on both web hosts
[root@localhost /]# df -hT /var/www/html/                            # check whether the mount succeeded
Filesystem               Type  Size  Used  Avail Use% Mounted on
192.168.2.1:/opt/wwwroot nfs4  50G   4.0G  47G   8%   /var/www/html
[root@localhost /]# vim /etc/fstab                                   # add an entry so it mounts automatically on boot
192.168.2.1:/opt/wwwroot  /var/www/html  nfs  defaults,_netdev  0 0
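To be sure the fstab entry is correct without rebooting, you can re-mount it by hand (a quick sketch):

[root@localhost /]# umount /var/www/html                   # temporarily unmount the share
[root@localhost /]# mount -a                               # mount everything listed in /etc/fstab; an error here means the entry is wrong
[root@localhost /]# df -hT /var/www/html/                  # confirm the NFS share is mounted again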

Test verification:


We can see that the test file written on the NFS server is now returned, and no matter how many times the page is refreshed the content does not change: both real servers serve the same NFS-backed page, which means the LVS load balancing with shared storage is working.

When the LVS server is rebooted, the LVS rules are lost, so they need to be backed up. Note that the hostname at backup time and at restore time must be the same, and pay attention to the priority of the network interfaces; otherwise, after restoring you may find that the VIP (the cluster's virtual IP) has turned into one of the LVS server's other IP addresses.

[root@localhost /]# ipvsadm-save > /etc/sysconfig/ipvsadm.bak      # back up the rules first
[root@localhost /]# ipvsadm -ln                                    # view the rules (empty again after a restart)
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@localhost /]# ipvsadm-restore < /etc/sysconfig/ipvsadm.bak   # restore the policy
[root@localhost /]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  200.0.0.1:80 rr                        # note whether this is still the original VIP
  -> 192.168.1.10:80  Masq  1  0  0
  -> 192.168.1.20:80  Masq  1  0  0
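As far as I know, on CentOS 7 the ipvsadm.service that was enabled earlier restores rules from /etc/sysconfig/ipvsadm when it starts, so saving the rules to that file (not only to a .bak copy) should make them survive a reboot automatically. A sketch, assuming that unit behaviour and file path:

[root@localhost /]# ipvsadm-save -n > /etc/sysconfig/ipvsadm       # save in numeric form to the file ipvsadm.service reads (assumed path)
[root@localhost /]# systemctl restart ipvsadm.service              # the unit should reload the rules from that file
[root@localhost /]# ipvsadm -ln                                    # confirm the rules are back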
