Shulou (shulou.com) — SLTechnology News & Howtos, Servers. Updated 2025-02-23.
This article gives a detailed walkthrough of the experimental configuration process and steps for NAT mode, one of the working modes of LVS, in the hope that it helps you in practical applications. Load balancing covers a lot of ground, and plenty of theory is already written up online, so today we draw on accumulated hands-on industry experience instead.
I. Preface
The previous article described the relevant theoretical knowledge of LVS load balancing. Today, we will mainly experiment with the configuration of NAT mode, which is one of the working modes of LVS.
II. Review and brief introduction of NAT Model Theory
First of all, we need to be clear about the defining characteristics of the NAT model.
They can be summarized as follows: the NAT mode of LVS load balancing (where the director acts as the gateway) is based on network address translation. The load balancer receives concurrent requests and distributes them across identically configured real servers according to a scheduling algorithm, giving good availability and security.
Its biggest disadvantage is that both the inbound and outbound traffic pass through the load balancer (the NAT director). As a result, extremely high volumes of concurrent requests cannot be supported, and since responses also flow back through the director, the bottleneck is made worse. That is why follow-up improvements to this mode exist.
III. Case environment
First, we need four servers: a load-balancing scheduler, two web servers (two Apache httpd servers are used here), and a storage server (NFS). A Windows machine is used as the external-network client for the simulation.
The setup is as follows: 4 CentOS 7 machines and 1 Windows 10 machine.
The IP address assignment for the network segments is shown in the following table:

Device                             IP address
Win10 client                       10.0.0.10/24
Load scheduler (external NIC)      10.0.0.1/24
Load scheduler (internal NIC)      192.168.10.1/24
HTTP server 1                      192.168.10.10/24
HTTP server 2                      192.168.10.20/24
NFS storage server                 192.168.10.100/24
So when an external client visits the website, it actually reaches the address of the load balancer's external NIC, while the real servers stay invisible to the client. The internal and external networks must therefore intercommunicate, which NAT provides: the service request is forwarded to a real server, the requested resource is fetched, returned to the load balancer, translated back by NAT, and finally delivered to the client. In a production environment the back-end storage usually holds multiple consistent replicas, but in order to verify the round-robin behaviour of the scheduling algorithm, we deliberately write different content on the two websites so that they can be told apart.
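The round-robin dispatch just described can be sketched in a few lines of shell. This is purely an illustration of the rr algorithm (the `rr_pick` helper is hypothetical; the real scheduling happens inside the kernel's IPVS module):

```shell
# Simulate the round-robin (rr) choice between the two real servers.
# rr_pick is a hypothetical helper for illustration only.
rr_pick() {
    # $1 is a zero-based request counter; requests alternate between servers
    if [ $(( $1 % 2 )) -eq 0 ]; then
        echo 192.168.10.10
    else
        echo 192.168.10.20
    fi
}

i=0
while [ "$i" -lt 4 ]; do
    echo "request $i -> $(rr_pick "$i")"
    i=$((i + 1))
done
```

With two equally weighted real servers, consecutive requests simply alternate between them, which is exactly what the refresh test at the end of this article demonstrates.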
Take a look at the specific configuration below:
IV. Deployment steps and detailed explanation
According to the architecture diagram and address assignment above, we need to configure the following steps to complete this experiment.
4.1 NFS storage server configuration
1. Check the related software packages for the NFS service
[root@nfs ~]# rpm -q nfs-utils
nfs-utils-1.3.0-0.48.el7.x86_64
[root@nfs ~]# rpm -q rpcbind
rpcbind-0.2.0-42.el7.x86_64
[root@nfs ~]# mkdir /opt/ll /opt/cc    # create the site file storage directories
2. Partition two new disks and set them up
Sdb disk configuration:
[root@nfs ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x400f42da.
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@nfs ~]# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1       isize=512    agcount=4, agsize=1310656 blks
         =                sectsz=512   attr=2, projid32bit=1
         =                crc=1        finobt=0, sparse=0
data     =                bsize=4096   blocks=5242624, imaxpct=25
         =                sunit=0      swidth=0 blks
naming   =version 2       bsize=4096   ascii-ci=0 ftype=1
log      =internal log    bsize=4096   blocks=2560, version=2
         =                sectsz=512   sunit=0 blks, lazy-count=1
realtime =none            extsz=4096   blocks=0, rtextents=0
Sdc disk configuration:
[root@nfs ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x1ef07039.
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@nfs ~]# mkfs.xfs /dev/sdc1
meta-data=/dev/sdc1       isize=512    agcount=4, agsize=1310656 blks
         =                sectsz=512   attr=2, projid32bit=1
         =                crc=1        finobt=0, sparse=0
data     =                bsize=4096   blocks=5242624, imaxpct=25
         =                sunit=0      swidth=0 blks
naming   =version 2       bsize=4096   ascii-ci=0 ftype=1
log      =internal log    bsize=4096   blocks=2560, version=2
         =                sectsz=512   sunit=0 blks, lazy-count=1
realtime =none            extsz=4096   blocks=0, rtextents=0
3. Mount the two disks persistently via /etc/fstab
[root@nfs ~]# vim /etc/fstab          # add the mount entries at the end of the file
[root@nfs ~]# tail /etc/fstab
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=3f9b526a-3a51-4f87-b68a-37292b4e2e59 /      xfs  defaults 0 0
UUID=33d508c7-a776-4d6a-9c9b-a51bf3855004 /boot  xfs  defaults 0 0
UUID=90be4302-e340-4fe3-9ed2-3c40e346979e /home  xfs  defaults 0 0
UUID=09112ee8-0d24-4c5e-83d2-08c1f16bc738 swap   swap defaults 0 0
/dev/sdb1  /opt/ll  xfs  defaults 0 0
/dev/sdc1  /opt/cc  xfs  defaults 0 0
[root@nfs ~]# mount -a
[root@nfs ~]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda2      xfs        15G  3.7G   12G  25% /
devtmpfs       devtmpfs  898M     0  898M   0% /dev
tmpfs          tmpfs     912M     0  912M   0% /dev/shm
tmpfs          tmpfs     912M  9.0M  903M   1% /run
tmpfs          tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda5      xfs        11G   33M   11G   1% /home
/dev/sda1      xfs        30G  174M   30G   1% /boot
tmpfs          tmpfs     183M  4.0K  183M   1% /run/user/42
tmpfs          tmpfs     183M   16K  183M   1% /run/user/0
/dev/sdb1      xfs        20G   33M   20G   1% /opt/ll
/dev/sdc1      xfs        20G   33M   20G   1% /opt/cc
# edit the /etc/exports file
[root@nfs network-scripts]# vim /etc/exports
[root@nfs network-scripts]# cat /etc/exports
/opt/ll 192.168.10.0/24(rw,sync,no_root_squash)
/opt/cc 192.168.10.0/24(rw,sync,no_root_squash)
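Since a malformed /etc/fstab entry can prevent a clean boot, a quick syntax sanity check before running mount -a is cheap insurance. A minimal sketch, assuming the standard six-field fstab layout (`check_fstab` is our own hypothetical helper, not a system tool):

```shell
# Flag non-comment /etc/fstab lines that do not have exactly six fields.
# check_fstab is a hypothetical helper, not a standard utility.
check_fstab() {
    awk 'NF && $1 !~ /^#/ {
        if (NF != 6) { print "suspicious line: " $0; bad = 1 }
    } END { exit bad }' "$1"
}

# Typical usage on a live system:
# check_fstab /etc/fstab && echo "fstab looks OK"
```

This only counts fields; it does not validate devices or mount points, so treat it as a first-pass check before mount -a.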
4. Turn off the firewall, set the NIC to host-only mode, and configure a static IP address
[root@nfs ~]# systemctl stop firewalld.service
[root@nfs ~]# setenforce 0
[root@nfs ~]# cd /etc/sysconfig/network-scripts/
[root@nfs network-scripts]# vim ifcfg-ens33
[root@nfs network-scripts]# systemctl restart network
[root@nfs network-scripts]# ifconfig ens33
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.100  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::9be8:a170:f918:1f5  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:0b:d9:36  txqueuelen 1000  (Ethernet)
        RX packets 1151  bytes 685357 (669.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 444  bytes 39849 (38.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
5. Start the service
[root@nfs network-scripts]# systemctl start nfs
[root@nfs network-scripts]# systemctl start rpcbind
[root@nfs network-scripts]# showmount -e     # view the export information
Export list for nfs:
/opt/cc 192.168.10.0/24
/opt/ll 192.168.10.0/24
4.2 Web server configuration
1. Install the httpd website service on the two CentOS 7 virtual machines
[root@localhost ~]# hostnamectl set-hostname web1
[root@localhost ~]# su
[root@web1 ~]# yum install -y httpd
...    # output omitted
[root@localhost ~]# hostnamectl set-hostname web2
[root@localhost ~]# su
[root@web2 ~]# yum install -y httpd
...    # output omitted
2. Turn off the firewalls on both web servers
[root@web1 ~]# systemctl stop firewalld.service
[root@web1 ~]# setenforce 0
[root@web2 ~]# systemctl stop firewalld.service
[root@web2 ~]# setenforce 0
3. Configure the network card
# web1 configuration
[root@web1 ~]# cd /etc/sysconfig/network-scripts/
[root@web1 network-scripts]# vim ifcfg-ens33
[root@web1 network-scripts]# systemctl restart network
[root@web1 network-scripts]# ifconfig ens33
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.10  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::bdab:b59b:d041:d8b0  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:e6:6d:eb  txqueuelen 1000  (Ethernet)
        RX packets 726004  bytes 1067841474 (1018.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 345476  bytes 21387015 (20.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
# web2 configuration
[root@web2 ~]# cd /etc/sysconfig/network-scripts/
[root@web2 network-scripts]# vim ifcfg-ens33
[root@web2 network-scripts]# systemctl restart network
[root@web2 network-scripts]# ifconfig ens33
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.20  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::bdab:b59b:d041:d8b0  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:e6:6d:eb  txqueuelen 1000  (Ethernet)
        RX packets 726004  bytes 1067841474 (1018.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 345476  bytes 21387015 (20.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
4. Start two web site services and verify the mount
# web1
[root@web1 network-scripts]# systemctl start httpd
[root@web1 network-scripts]# netstat -natp | grep 80
tcp6    0    0 :::80    :::*    LISTEN    59242/httpd
[root@web1 network-scripts]# showmount -e 192.168.10.100
Export list for 192.168.10.100:
/opt/cc 192.168.10.0/24
/opt/ll 192.168.10.0/24
# web2
[root@web2 network-scripts]# netstat -natp | grep 80
[root@web2 network-scripts]# systemctl start httpd.service
[root@web2 network-scripts]# netstat -natp | grep 80
tcp6    0    0 :::80    :::*    LISTEN    54271/httpd
[root@web2 network-scripts]# showmount -e 192.168.10.100
Export list for 192.168.10.100:
/opt/cc 192.168.10.0/24
/opt/ll 192.168.10.0/24
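When several web servers depend on the same NFS exports, scripting this check can be handy. A small sketch that counts the exported directories in showmount -e style output (`count_exports` is a hypothetical helper; the sample string mirrors the export list above):

```shell
# Count exported directories in `showmount -e` style output.
# count_exports is a hypothetical helper for illustration.
count_exports() {
    # Skip the "Export list for ..." header line, count the rest
    awk 'NR > 1 && NF { n++ } END { print n + 0 }'
}

sample='Export list for 192.168.10.100:
/opt/cc 192.168.10.0/24
/opt/ll 192.168.10.0/24'

printf '%s\n' "$sample" | count_exports    # prints 2
```

On a live system the pipeline would be `showmount -e 192.168.10.100 | count_exports`, which makes it easy to fail fast in a provisioning script when an expected export is missing.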
5. Mount the shares on both servers and write test pages for the sites
# web1
[root@web1 network-scripts]# vim /etc/fstab
192.168.10.100:/opt/ll  /var/www/html  nfs  defaults,_netdev 0 0
[root@web1 network-scripts]# cd /var/www/html/
[root@web1 html]# ls
[root@web1 html]# vim index.html
[root@web1 html]# cat index.html
this is ll web
# web2
[root@web2 network-scripts]# vim /etc/fstab
192.168.10.100:/opt/cc  /var/www/html  nfs  defaults,_netdev 0 0
[root@web2 network-scripts]# cd /var/www/html/
[root@web2 html]# ls
[root@web2 html]# vim index.html
[root@web2 html]# cat index.html
this is cc web
4.3 LVS load balancer scheduling server configuration
1. Install the environment package
[root@localhost ~]# hostnamectl set-hostname lvs
[root@localhost ~]# su
[root@lvs ~]# yum install -y ipvsadm
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.163.com
 * updates: mirrors.tuna.tsinghua.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package ipvsadm.x86_64 0:1.27-7.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved
================================================================
 Package     Arch       Version       Repository   Size
================================================================
Installing:
 ipvsadm     x86_64     1.27-7.el7    base         45 k

Transaction Summary
================================================================
Install  1 Package

Total download size: 45 k
Installed size: 75 k
Downloading packages:
warning: /var/cache/yum/x86_64/7/base/packages/ipvsadm-1.27-7.el7.x86_64.rpm:
Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for ipvsadm-1.27-7.el7.x86_64.rpm is not installed
ipvsadm-1.27-7.el7.x86_64.rpm                  | 45 kB  00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key)"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-4.1708.el7.centos.x86_64 (@anaconda)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ipvsadm-1.27-7.el7.x86_64    1/1
  Verifying  : ipvsadm-1.27-7.el7.x86_64    1/1

Installed:
  ipvsadm.x86_64 0:1.27-7.el7

Complete!
2. Configure dual network cards
Add the second network card, and then configure it:
[root@lvs ~]# cd /etc/sysconfig/network-scripts/
[root@lvs network-scripts]# ls
ifcfg-ens33  ifdown-isdn     ifup          ifup-plip    ifup-tunnel
ifcfg-lo     ifdown-post     ifup-aliases  ifup-plusb   ifup-wireless
ifdown       ifdown-ppp      ifup-bnep     ifup-post    init.ipv6-global
ifdown-bnep  ifdown-routes   ifup-eth      ifup-ppp     network-functions
ifdown-eth   ifdown-sit      ifup-ib       ifup-routes  network-functions-ipv6
ifdown-ib    ifdown-Team     ifup-ippp     ifup-sit
ifdown-ippp  ifdown-TeamPort ifup-ipv6     ifup-Team
ifdown-ipv6  ifdown-tunnel   ifup-isdn     ifup-TeamPort
[root@lvs network-scripts]# vim ifcfg-ens33
[root@lvs network-scripts]# cp -p ifcfg-ens33 ifcfg-ens36
# configure the new network card and restart the network
[root@lvs network-scripts]# systemctl restart network
The information of the two network cards is as follows:
[root@lvs network-scripts]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::7eb1:2dde:8a54:6927  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:56:d3:4a  txqueuelen 1000  (Ethernet)
        RX packets 397693  bytes 574961333 (548.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 92656  bytes 5683776 (5.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens36: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.1  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::e638:fc7c:8a5b:dc5d  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:56:d3:54  txqueuelen 1000  (Ethernet)
        RX packets 51  bytes 6809 (6.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 83  bytes 13712 (13.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
3. Test connectivity to the internal network
[root@lvs network-scripts]# ping 192.168.10.10
PING 192.168.10.10 (192.168.10.10) 56(84) bytes of data.
64 bytes from 192.168.10.10: icmp_seq=1 ttl=64 time=0.552 ms
64 bytes from 192.168.10.10: icmp_seq=2 ttl=64 time=0.299 ms
64 bytes from 192.168.10.10: icmp_seq=3 ttl=64 time=0.255 ms
^C
--- 192.168.10.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.255/0.368/0.552/0.131 ms
[root@lvs network-scripts]# ping 192.168.10.20
PING 192.168.10.20 (192.168.10.20) 56(84) bytes of data.
64 bytes from 192.168.10.20: icmp_seq=1 ttl=64 time=0.536 ms
64 bytes from 192.168.10.20: icmp_seq=2 ttl=64 time=0.340 ms
^C
--- 192.168.10.20 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.340/0.438/0.536/0.098 ms
4. Enable the route forwarding function
[root@lvs network-scripts]# vim /etc/sysctl.conf    # add the setting at the end of the file
[root@lvs network-scripts]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.ip_forward=1
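To confirm that the setting really landed in the file before reloading, a grep is enough. A minimal sketch (`forwarding_enabled` is a hypothetical helper; on the live system you would follow it with `sysctl -p`):

```shell
# Return success if a sysctl-style file enables IPv4 forwarding.
# forwarding_enabled is a hypothetical helper, not a system command.
forwarding_enabled() {
    grep -Eq '^[[:space:]]*net\.ipv4\.ip_forward[[:space:]]*=[[:space:]]*1' "$1"
}

# Typical usage on the director:
# forwarding_enabled /etc/sysctl.conf && sysctl -p
```

Checking the file first avoids the silent case where the line was mistyped and NAT forwarding never actually comes up.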
5. Set up the firewall and its rules
[root@lvs network-scripts]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-02-20 09:34:20 CST; 8min ago
     Docs: man:firewalld(1)
At this point the firewall is running and does not need to be turned off; we can configure it with iptables instead.
# flush the filter table
[root@lvs network-scripts]# iptables -F
# flush the nat address translation table
[root@lvs network-scripts]# iptables -t nat -F
# configure the forwarding rule (recall the four tables and five chains)
[root@lvs network-scripts]# iptables -t nat -A POSTROUTING -o ens33 -s 192.168.10.0/24 -j SNAT --to-source 10.0.0.1
# load the route forwarding setting
[root@lvs network-scripts]# sysctl -p
net.ipv4.ip_forward = 1
6. Load LVS kernel module
[root@lvs network-scripts]# modprobe ip_vs          # load the module
[root@lvs network-scripts]# cat /proc/net/ip_vs     # view it
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@lvs network-scripts]# systemctl start ipvsadm
Job for ipvsadm.service failed because the control process exited with error code.
See "systemctl status ipvsadm.service" and "journalctl -xe" for details.
# on CentOS 7 the rules must first be saved with --save
[root@lvs network-scripts]# ipvsadm --save > /etc/sysconfig/ipvsadm
[root@lvs network-scripts]# systemctl start ipvsadm
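Once real servers have been added to the virtual-server table (as the script in the next step does), it is often useful to pull the real-server addresses back out of ipvsadm-style output in scripts. A sketch over a captured sample (`real_servers` is a hypothetical helper; the sample mimics the layout of the table, using numeric addresses as `ipvsadm -Ln` would print them):

```shell
# Extract real-server entries ("-> addr:port") from ipvsadm-style output.
# real_servers is a hypothetical helper for illustration.
real_servers() {
    awk '$1 == "->" && $2 != "RemoteAddress:Port" { print $2 }'
}

sample='IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  10.0.0.1:80 rr
  -> 192.168.10.10:80 Masq 1 0 0
  -> 192.168.10.20:80 Masq 1 0 0'

printf '%s\n' "$sample" | real_servers
```

On the director itself the pipeline would be `ipvsadm -Ln | real_servers`, e.g. to verify in a health-check script that both web servers are still registered.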
7. Define script
[root@lvs network-scripts]# cd /opt/
[root@lvs opt]# vim nat.sh
[root@lvs opt]# chmod 777 nat.sh
[root@lvs opt]# ./nat.sh
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  lvs:http rr
  -> 192.168.10.10:http  Masq  1  0  0
  -> 192.168.10.20:http  Masq  1  0  0
The script for nat.sh is as follows:
#!/bin/bash
# echo "1" > /proc/sys/net/ipv4/ip_forward   # already set via sysctl.conf, so commented out
ipvsadm -C                                   # clear/initialize the rules
ipvsadm -A -t 10.0.0.1:80 -s rr              # define the access entry; rr selects round robin
ipvsadm -a -t 10.0.0.1:80 -r 192.168.10.10:80 -m   # map to real server (NAT mode)
ipvsadm -a -t 10.0.0.1:80 -r 192.168.10.20:80 -m
ipvsadm                                      # show the resulting table
4.4 Test and verify with the public-network client
We can test with the Win10 virtual machine or a CentOS 7 client.
The client NIC must also be set to host-only mode with the IP address 10.0.0.10 (network segment 10.0.0.0/24), and its gateway must be the IP address of the LVS load balancer's external NIC, which is also the address visited in the client's browser. The test results are shown below:
1. Network card and network test
2. Test the website service and the round-robin mechanism
The client accesses the LVS external-network address; LVS acts as the middleman or bridge, and what is really being visited are the web and storage servers behind it.
Refreshing the page switches the response to the other server.
In a real production network the two stored pages would hold the same content; here they differ only to demonstrate the round-robin mechanism, which is worth keeping in mind.
This concludes the detailed introduction to the experimental configuration process and steps of NAT mode, one of the working modes of LVS.