2025-01-19 update. From SLTechnology News & Howtos (shulou.com), 06/03 report.
This article covers the key points of the LVS load scheduler in NAT mode, which we hope will help you in practical application. Load balancing is a broad topic; the theory is well covered in books and online, so here we focus on a hands-on walkthrough drawn from industry experience.
NAT mode (network address translation):
NAT mode resembles the private-network structure behind a firewall. The load scheduler acts as the gateway for all server nodes: it is the access entry point for clients and the exit point through which each node's responses return to clients.
The server nodes use private IP addresses and sit on the same physical network as the load scheduler. Because the real servers are never exposed directly, NAT mode offers better security than the other two LVS modes (direct routing and IP tunneling).
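To make the two rewrites concrete, here is a conceptual sketch (not LVS internals) of what the director does to each packet: inbound requests have their destination rewritten from the virtual IP to a real server, and outbound replies have their source rewritten back to the virtual IP, so the client only ever sees the VIP. The addresses mirror the lab environment in this article.

```python
# Conceptual sketch of NAT-mode forwarding. Illustrative only: real LVS does
# this in the kernel; the dict-based "packets" here are just for demonstration.

VIP = "12.0.0.1"                                   # virtual IP on the director's external NIC
REAL_SERVERS = ["192.168.13.151", "192.168.13.170"]

def inbound(packet, real_server):
    """Director rewrites (DNAT) a client request toward a chosen real server."""
    assert packet["dst"] == VIP
    return {**packet, "dst": real_server}

def outbound(packet):
    """Director rewrites (SNAT) the real server's reply so the client sees only the VIP."""
    assert packet["src"] in REAL_SERVERS
    return {**packet, "src": VIP}

request = {"src": "12.0.0.121", "dst": VIP}        # client -> VIP
to_server = inbound(request, REAL_SERVERS[0])      # dst becomes 192.168.13.151
reply = outbound({"src": to_server["dst"], "dst": request["src"]})
print(to_server["dst"], reply["src"])              # 192.168.13.151 12.0.0.1
```

This is why the real servers must use the director as their gateway: replies have to pass back through it to get their source address rewritten.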
Experimental schematic diagram (figure omitted)
Experimental environment:
- The LVS scheduler acts as the gateway for the web server pool. It has two NICs, one on the internal network and one on the external network, and uses the round-robin (rr) scheduling algorithm.
- LVS load scheduler: intranet NIC ens33, gateway 192.168.13.1; external NIC ens36, 12.0.0.1
- web1: 192.168.13.151
- web2: 192.168.13.170
- NFS server: 192.168.13.145
- Client test machine: 12.0.0.121
1. Add two hard drives to the NFS server for shared storage and format them
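The addressing plan above can be sanity-checked with a short standalone sketch (not part of the lab): every intranet host must fall inside the director's internal subnet, while the client sits outside it and can only reach the pool through the VIP.

```python
import ipaddress

# Addresses taken from the experimental environment listed above.
intranet = ipaddress.ip_network("192.168.13.0/24")
internal_hosts = ["192.168.13.1", "192.168.13.151",
                  "192.168.13.170", "192.168.13.145"]
client = ipaddress.ip_address("12.0.0.121")

# All pool-side hosts share the director's internal subnet...
assert all(ipaddress.ip_address(h) in intranet for h in internal_hosts)
# ...and the client does not: it reaches the pool only via the VIP 12.0.0.1.
assert client not in intranet
print("addressing plan is consistent")
```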
[root@nfs ~]# fdisk /dev/sdb                       ## partition the first new disk
Command (m for help): n                            ## create a new partition
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p                              ## primary partition
Partition number (1-4, default 1):                 ## press Enter for the default
First sector (2048-41943039, default 2048):        ## press Enter for the default
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):   ## press Enter
Partition 1 of type Linux and of size 20 GiB is set
Command (m for help): w                            ## save and exit
[root@nfs ~]# mkfs.xfs /dev/sdb1                   ## format the partition
## partition and format /dev/sdc the same way to get /dev/sdc1
[root@nfs ~]# mkdir /opt/kgc /opt/accp             ## create the mount points
[root@nfs ~]# vim /etc/fstab                       ## configure automatic mounting; append to the last line:
/dev/sdb1  /opt/kgc   xfs  defaults  0 0
/dev/sdc1  /opt/accp  xfs  defaults  0 0
[root@nfs ~]# mount -a                             ## mount everything listed in fstab
[root@nfs ~]# df -hT                               ## check the disk mount status
Filesystem     Type  Size  Used  Avail  Use%  Mounted on
/dev/sdb1      xfs   20G   33M   20G    1%    /opt/kgc
/dev/sdc1      xfs   20G   33M   20G    1%    /opt/accp
[root@nfs ~]# systemctl stop firewalld.service     ## turn off the firewall
[root@nfs ~]# setenforce 0                         ## set SELinux to permissive
[root@nfs ~]# rpm -q nfs-utils                     ## check that the two NFS packages are installed
nfs-utils-1.3.0-0.48.el7.x86_64
[root@nfs ~]# rpm -q rpcbind
rpcbind-0.2.0-42.el7.x86_64
[root@nfs ~]# vim /etc/exports                     ## configure the shared-storage exports:
/opt/kgc   192.168.13.0/24(rw,sync,no_root_squash)   ## read-write for the 13 subnet, synchronous writes, no root squashing
/opt/accp  192.168.13.0/24(rw,sync,no_root_squash)
2. Switch the NFS server's NIC to host-only mode and start the services
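Each /etc/exports entry above follows the pattern `path network(options)`. A small illustrative parser (a hypothetical helper, not an NFS tool, and a simplification of the real exports(5) grammar) shows how the pieces break down:

```python
import re

def parse_export(line):
    """Split an /etc/exports entry into (path, network, option list).
    Illustrative only; the real exports grammar allows more forms."""
    m = re.match(r"(\S+)\s+([\d./]+)\((.*)\)", line.strip())
    path, network, opts = m.groups()
    return path, network, opts.split(",")

path, network, opts = parse_export("/opt/kgc 192.168.13.0/24(rw,sync,no_root_squash)")
print(path, network, opts)
# /opt/kgc 192.168.13.0/24 ['rw', 'sync', 'no_root_squash']
```

Here `rw` grants read-write access, `sync` forces synchronous writes to disk, and `no_root_squash` lets a remote root keep root privileges on the share.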
[root@nfs ~]# systemctl start nfs                  ## start the NFS service
[root@nfs ~]# systemctl start rpcbind
[root@nfs ~]# showmount -e                         ## view the exported shares
Export list for nfs:
/opt/accp 192.168.13.0/24
/opt/kgc  192.168.13.0/24
[root@nfs ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33   ## edit the NIC configuration
BOOTPROTO=static                                   ## change to static addressing
...
IPADDR=192.168.13.145                              ## set the address, netmask and gateway
NETMASK=255.255.255.0
GATEWAY=192.168.13.1
[root@nfs ~]# service network restart              ## restart the network
3. Install the http service on web1 and web2, and switch their NICs to host-only mode
//web1 server//
[root@web1 ~]# yum install httpd -y                ## install the web service
[root@web1 ~]# systemctl stop firewalld.service    ## turn off the firewall
[root@web1 ~]# setenforce 0
[root@web1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33   ## edit the NIC configuration
BOOTPROTO=static                                   ## change to static addressing
...
IPADDR=192.168.13.151                              ## set the address, netmask and gateway
NETMASK=255.255.255.0
GATEWAY=192.168.13.1
[root@web1 ~]# service network restart
[root@web1 ~]# showmount -e 192.168.13.145         ## view the NFS sharing information
Export list for 192.168.13.145:
/opt/accp 192.168.13.0/24
/opt/kgc  192.168.13.0/24
[root@web1 ~]# vim /etc/fstab                      ## set up automatic mounting; append:
192.168.13.145:/opt/kgc  /var/www/html  nfs  defaults,_netdev  0 0   ## mount the share on the web root; type nfs, wait for the network device
[root@web1 ~]# mount -a                            ## refresh the mounts
[root@web1 ~]# df -hT                              ## view the disk mount information
Filesystem               Type  Size  Used  Avail  Use%  Mounted on
192.168.13.145:/opt/kgc  nfs4  20G   32M   20G    1%    /var/www/html
[root@web1 ~]# echo "this is kgc web" > /var/www/html/index.html   ## create a web page
[root@web1 ~]# systemctl start httpd.service       ## start the web service
//web2 server//
[root@web2 ~]# yum install httpd -y                ## install the web service
[root@web2 ~]# systemctl stop firewalld.service    ## turn off the firewall
[root@web2 ~]# setenforce 0
[root@web2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33   ## edit the NIC configuration
BOOTPROTO=static                                   ## change to static addressing
...
IPADDR=192.168.13.170                              ## set the address, netmask and gateway
NETMASK=255.255.255.0
GATEWAY=192.168.13.1
[root@web2 ~]# service network restart
[root@web2 ~]# showmount -e 192.168.13.145         ## view the NFS sharing information
Export list for 192.168.13.145:
/opt/accp 192.168.13.0/24
/opt/kgc  192.168.13.0/24
[root@web2 ~]# vim /etc/fstab                      ## set up automatic mounting; append:
192.168.13.145:/opt/accp  /var/www/html  nfs  defaults,_netdev  0 0   ## mount the share on the web root; type nfs, wait for the network device
[root@web2 ~]# mount -a                            ## refresh the mounts
[root@web2 ~]# df -hT                              ## view the disk mount information
Filesystem                Type  Size  Used  Avail  Use%  Mounted on
192.168.13.145:/opt/accp  nfs4  20G   32M   20G    1%    /var/www/html
[root@web2 ~]# echo "this is accp web" > /var/www/html/index.html   ## create a web page
[root@web2 ~]# systemctl start httpd.service       ## start the web service
4. Add two network cards to the LVS load scheduler and set up route forwarding
[root@lvs ~]# yum install ipvsadm -y               ## install the ipvsadm scheduling management tool
## add the external-network NIC (ens36) and switch it to host-only mode
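The fstab entries written on web1 and web2 follow the standard six-field format, with `_netdev` deferring the mount until the network is up (an NFS mount attempted before networking would fail at boot). A small illustrative breakdown of the fields, using web2's entry:

```python
def parse_fstab(line):
    """Split an fstab entry into its six standard fields.
    Illustrative helper, not a mount implementation."""
    device, mountpoint, fstype, options, dump, fsck = line.split()
    return device, mountpoint, fstype, options.split(","), int(dump), int(fsck)

entry = parse_fstab("192.168.13.145:/opt/accp /var/www/html nfs defaults,_netdev 0 0")
# _netdev tells the init system to wait for the network before mounting.
assert "_netdev" in entry[3]
print(entry[0], "->", entry[1])   # 192.168.13.145:/opt/accp -> /var/www/html
```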
[root@lvs ~]# cd /etc/sysconfig/network-scripts
[root@lvs network-scripts]# cp -p ifcfg-ens33 ifcfg-ens36   ## copy the NIC configuration file as the ens36 profile
[root@lvs network-scripts]# vim ifcfg-ens33
BOOTPROTO=static
IPADDR=192.168.13.1                                ## set the private (intranet) gateway IP
NETMASK=255.255.255.0
[root@lvs network-scripts]# vim ifcfg-ens36        ## delete the UUID line
BOOTPROTO=static
NAME=ens36                                         ## change the name and device to ens36
DEVICE=ens36
IPADDR=12.0.0.1                                    ## set the public (external) gateway IP
NETMASK=255.255.255.0
[root@lvs network-scripts]# service network restart    ## restart the network; you can ping from the web servers to test connectivity
[root@lvs network-scripts]# vim /etc/sysctl.conf   ## route-forwarding configuration file
net.ipv4.ip_forward=1                              ## enable route forwarding
[root@lvs network-scripts]# sysctl -p              ## load the new kernel parameter
[root@lvs network-scripts]# iptables -F            ## flush the forwarding (filter) table
[root@lvs network-scripts]# iptables -t nat -F     ## flush the nat address-translation table
[root@lvs network-scripts]# iptables -t nat -A POSTROUTING -o ens36 -s 192.168.13.0/24 -j SNAT --to-source 12.0.0.1
## append to the nat table: for traffic leaving via ens36 with a source address in 192.168.13.0/24, perform source NAT (SNAT) to 12.0.0.1
5. Load the LVS kernel module and configure the management software ipvsadm
[root@lvs network-scripts]# modprobe ip_vs         ## load the LVS kernel module
[root@lvs network-scripts]# cat /proc/net/ip_vs    ## view the ipvs information
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@lvs network-scripts]# ipvsadm --save > /etc/sysconfig/ipvsadm   ## on CentOS 7 the rules must be saved first
[root@lvs network-scripts]# systemctl start ipvsadm   ## start the ipvsadm service
[root@lvs network-scripts]# vim /opt/nat.sh        ## write an ipvsadm script:
#!/bin/bash
ipvsadm -C                                         ## clear the ipvs rules
ipvsadm -A -t 12.0.0.1:80 -s rr                    ## define the virtual service (client entry point) with the round-robin algorithm
ipvsadm -a -t 12.0.0.1:80 -r 192.168.13.151:80 -m  ## add a real server; -m selects NAT (masquerading) mode
ipvsadm -a -t 12.0.0.1:80 -r 192.168.13.170:80 -m
ipvsadm                                            ## list the rules
[root@lvs network-scripts]# chmod +x /opt/nat.sh   ## grant execute permission
[root@lvs network-scripts]# cd /opt/
[root@lvs opt]# ./nat.sh                           ## run the script
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  lvs:http rr
  -> 192.168.13.151:http  Masq  1  0  0
  -> 192.168.13.170:http  Masq  1  0  0
6. Access 12.0.0.1 from the client test machine (host-only mode)
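The rr algorithm selected by `-s rr` simply cycles through the real servers in order, one new connection at a time. A minimal simulation of that behavior (a conceptual sketch, not LVS internals):

```python
from itertools import cycle

# The two real servers registered in nat.sh above.
REAL_SERVERS = ["192.168.13.151", "192.168.13.170"]
scheduler = cycle(REAL_SERVERS)    # round-robin: each new connection gets the next server

# Six client connections alternate strictly between the two web servers,
# which is why refreshing the page at 12.0.0.1 flips between the two sites.
assignments = [next(scheduler) for _ in range(6)]
print(assignments)
# ['192.168.13.151', '192.168.13.170', '192.168.13.151',
#  '192.168.13.170', '192.168.13.151', '192.168.13.170']
```

With equal weights, rr gives each server exactly half the connections; weighted round-robin (wrr) would be the choice if the servers had different capacities.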
That covers the main points of the load scheduler in NAT mode. If there is anything else you need to know, you can look it up in industry resources or ask our professional technical engineers, who have more than ten years of industry experience. Official website: www.yisu.com
Welcome to subscribe to "Shulou Technology Information" to get the latest news, interesting stories and hot topics in the IT industry, and to keep up with the newest Internet news, technology news and IT industry trends.