
LVS Load Balancing: NAT Mode


Article directory:
1. Enterprise cluster application overview
2. Enterprise cluster classification
3. Load balancing cluster working mode analysis: (1) NAT mode (network address translation); (2) IP tunnel; (3) DR mode
4. Load balancing cluster architecture
5. LVS virtual server
6. NAT mode case experiment: Step 1, configure the storage server; Step 2, configure the two Apache servers; Step 3, configure LVS; Step 4, verify the results

1. Enterprise cluster application overview

In Internet applications, sites place ever higher demands on hardware performance, response speed, service stability and data reliability. A single server is far from sufficient, so multiple servers are combined into a cluster that still presents itself externally as a single whole, much like one "representative" speaking for the group.

What is a cluster: a cluster is made up of multiple hosts but appears externally as one system.

2. Enterprise cluster classification

By target, clusters can be divided into three types: load balancing clusters, high availability clusters, and high performance computing clusters.

(1) A load balancing cluster (Load Balance Cluster, LB) aims to improve the responsiveness of the application system and handle as many access requests as possible with as little latency as possible, achieving high concurrency and high load overall. How an LB cluster distributes load depends on the scheduling algorithm of the master node.

(2) A high availability cluster (High Availability Cluster, HA) aims to improve the reliability of the application system, reduce downtime as much as possible, and ensure continuity of service, achieving fault tolerance. HA clusters work in either duplex (active-active) or master-slave (active-standby) mode.

(3) A high performance computing cluster (High Performance Computing Cluster, HPC) aims to increase CPU computing speed and expand the hardware resources and analysis capability of the application system, obtaining computing power comparable to a mainframe or supercomputer. The high performance of an HPC cluster relies on "distributed computing" and "parallel computing": the CPUs, memory and other resources of multiple servers are integrated through dedicated hardware and software.

3. Load balancing cluster working mode analysis

Load balancing clusters are currently the most commonly used cluster type in enterprises. Cluster load balancing scheduling technology has three working modes: 1, address translation (NAT); 2, IP tunnel (TUN); 3, direct routing (DR).

(1) NAT mode: address translation (network address translation)

Similar to a firewall's private network structure, the load scheduler serves as the gateway for all server nodes: it is both the access entrance for clients and the exit through which each node's responses return to clients. The server nodes use private IP addresses and sit on the same physical network as the load scheduler, so security is better than in the other two modes. This mode has long been used in enterprises.

(2) IP tunnel mode (IP Tunnel, TUN)

An open network structure is adopted. The load scheduler serves only as the access entrance for clients; each node responds to clients directly through its own Internet connection rather than passing back through the load scheduler. The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels.

(3) DR mode: direct routing

A semi-open network structure is adopted, similar in layout to TUN mode, except that the nodes are not scattered across the Internet but located on the same physical network as the scheduler. The load scheduler connects to each node server over the local network, so no dedicated IP tunnel is needed.

4. Load balancing cluster architecture

Layer 1: load balancer
Layer 2: server pool
Layer 3: shared storage

[Figure: load balancing cluster structure chart]
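To make the three modes concrete: in the ipvsadm tool used later in this article, each working mode corresponds to a forwarding flag on the real-server entries. A minimal sketch, assuming a virtual service at 12.0.0.1:80 and a real server at 192.168.200.110 (addresses taken from the case in section 6):

    ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.110:80 -m   # -m: NAT mode (masquerading)
    ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.110:80 -i   # -i: IP tunnel mode (ipip encapsulation)
    ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.110:80 -g   # -g: DR mode (direct routing, the default)

Only one flag is used per real server; the NAT case in section 6 uses -m throughout.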

5. LVS virtual server

(1) LVS load scheduling algorithms:

1. Round robin (rr): distributes incoming access requests to each node in the cluster (each real server) in turn, treating every server equally regardless of its actual connection count and system load.
2. Weighted round robin (wrr): distributes requests in turn according to each real server's processing capacity; the scheduler can automatically query each node's load and dynamically adjust its weight, ensuring that servers with stronger processing capacity carry more of the traffic.
3. Least connections (lc): assigns incoming requests to the node with the fewest established connections, based on each real server's current connection count.
4. Weighted least connections (wlc): when server nodes differ in performance, weights can be adjusted automatically for each real server; nodes with higher weights carry a larger share of the active connection load.

(2) LVS clusters are created and managed with the ipvsadm tool.
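In ipvsadm, the scheduling algorithm is selected with the -s option when the virtual server is created, and per-node weights with -w on each real server. A brief sketch, assuming the virtual service 12.0.0.1:80 from the case below (ipvsadm -E modifies an existing virtual service):

    ipvsadm -A -t 12.0.0.1:80 -s rr                             # create with round robin
    ipvsadm -E -t 12.0.0.1:80 -s wlc                            # switch to weighted least connections
    ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.110:80 -m -w 3     # weight 3: carries a larger share
    ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.120:80 -m -w 1     # weight 1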

6. NAT mode case experiment:

Experimental environment (addresses follow the commands used below; the internal network is 192.168.200.0/24):

Role                        IP address
Web server 1                192.168.200.110
Web server 2                192.168.200.120
NFS storage                 192.168.200.130
LVS, ens33 (internal NIC)   192.168.200.1
LVS, ens36 (external NIC)   12.0.0.1

Prepare five virtual machines: one Windows client for testing, one server as the LVS scheduler, two web servers (Apache), and one server for NFS storage. Set all hosts to host-only mode. Give the LVS server two network cards, one with the private network address and one with the public network address, and map the NAT address. Goal of the experiment: a client on the public network can reach the NFS-backed web cluster by visiting the public address.

Step 1: configure the storage server

(1) The storage server needs two packages, rpcbind and nfs-utils; if they are missing, install them with yum.

(2) Edit the access rules:

1. Create the benet and accp directories under /opt and grant permissions:

[root@localhost ~]# cd /opt/
[root@localhost opt]# mkdir benet accp
[root@localhost opt]# chmod 777 accp/ benet/

2. Set the rules with vim /etc/exports, adding the following lines to the file:

/usr/share *(ro,sync)
/opt/benet 192.168.200.0/24(rw,sync)
/opt/accp 192.168.200.0/24(rw,sync)

3. Publish the shares: exportfs -rv

(3) Start the services:

systemctl start rpcbind
systemctl start nfs.service
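At this point the exports can be checked with showmount (also part of nfs-utils), run from the storage server or either web server. A quick verification sketch, assuming the addresses above; the output should list the three shares, roughly:

    showmount -e 192.168.200.130
    # Export list for 192.168.200.130:
    # /usr/share  *
    # /opt/benet  192.168.200.0/24
    # /opt/accp   192.168.200.0/24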

Step 2: configure the two Apache servers

(1) Install the Apache service (yum install httpd -y) and start it: systemctl start httpd

(2) Mount the storage server's shares locally:

1. On the first Apache server, mount the share and create a test page:

mount.nfs 192.168.200.130:/opt/accp /var/www/html/
echo "this is accp web" > /var/www/html/index.html

2. On the second Apache server, do the same:

mount.nfs 192.168.200.130:/opt/benet /var/www/html/
echo "this is benet web" > /var/www/html/index.html

Note: the pages live on the storage server, using its memory and disk; they occupy no local resources on the web servers.

Step 3: configure LVS

(1) Install the ipvsadm service: yum install ipvsadm -y

(2) As the gateway, the LVS server needs route forwarding enabled. Edit /etc/sysctl.conf and add this line:

net.ipv4.ip_forward=1

Apply it with sysctl -p.

(3) Set up the address mapping:

iptables -t nat -F
iptables -F
iptables -t nat -A POSTROUTING -o ens36 -s 192.168.200.0/24 -j SNAT --to-source 12.0.0.1

(4) Load the module: modprobe ip_vs

(5) Enable ipvsadm:

1. Back up first: ipvsadm --save > /etc/sysconfig/ipvsadm
2. Start the service: systemctl start ipvsadm.service

(6) Write a script (nat.sh) to add the rules:

1. The script:

#!/bin/bash
ipvsadm -C                                           # clear all records in the kernel virtual server table
ipvsadm -A -t 12.0.0.1:80 -s rr                      # create the virtual server, round-robin scheduling
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.110:80 -m   # add real server 1, NAT mode
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.120:80 -m   # add real server 2, NAT mode

2. Add execute permission and run it:

chmod +x nat.sh
./nat.sh

Step 4: verify the results

In the Windows 7 client's browser, visit 12.0.0.1. Thanks to NAT mode and the shared storage, the web pages on the two Apache servers can be reached directly, which shows that the cluster is working.
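After the script runs, the LVS rule table can also be inspected on the scheduler. The exact counters will differ, but the output should resemble this sketch:

    ipvsadm -ln
    # IP Virtual Server version 1.2.1 (size=4096)
    # Prot LocalAddress:Port Scheduler Flags
    #   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    # TCP  12.0.0.1:80 rr
    #   -> 192.168.200.110:80           Masq    1      0          0
    #   -> 192.168.200.120:80           Masq    1      0          0

Because the scheduler rotates requests round robin, refreshing the client browser (or running curl 12.0.0.1 repeatedly) should alternate between "this is accp web" and "this is benet web".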
