2025-04-05 Update From: SLTechnology News&Howtos > Servers
Shulou(Shulou.com)06/03 Report--
The following is an overview of the meaning and working modes of load balancing clusters, which we hope will help you in practical application. Load balancing involves many practical details but little theory, and there is plenty of material online; here we draw on accumulated industry experience to explain it.
The meaning of a cluster
A cluster is composed of multiple hosts, but externally it behaves as a single whole.
In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability and data reliability, a single server can no longer cope. There are two ways forward:
Use expensive minicomputers or mainframes
Build a service cluster out of ordinary servers
Enterprise clusters can be divided into three types according to their target:
Load balancing cluster
High availability cluster
High performance computing cluster
1. Load balancing cluster
The goal is to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, achieving high concurrency and high load capacity overall (LB).
The load distribution of LB depends on the scheduling algorithm of the master node.
2. High availability cluster
The goal is to improve the reliability of the application system, reduce the interruption time as much as possible, ensure the continuity of services, and achieve the fault tolerance effect of high availability (HA).
HA works in duplex (active-active) or master-slave (active-standby) mode.
3. High performance computing cluster
With the goal of improving the CPU computing speed of the application system and expanding hardware resources and analytical capability, it obtains high-performance computing (HPC) power equivalent to that of a large supercomputer.
The high performance of high-performance computing clusters depends on "distributed computing" and "parallel computing". Through dedicated hardware and software, resources such as CPU and memory of multiple servers are integrated together to realize the computing power that only large supercomputers have.
Analysis of load balancing cluster working modes
The load balancing cluster is currently the most commonly used cluster type in enterprises. There are three working modes of cluster load scheduling technology:
Address translation
IP tunnel
Direct routing
1. NAT mode (Network Address Translation)
Similar to a firewall's private network structure, the load scheduler acts as the gateway of all server nodes, i.e. as both the access entrance for clients and the exit through which each node responds to clients.
The server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes.
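As a minimal sketch of NAT mode, a virtual service can be created with ipvsadm; all addresses below are invented for illustration, and the commands must run as root on the scheduler:

```shell
# Illustrative LVS-NAT setup (assumed VIP 172.16.16.172, two private nodes).
# The scheduler is the nodes' gateway, so IP forwarding must be on.
sysctl -w net.ipv4.ip_forward=1

# Create a virtual TCP service on the public VIP with round-robin scheduling.
ipvsadm -A -t 172.16.16.172:80 -s rr

# Add two private-network real servers; -m selects masquerading (NAT) forwarding.
ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.21:80 -m
ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.22:80 -m
```

Because the scheduler rewrites addresses in both directions, every reply passes back through it, which is why NAT mode is the simplest but also the first to bottleneck.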
2. TUN mode (IP Tunnel)
It adopts an open network structure: the load scheduler serves only as the access entrance for clients, and each node responds to clients directly through its own Internet connection rather than through the load scheduler.
The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels.
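A sketch of the TUN-mode real-server side, under assumed addresses and run as root (the scheduler-side command is shown as a comment):

```shell
# Illustrative TUN-mode node setup; the VIP 172.16.16.172 is an assumption.
modprobe ipip                              # load the IP-in-IP tunnel module
ip addr add 172.16.16.172/32 dev tunl0     # bind the VIP to the tunnel device
ip link set tunl0 up

# On the scheduler, a real server is added with -i (IP-in-IP tunneling), e.g.:
# ipvsadm -a -t 172.16.16.172:80 -r 202.103.24.31 -i
```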
3. DR mode (Direct Routing)
It adopts a semi-open network structure similar to that of TUN mode, but the nodes are not scattered in different places: they sit on the same physical network as the scheduler.
The load scheduler connects with each node server over the local network, so there is no need to establish dedicated IP tunnels.
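A sketch of the DR-mode scheduler side; the interface alias and all addresses are assumptions, and the commands must run as root:

```shell
# Illustrative LVS-DR scheduler setup (assumed VIP and interface ens33).
# Bind the VIP to an alias of the outward-facing interface.
ifconfig ens33:0 172.16.16.172 broadcast 172.16.16.172 netmask 255.255.255.255 up

ipvsadm -A -t 172.16.16.172:80 -s rr
# -g = gatewaying (direct routing): the scheduler forwards by rewriting
# the destination MAC address, so replies bypass it entirely.
ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.21:80 -g
ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.22:80 -g
```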
Load balancing cluster structure:
Layer 1: load scheduler
Layer 2: server pool
Layer 3: shared storage
About LVS (Linux Virtual Server): confirm kernel support for LVS:
modprobe ip_vs
cat /proc/net/ip_vs
Load scheduling algorithms of LVS
1. Round robin (rr):
Access requests received are distributed in turn to each node (real server) in the cluster; every server is treated equally, regardless of its actual connection count and system load.
2. Weighted round robin (wrr):
Requests are distributed according to the processing capacity of each real server; the scheduler can automatically query the load of each node and adjust its weight dynamically.
This ensures that servers with stronger processing power carry more of the access traffic.
3. Least connections (lc):
Requests are allocated according to the number of connections established by each real server, giving priority to the node with the fewest connections.
4. Weighted least connections (wlc):
When the performance of server nodes varies greatly, the weight can be adjusted automatically for each real server.
Nodes with higher weights bear a greater share of the active connection load.
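To make the round-robin idea concrete, here is a toy shell simulation (not LVS itself) that deals requests out in turn across three hypothetical real servers:

```shell
# Toy simulation of the round-robin (rr) idea only -- not the LVS scheduler.
# Three hypothetical real servers; requests are handed out in strict rotation.
servers="192.168.7.21 192.168.7.22 192.168.7.23"
n=3
i=0
for req in 1 2 3 4 5 6; do
  pick=$(( i % n + 1 ))                          # cycle 1, 2, 3, 1, 2, 3...
  server=$(echo "$servers" | cut -d' ' -f"$pick")
  echo "request $req -> $server"
  i=$(( i + 1 ))
done
```

The weighted variants differ only in that a node with weight 2 would appear twice per rotation, and the least-connections variants pick by current connection count instead of position.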
Using the ipvsadm tool to create and manage LVS clusters:
1. Create a virtual server
2. Add and delete server nodes
3. View the status of the cluster and its nodes
4. Save the load distribution policy
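The four management tasks above map onto ipvsadm commands roughly as follows; the VIP and node addresses are assumptions, and the commands must run as root:

```shell
# Illustrative ipvsadm management commands (assumed addresses).
ipvsadm -A -t 172.16.16.172:80 -s rr                  # 1. create a virtual server
ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.21:80 -g  # 2. add a server node
ipvsadm -d -t 172.16.16.172:80 -r 192.168.7.21:80     # 2. delete a server node
ipvsadm -Ln                                           # 3. view cluster and node status
ipvsadm-save > /etc/sysconfig/ipvsadm                 # 4. save the policy
```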
The ARP problem in LVS-DR
In an LVS-DR load balancing cluster, the load balancer and the node servers all have to be configured with the same VIP address. Having the same IP address in the local area network is bound to cause confusion in the ARP communication of these servers:
When an ARP broadcast is sent to the LVS-DR cluster, the load balancer and the node servers are connected to the same network, so they all receive the ARP broadcast.
Only the front-end load balancer should respond; the other node servers should not respond to the ARP broadcast.
Configure the node servers so that they do not respond to ARP requests for the VIP:
Use the virtual interface lo:0 to host the VIP address, and set the kernel parameter arp_ignore=1 so that the system only responds to ARP requests whose destination IP is a local address.
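A sketch of this node-server configuration, under an assumed VIP and run as root:

```shell
# Illustrative LVS-DR real-server side (assumed VIP 172.16.16.172).
# Host the VIP on loopback alias lo:0 so the node accepts VIP traffic
# without advertising the VIP on the LAN.
ifconfig lo:0 172.16.16.172 broadcast 172.16.16.172 netmask 255.255.255.255 up
route add -host 172.16.16.172 dev lo:0

# arp_ignore=1: reply only to ARP requests for an address on the inbound interface.
# arp_announce=2: always use the best local address as the ARP source,
# so the VIP never appears in announcements and the router keeps the
# Director's MAC in its ARP table.
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```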
There is a second problem: after the router receives an ARP reply, it updates its ARP table, and the entry mapping the VIP to the Director's MAC address could be overwritten with a RealServer's MAC address. This is why the node servers must also avoid announcing the VIP in ARP.
The implementation principle of keepalived
Keepalived uses the VRRP hot-backup protocol to implement multi-machine hot backup on Linux servers.
VRRP (Virtual Router Redundancy Protocol) is a backup solution for routers.
Keepalived case explanation
Keepalived can implement multi-machine hot backup. Each hot-backup group contains multiple servers; the most common configuration is dual-machine hot backup.
Failover in dual-machine hot backup is realized by the drift of a virtual IP address, which suits all kinds of application servers.
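A minimal, illustrative keepalived configuration for dual-machine hot backup; the interface name, router ID, priority, password and VIP are all assumptions, and the backup machine uses the same file with state BACKUP and a lower priority:

```shell
# Write an illustrative /etc/keepalived/keepalived.conf (run as root).
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER            # the peer machine uses BACKUP
    interface ens33
    virtual_router_id 51    # must match on both machines
    priority 100            # the peer uses a lower value, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.16.172       # the drifting VIP
    }
}
EOF
```

When the master fails, the backup stops seeing VRRP advertisements and takes over the VIP, which is exactly the "drift" described above.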
That concludes this overview of the meaning and working modes of load balancing clusters. If there is anything else you need to know, you can look it up in the industry information or consult a professional technical engineer.