2025-01-16 Update From: SLTechnology News & Howtos > Servers
Shulou (Shulou.com) 06/03 Report
This article gives an overview of the application and classification of server clusters, in the hope that it helps you in practice. Load balancing involves many moving parts, but not much theory, and plenty of material is available online; here we draw on accumulated industry experience to walk through it.
Overview of Cluster Application
The meaning of clustering
1. Also referred to as a cluster or server group
2. Composed of multiple hosts, but presented externally as a single whole
In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, data reliability and so on, a single server can no longer meet the requirements.
Solution method
1. Use expensive minicomputers and mainframes
2. Build a service cluster from ordinary servers
Enterprise cluster classification
Depending on the cluster's target, clusters fall into three types:
1. Load balancing clusters (round robin, weighted least connections)
2. High availability clusters (access speed, reliability)
3. High performance computing clusters (concurrent task processing)
Load balancing Cluster (Load Balance Cluster)
1. The goal is to improve the application system's responsiveness and handle as many access requests as possible with low latency, achieving high concurrency and high load handling (LB) overall.
2. The load distribution of LB depends on the scheduling algorithm of the master node.
High availability Cluster (High Availability Cluster)
1. The goal is to improve the reliability of the application system and reduce downtime as much as possible, ensuring continuity of service and achieving the fault tolerance of high availability (HA).
2. HA works in duplex (active-active) and master-slave (active-standby) modes.
High performance Computing Cluster (High Performance Computer Cluster)
1. The goal is to increase the application system's CPU speed and expand its hardware resources and analysis capability, obtaining high performance computing (HPC) power comparable to that of large machines and supercomputers.
2. The high performance of an HPC cluster relies on "distributed computing" and "parallel computing". Through dedicated hardware and software, the CPU, memory and other resources of multiple servers are combined to achieve computing power otherwise found only in large machines and supercomputers.
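The "split the work, compute in parallel, combine the results" idea behind HPC clusters can be sketched in a few lines of Python. This is a toy illustration, not part of the original setup and not a real HPC framework (real clusters use things like MPI): a CPU-bound job is split into chunks, each chunk is handled by a separate worker process, and the partial results are merged.

```python
# Toy sketch of parallel computing: split a CPU-bound task across processes.
from multiprocessing import Pool

def sum_squares(chunk):
    # Each worker processes one slice of the data independently.
    return sum(x * x for x in chunk)

def parallel_sum_squares(data, workers=4):
    # Split the data into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        partials = pool.map(sum_squares, chunks)  # chunks run in parallel
    return sum(partials)  # combine the partial results

if __name__ == "__main__":
    print(parallel_sum_squares(list(range(1000))))
```

The same divide-combine pattern underlies real distributed computing; the cluster's job is to make many machines' CPUs and memory behave like the workers in this pool.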
Analysis of working Mode of load balancing Cluster
Load balancing cluster is the most widely used cluster type in enterprises at present.
The load scheduling technology of the cluster has three working modes:
1. Address translation (NAT)
2. IP tunnel (TUN)
3. Direct routing (DR)
NAT mode
Address Translation (Network Address Translation):
1. Referred to as NAT mode. Similar to a firewall's private network structure, the load scheduler acts as the gateway for all server nodes: it is both the access entrance for clients and the exit through which each node responds to clients.
2. The server nodes use private IP addresses and sit on the same physical network as the load scheduler, so security is better than in the other two modes.
TUN mode
IP Tunnel (IP Tunnel):
1. Referred to as TUN mode. It adopts an open network structure: the load scheduler serves only as the access entrance for clients, and each node responds to clients directly through its own Internet connection rather than back through the scheduler.
2. The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels.
DR mode
Direct routing (Direct Routing)
1. Referred to as DR mode. It adopts a semi-open network structure similar to that of TUN mode, but the nodes are not scattered across different sites; they sit on the same physical network as the scheduler.
2. The load scheduler reaches each node server over the local network, so no dedicated IP tunnel is needed.
Load balancing cluster architecture
The structure of load balancing:
1. Tier 1: load scheduler (Load Balancer or Director)
2. Tier 2: server pool (Server Pool)
3. Tier 3: shared storage (Shared Storage)
Load scheduling algorithm of LVS
1. Round Robin (rr):
1. Received access requests are distributed in turn to each node (real server) in the cluster
2. Every server is treated equally, regardless of its actual connection count and system load
2. Weighted Round Robin (wrr):
1. Requests are distributed according to each real server's processing capacity; the scheduler can also query each node's load automatically and adjust the weights dynamically
2. Ensures that servers with stronger processing capacity carry more of the access traffic
3. Least Connections (lc):
Based on the number of established connections on each real server, incoming requests are assigned preferentially to the node with the fewest connections
4. Weighted Least Connections (wlc):
1. When server nodes differ greatly in performance, the weights can be adjusted automatically for each real server
2. Nodes with higher weights bear a larger share of the active connection load
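To make the selection logic concrete, here is a small Python sketch of the first and last of these algorithms. This is an illustration, not actual LVS code: the server names and connection counts are invented, and real LVS does this inside the kernel.

```python
# Illustration of two LVS-style scheduling algorithms (not real LVS code).
from itertools import cycle

def make_round_robin(servers):
    # Round robin (rr): hand out servers in fixed order, ignoring load.
    it = cycle(servers)
    return lambda: next(it)

def weighted_least_connections(servers):
    # Weighted least connections (wlc): pick the server with the lowest
    # active-connections-to-weight ratio, so higher-weight nodes
    # carry a larger share of the connection load.
    # `servers` is a list of dicts: {"name", "conns", "weight"}.
    return min(servers, key=lambda s: s["conns"] / s["weight"])["name"]

pick = make_round_robin(["web1", "web2"])
print(pick(), pick(), pick())  # web1 web2 web1

pool = [
    {"name": "web1", "conns": 10, "weight": 1},  # ratio 10.0
    {"name": "web2", "conns": 12, "weight": 3},  # ratio 4.0 -> chosen
]
print(weighted_least_connections(pool))  # web2
```

Round robin is what the NAT experiment below uses (`-s rr`); wlc is the usual default when node capacities differ.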
NFS shared storage service
Network File System (NFS)
1. Depends on RPC (remote procedure call)
2. Requires the nfs-utils and rpcbind packages
3. System services: nfs, rpcbind
4. Shared configuration file: /etc/exports
Access NFS shared resources in the client
1. Install the rpcbind package and start the rpcbind service
2. Manually mount the NFS shared directory
3. Configure automatic mounting in /etc/fstab
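For step 3, an /etc/fstab line for the share used later in this walkthrough might look like the following. The server IP and directories match this experiment; the mount options shown are one common choice, not the only one.

```shell
# /etc/fstab entry — mount the NFS share automatically at boot
# device                    mountpoint     type  options           dump pass
192.168.200.130:/opt/kgc    /var/www/html  nfs   defaults,_netdev  0    0
```

The `_netdev` option tells the system to wait for the network before attempting the mount.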
Experimental procedure
Install ipvsadm management tools on the scheduler server
# first add a second network adapter, so the scheduler has two NICs
# install the ipvsadm management tool
[root@localhost ~]# yum install ipvsadm -y
Operations on two web node servers
# install httpd service
[root@localhost ~]# yum install httpd -y
Operations on shared storage servers
# use rpm to check whether the nfs-utils and rpcbind packages are installed
[root@localhost ~]# rpm -q nfs-utils
nfs-utils-1.3.0-0.48.el7.x86_64
[root@localhost ~]# rpm -q rpcbind
rpcbind-0.2.0-42.el7.x86_64
Configure a shared storage server
# modify the configuration of ens33 Nic
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"        # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.130    # append the IP address, subnet mask and gateway after the last line
NETMASK=255.255.255.0
GATEWAY=192.168.200.1
# restart the network service
[root@localhost ~]# systemctl restart network
# turn off firewall and security features
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
# enable the sharing services (rpcbind must be running before nfs)
[root@localhost ~]# systemctl start rpcbind.service
[root@localhost ~]# systemctl start nfs.service
# Editing shared directory configuration files
[root@localhost ~]# vim /etc/exports
# write the shared directory entries and grant read/write permission
/usr/share *(ro,sync)
/opt/accp 192.168.200.0/24(rw,sync)
/opt/kgc 192.168.200.0/24(rw,sync)
[root@localhost ~]# cd /opt/
[root@localhost opt]# mkdir kgc accp
[root@localhost opt]# chmod 777 kgc/ accp/    # open up the directory permissions
[root@localhost opt]# exportfs -rv            # publish the shared directories
exporting 192.168.200.0/24:/opt/kgc
exporting 192.168.200.0/24:/opt/accp
exporting *:/usr/share
Configure the Web1 node server
# modify the configuration of ens33 Nic
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"        # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.110    # append the IP address, subnet mask and gateway
NETMASK=255.255.255.0
GATEWAY=192.168.200.1
# restart the network service
[root@localhost ~]# systemctl restart network
# turn off firewall and security features
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl start httpd.service
[root@localhost ~]# netstat -ntap | grep 80
tcp6    0    0 :::80    :::*    LISTEN    7315/httpd
[root@localhost ~]# ping 192.168.200.130
PING 192.168.200.130 (192.168.200.130) 56(84) bytes of data.
64 bytes from 192.168.200.130: icmp_seq=1 ttl=64 time=0.754 ms
64 bytes from 192.168.200.130: icmp_seq=2 ttl=64 time=0.372 ms
64 bytes from 192.168.200.130: icmp_seq=3 ttl=64 time=0.372 ms
[root@localhost ~]# showmount -e 192.168.200.130
Export list for 192.168.200.130:
/usr/share *
/opt/kgc 192.168.200.0/24
/opt/accp 192.168.200.0/24
# mount the website directory
[root@localhost ~]# mount.nfs 192.168.200.130:/opt/kgc /var/www/html/
[root@localhost ~]# cd /var/www/html/
[root@localhost html]# echo "this is kgc web" > index.html
[root@localhost html]# ls
index.html
Make sure to check for site files on the storage server
[root@localhost ~]# cd /opt/
[root@localhost opt]# ls
accp kgc rh
[root@localhost opt]# cd kgc/
[root@localhost kgc]# cat index.html
this is kgc web
Verify the web pages provided by the Web1 node server
Configure the Web2 node server
# modify the configuration of ens33 Nic
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"        # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.120    # append the IP address, subnet mask and gateway
NETMASK=255.255.255.0
GATEWAY=192.168.200.1
[root@localhost ~]# systemctl restart network
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl start httpd.service
[root@localhost ~]# netstat -ntap | grep 80
tcp6    0    0 :::80    :::*    LISTEN    7315/httpd
[root@localhost ~]# ping 192.168.200.130
PING 192.168.200.130 (192.168.200.130) 56(84) bytes of data.
64 bytes from 192.168.200.130: icmp_seq=1 ttl=64 time=0.853 ms
64 bytes from 192.168.200.130: icmp_seq=2 ttl=64 time=0.853 ms
64 bytes from 192.168.200.130: icmp_seq=3 ttl=64 time=0.624 ms
[root@localhost ~]# showmount -e 192.168.200.130
Export list for 192.168.200.130:
/usr/share *
/opt/kgc 192.168.200.0/24
/opt/accp 192.168.200.0/24
[root@localhost ~]# mount.nfs 192.168.200.130:/opt/accp /var/www/html/
[root@localhost ~]# cd /var/www/html/
[root@localhost html]# echo "this is accp web" > index.html
[root@localhost html]# cat index.html
this is accp web
Make sure to check for site files on the storage server
[root@localhost ~]# ls /opt/
accp kgc rh
[root@localhost ~]# cd /opt/accp/
[root@localhost accp]# cat index.html
this is accp web
Verify the web pages provided by the Web2 node server
Configure the scheduling server
# modify the configuration of ens33 Nic
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"        # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.1      # append the IP address and subnet mask
NETMASK=255.255.255.0
[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# ls
ifcfg-ens33  ifdown-ppp     ifup-ib    ifup-Team
ifcfg-lo     ifdown-routes  ifup-ippp  ifup-TeamPort
[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-ens36
# modify ens36 Nic
[root@localhost network-scripts]# vim ifcfg-ens36
BOOTPROTO="static"        # change dhcp to static
NAME="ens36"              # rename to ens36
# delete the UUID line
DEVICE="ens36"            # rename to ens36
ONBOOT="yes"
IPADDR=12.0.0.1           # append the IP address and subnet mask
NETMASK=255.255.255.0
[root@localhost network-scripts]# systemctl restart network
[root@localhost network-scripts]# vim /etc/sysctl.conf
# append this entry at the end of the file
net.ipv4.ip_forward=1
# load the routing/forwarding setting
[root@localhost network-scripts]# sysctl -p
net.ipv4.ip_forward = 1
[root@localhost network-scripts]# iptables -t nat -F
[root@localhost network-scripts]# iptables -F
# configure the SNAT forwarding rule
[root@localhost network-scripts]# iptables -t nat -A POSTROUTING -o ens36 -s 192.168.200.0/24 -j SNAT --to-source 12.0.0.1
Load LVS kernel module
[root@localhost network-scripts]# modprobe ip_vs
[root@localhost network-scripts]# cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
Save the configuration item and start the service
# Save the settings
[root@localhost network-scripts]# ipvsadm --save > /etc/sysconfig/ipvsadm
[root@localhost network-scripts]# systemctl start ipvsadm.service
# configure load distribution policy
[root@localhost network-scripts]# cd /opt/
[root@localhost opt]# vim nat.sh
#!/bin/bash
# clear all records in the kernel virtual server table
ipvsadm -C
# add a new virtual server with round robin scheduling
ipvsadm -A -t 12.0.0.1:80 -s rr
# add the two real servers in NAT (masquerading) mode
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.110:80 -m
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.120:80 -m
# apply the load distribution policy
[root@localhost opt]# source nat.sh
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  localhost.localdomain:http rr
  -> 192.168.200.110:http    Masq    1    0    0
  -> 192.168.200.120:http    Masq    1    0    0
Use a Windows 7 client to access http://12.0.0.1/; refreshing the page should alternate between the two node servers' pages, since the scheduler uses round robin.
This concludes the overview of cluster application and classification. If there is anything else you need to know, you can look it up in industry resources or consult a professional technical engineer.