2025-01-18 Update From: SLTechnology News & Howtos
This article describes in detail how to deploy a Kubernetes cluster with LVS and keepalived. The walkthrough is long, so it is recommended to follow the steps in order.
I. Deployment environment

1.1 Host list

| Hostname | CentOS version | IP | Docker version | Flannel version | Keepalived version | Host config | Remarks |
| --- | --- | --- | --- | --- | --- | --- | --- |
| lvs-keepalived01 | 7.6.1810 | 172.27.34.28 | / | / | v1.3.5 | 4C4G | lvs-keepalived |
| lvs-keepalived02 | 7.6.1810 | 172.27.34.29 | / | / | v1.3.5 | 4C4G | lvs-keepalived |
| master01 | 7.6.1810 | 172.27.34.35 | 18.09.9 | v0.11.0 | / | 4C4G | control plane |
| master02 | 7.6.1810 | 172.27.34.36 | 18.09.9 | v0.11.0 | / | 4C4G | control plane |
| master03 | 7.6.1810 | 172.27.34.37 | 18.09.9 | v0.11.0 | / | 4C4G | control plane |
| work01 | 7.6.1810 | 172.27.34.161 | 18.09.9 | / | / | 4C4G | worker node |
| work02 | 7.6.1810 | 172.27.34.162 | 18.09.9 | / | / | 4C4G | worker node |
| work03 | 7.6.1810 | 172.27.34.163 | 18.09.9 | / | / | 4C4G | worker node |
| VIP | 7.6.1810 | 172.27.34.222 | / | / | v1.3.5 | 4C4G | floats on the two lvs-keepalived hosts |
| client | 7.6.1810 | 172.27.34.85 | / | / | / | 4C4G | client |
There are 9 servers in total: a 2-node lvs-keepalived cluster, a 3-node control plane, 3 worker nodes, and 1 client.
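Three control plane nodes is the minimum useful odd count: kubeadm's stacked etcd keeps a member on each one, and etcd stays available only while a majority (quorum) of members is up. A quick arithmetic sketch (illustration only, not part of the deployment):

```shell
# Quorum = floor(n/2) + 1, so an n-member etcd cluster tolerates
# n - quorum simultaneous member failures.
etcd_tolerable_failures() {
    local n=$1
    local quorum=$(( n / 2 + 1 ))
    echo $(( n - quorum ))
}

etcd_tolerable_failures 3   # -> 1
etcd_tolerable_failures 5   # -> 2
etcd_tolerable_failures 4   # -> 1 (an even 4th member adds no tolerance)
```

This is why node counts are kept odd: going from 3 to 4 members raises the quorum without raising the number of failures survived.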
1.2 k8s version

| Hostname | kubelet version | kubeadm version | kubectl version | Remarks |
| --- | --- | --- | --- | --- |
| master01 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
| master02 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
| master03 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
| work01 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
| work02 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
| work03 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
| client | / | / | v1.16.4 | client |

II. Highly available architecture

1. Architecture diagram
This article uses kubeadm to build a highly available k8s cluster. High availability of the cluster is really high availability of its core components; for the apiserver, cluster mode is used. The architecture is as follows:
2. Highly available architecture description

| Core component | High availability mode | Implementation |
| --- | --- | --- |
| apiserver | cluster | lvs+keepalived |
| controller-manager | active/standby | leader election |
| scheduler | active/standby | leader election |
| etcd | cluster | kubeadm |

The apiserver achieves high availability through lvs-keepalived: the VIP distributes requests to the apiserver of each control plane node. The controller-manager elects a leader internally within k8s (controlled by the --leader-elect flag, default true); only one controller-manager is active in the cluster at any time. The scheduler likewise elects a leader internally (--leader-elect, default true); only one scheduler is active at any time. etcd achieves high availability because kubeadm automatically forms it into a cluster; the number of nodes deployed is odd, and a 3-node cluster tolerates at most one machine down.

III. CentOS 7.6 installation
All servers in this article run CentOS 7.6. For details, see: full record of installation and optimization of the CentOS 7.6 operating system.
The firewall and selinux were disabled and the Ali yum repository was configured when CentOS was installed.
IV. Preparatory work for k8s cluster installation
Both control plane and work nodes perform this part of the operation; master01 is used as the example to record the process.
1. Configure hostname

1.1 Modify hostname

[root@centos7 ~]# hostnamectl set-hostname master01
[root@centos7 ~]# more /etc/hostname
master01
Log out and back in to see the newly set hostname master01; change each server to its corresponding hostname.
1.2 Modify the hosts file

[root@master01 ~]# cat >> /etc/hosts

Then distribute admin.conf and set KUBECONFIG on the other masters:

[root@master02 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master02 ~]# source .bash_profile
[root@master03 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master03 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master03 ~]# source .bash_profile
This step is done so that kubectl commands can also be executed on master02 and master03.
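A hedged refinement of the export step (an illustration, not from the original article): append the KUBECONFIG line only when it is not already present, so re-running the setup never duplicates it. The TARGET file is parameterized here for demonstration; on a real master it would be ~/.bash_profile.

```shell
# Idempotent append: add the export line only if an identical line
# is not already in the profile file.
TARGET="${TARGET:-/tmp/demo_bash_profile}"   # stand-in for ~/.bash_profile
LINE='export KUBECONFIG=/etc/kubernetes/admin.conf'
touch "$TARGET"
grep -qxF "$LINE" "$TARGET" || echo "$LINE" >> "$TARGET"
# Running this block any number of times leaves exactly one KUBECONFIG line.
```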
5. View k8s cluster nodes

[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get po -o wide -n kube-system
The flannel image download failed on master01 and master03; after manually pulling the image on each of those nodes, the pods returned to normal.
[root@master01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/loong576/flannel:v0.11.0-amd64
[root@master03 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/loong576/flannel:v0.11.0-amd64
9. Work nodes join the k8s cluster

1. work01 joins the k8s cluster

[root@work01 ~]# kubeadm join 172.27.34.222:6443 --token lw90fv.j1lease5jhzj9ih3 --discovery-token-ca-cert-hash sha256:79575e7a39eac086e121364f79e58a33f9c9de2a4e9162ad81d0abd1958b24f4
Run the join command generated during master initialization to add the work node to the cluster.
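For reference, the --discovery-token-ca-cert-hash value in the join command is, per kubeadm's documented method, the sha256 digest of the DER-encoded public key of the cluster CA. The sketch below demonstrates the derivation with a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt (an assumption for demonstration; on a real master you would point at the actual CA file):

```shell
# Generate a disposable CA certificate as a stand-in for the cluster CA.
ca=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
    -keyout "$ca/ca.key" -out "$ca/ca.crt" 2>/dev/null

# Extract the public key, DER-encode it, and take its sha256 digest.
openssl x509 -pubkey -noout -in "$ca/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print "sha256:" $NF}'    # prints sha256:<64 hex chars>

rm -rf "$ca"
```

Running the same pipeline against /etc/kubernetes/pki/ca.crt on master01 reproduces the hash used above.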
2. work02 joins the k8s cluster

[root@work02 ~]# kubeadm join 172.27.34.222:6443 --token lw90fv.j1lease5jhzj9ih3 --discovery-token-ca-cert-hash sha256:79575e7a39eac086e121364f79e58a33f9c9de2a4e9162ad81d0abd1958b24f4
3. work03 joins the k8s cluster

[root@work03 ~]# kubeadm join 172.27.34.222:6443 --token lw90fv.j1lease5jhzj9ih3 --discovery-token-ca-cert-hash sha256:79575e7a39eac086e121364f79e58a33f9c9de2a4e9162ad81d0abd1958b24f4
4. View all nodes in the k8s cluster

[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get po -o wide -n kube-system
10. Ipvs installation
Both lvs-keepalived01 and lvs-keepalived02 do this.
1. Install ipvs
LVS itself does not need to be installed (it is built into the kernel); only its management tools do. The first is ipvsadm, which manages LVS from the command line; the second is keepalived, which manages it by reading a configuration file.
[root@lvs-keepalived01 ~]# yum -y install ipvsadm
2. Load the ipvsadm module
Load the ipvsadm module into the system
[root@lvs-keepalived01 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@lvs-keepalived01 ~]# lsmod | grep ip_vs
ip_vs                 145497  0
nf_conntrack          133095  1 ip_vs
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
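For orientation, the virtual service that keepalived will build in the next section can also be expressed directly in ipvsadm-save rule format (loadable with ipvsadm -R as root). This fragment is a sketch of the same topology, shown for illustration only; in this deployment keepalived creates the rules itself:

```text
# ipvsadm rule file (ipvsadm-save format): VIP 172.27.34.222:6443,
# weighted round-robin (-s wrr), direct routing (-g), weight 10 per real server.
-A -t 172.27.34.222:6443 -s wrr
-a -t 172.27.34.222:6443 -r 172.27.34.35:6443 -g -w 10
-a -t 172.27.34.222:6443 -r 172.27.34.36:6443 -g -w 10
-a -t 172.27.34.222:6443 -r 172.27.34.37:6443 -g -w 10
```

Hand-loaded rules are not health-checked; that is precisely what the keepalived configuration below adds.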
For more information on lvs-related practices, please see LVS+Keepalived+Nginx load balancing Building and testing.
11. Keepalived installation
Both lvs-keepalived01 and lvs-keepalived02 do this.
1. Install keepalived

[root@lvs-keepalived01 ~]# yum -y install keepalived
2. Keepalived configuration
The lvs-keepalived01 configuration is as follows:
[root@lvs-keepalived01 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lvs-keepalived01   # machine identifier, usually the hostname (but not required to be); used in email notifications on failure
}

vrrp_instance VI_1 {             # vrrp instance definition
    state MASTER                 # state of this lvs node, MASTER or BACKUP; must be uppercase
    interface ens160             # interface that provides external service
    virtual_router_id 100        # virtual router tag, a number; must be identical within the same vrrp instance
    priority 100                 # priority; the higher the number, the higher the priority; within a vrrp_instance, MASTER must be higher than BACKUP
    advert_int 1                 # interval in seconds for sync checks between master and backup load balancers
    authentication {             # authentication type and password
        auth_type PASS           # PASS or AH; MASTER and BACKUP passwords must match within the same vrrp_instance
        auth_pass 1111
    }
    virtual_ipaddress {          # virtual ip addresses, one per line; multiple allowed
        172.27.34.222
    }
}

virtual_server 172.27.34.222 6443 {   # virtual server: specify the virtual ip and service port
    delay_loop 6                 # health check interval
    lb_algo wrr                  # load balancing scheduling algorithm
    lb_kind DR                   # load balancing forwarding rule
    # persistence_timeout 50     # session persistence time; very useful for dynamic web pages
    protocol TCP                 # forwarding protocol type: TCP or UDP
    real_server 172.27.34.35 6443 {   # real server node 1: real IP and port
        weight 10                # weight; the larger the number, the higher the weight
        TCP_CHECK {              # realserver status check settings, in seconds
            connect_timeout 10   # connection timeout: 10 seconds
            retry 3              # number of reconnection attempts
            delay_before_retry 3 # retry interval
            connect_port 6443    # check port; must match the port above
        }
    }
    real_server 172.27.34.36 6443 {   # real server node 2
        weight 10
        TCP_CHECK {
            connect_timeout 10
            retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
    real_server 172.27.34.37 6443 {   # real server node 3
        weight 10
        TCP_CHECK {
            connect_timeout 10
            retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
}
The lvs-keepalived02 configuration is as follows:
[root@lvs-keepalived02 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lvs-keepalived02   # machine identifier, usually the hostname
}

vrrp_instance VI_1 {
    state BACKUP                 # this node is the BACKUP; must be uppercase
    interface ens160
    virtual_router_id 100        # must match the MASTER's virtual_router_id
    priority 90                  # lower than the MASTER's priority of 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.27.34.222
    }
}

virtual_server 172.27.34.222 6443 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    # persistence_timeout 50
    protocol TCP
    real_server 172.27.34.35 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
            retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
    real_server 172.27.34.36 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
            retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
    real_server 172.27.34.37 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
            retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
}

3. Remove the VIP from master01

[root@master01 ~]# ifconfig ens160:2 172.27.34.222 netmask 255.255.255.0 down
This removes the VIP 172.27.34.222 that was bound on master01 for initialization.
4. Start keepalived
Start keepalived on both lvs-keepalived01 and lvs-keepalived02 and set it to start at boot.
[root@lvs-keepalived01 ~]# service keepalived start
Redirecting to /bin/systemctl start keepalived.service
[root@lvs-keepalived01 ~]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

5. View the VIP

[root@lvs-keepalived01 ~]# ip a
At this point the VIP is on lvs-keepalived01.
12. Control plane node configuration
All control plane nodes perform this operation.
1. Create a new realserver.sh
Bind the VIP on a loopback alias of each server hosting the control plane and suppress ARP responses for it. The configuration is identical on all three control plane nodes, as follows:
[root@master01 ~]# cd /etc/rc.d/init.d/
[root@master01 init.d]# more realserver.sh
#!/bin/bash
SNS_VIP=172.27.34.222
case "$1" in
start)
    ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
    /sbin/route add -host $SNS_VIP dev lo:0
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p > /dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    ifconfig lo:0 down
    route del $SNS_VIP > /dev/null 2>&1
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stoped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0
This script lets the control plane nodes bind the VIP while suppressing ARP responses for it. Without this, the nodes (which all have the VIP bound) would answer ARP broadcasts for the VIP and conflict with the load balancer.
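The echo-into-/proc settings the script applies are lost if a node reboots before the script runs again. An equivalent persistent form is a sysctl fragment (this file path and approach are an assumption, not from the original article; the script above remains the article's method):

```text
# /etc/sysctl.d/90-lvs-dr.conf (hypothetical path): persist the DR-mode ARP
# suppression that realserver.sh sets at runtime; apply with `sysctl --system`.
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```

Note this covers only the ARP settings; the VIP binding on lo:0 still has to be done by the script or by the network configuration.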
2. Run the realserver.sh script
Execute the realserver.sh script on all control plane nodes:
[root@master01 init.d]# chmod u+x realserver.sh
[root@master01 init.d]# /etc/rc.d/init.d/realserver.sh start
RealServer Start OK
Grant execute permission to the realserver.sh script and run it.
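The script also needs to survive reboots. The article does this via rc.local; on systemd-based CentOS 7, a small unit file is an equivalent alternative (a sketch; the unit name realserver.service is an assumption, not from the original article):

```text
# /etc/systemd/system/realserver.service (hypothetical name);
# enable with `systemctl enable realserver`.
[Unit]
Description=Bind VIP on loopback and suppress ARP for LVS DR mode
After=network.target

[Service]
Type=oneshot
ExecStart=/etc/rc.d/init.d/realserver.sh start
ExecStop=/etc/rc.d/init.d/realserver.sh stop
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Type=oneshot with RemainAfterExit=yes makes systemd treat the VIP binding as "active" after the script exits, so ExecStop runs on shutdown.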
3. Run realserver.sh at boot

[root@master01 init.d]# sed -i '$a /etc/rc.d/init.d/realserver.sh start' /etc/rc.d/rc.local
[root@master01 init.d]# chmod u+x /etc/rc.d/rc.local

XIII. Client configuration

1. Set the kubernetes source

1.1 Add the Kubernetes source

[root@client ~]# cat > ...
[root@client ~]# source .bash_profile

3.4 Load the environment variable

[root@master01 ~]# echo "source ...