This article describes how to deploy k8s v1.16 on Centos7.6. The steps are practical, and I hope you will find them useful.
I. Deployment environment
List of hosts:
Hostname    Centos version   IP              Docker version   Flannel version   Keepalived version   Host configuration   Remarks
master01    7.6.1810         172.27.34.3     18.09.9          v0.11.0           v1.3.5               4C4G                 control plane
master02    7.6.1810         172.27.34.4     18.09.9          v0.11.0           v1.3.5               4C4G                 control plane
master03    7.6.1810         172.27.34.5     18.09.9          v0.11.0           v1.3.5               4C4G                 control plane
work01      7.6.1810         172.27.34.93    18.09.9          /                 /                    4C4G                 worker node
work02      7.6.1810         172.27.34.94    18.09.9          /                 /                    4C4G                 worker node
work03      7.6.1810         172.27.34.95    18.09.9          /                 /                    4C4G                 worker node
VIP         7.6.1810         172.27.34.130   18.09.9          v0.11.0           v1.3.5               4C4G                 floats on the control plane nodes
client      7.6.1810         172.27.34.234   /                /                 /                    4C4G                 client
There are 7 servers in total: 3 control plane nodes, 3 work nodes, and 1 client.
K8s version:
Hostname    kubelet version   kubeadm version   kubectl version   Remarks
master01    v1.16.4           v1.16.4           v1.16.4           kubectl optional
master02    v1.16.4           v1.16.4           v1.16.4           kubectl optional
master03    v1.16.4           v1.16.4           v1.16.4           kubectl optional
work01      v1.16.4           v1.16.4           v1.16.4           kubectl optional
work02      v1.16.4           v1.16.4           v1.16.4           kubectl optional
work03      v1.16.4           v1.16.4           v1.16.4           kubectl optional
client      /                 /                 v1.16.4           client

II. Highly available architecture
This article uses kubeadm to build a high-availability k8s cluster. High availability of a k8s cluster really means high availability of its core components. The master/slave mode is used here, with the following architecture:
Description of the highly available architecture in master/slave mode:
Core component        High availability mode   High availability implementation
apiserver             active/standby           keepalived
controller-manager    active/standby           leader election
scheduler             active/standby           leader election
etcd                  cluster                  kubeadm
Apiserver achieves high availability through keepalived: when the node holding the VIP fails, keepalived transfers the VIP to another control plane node (a minimal configuration sketch is given after these component descriptions).
Controller-manager: a leader is elected within the k8s cluster (controlled by the --leader-elect option, which defaults to true), so only one controller-manager instance is active in the cluster at any time.
Scheduler: a leader is elected within the k8s cluster (controlled by the --leader-elect option, which defaults to true), so only one scheduler instance is active in the cluster at any time.
Etcd: kubeadm automatically creates the etcd cluster during initialization to achieve high availability. The number of etcd nodes should be odd; because the cluster needs a quorum (a majority of members) to keep working, a 3-node etcd cluster tolerates at most one node failure.
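For reference, below is a minimal keepalived configuration sketch for the apiserver VIP on master01. The interface name (ens160), virtual_router_id, priority values and password are assumptions that must be adapted to the actual environment, and no apiserver health-check script is included; this is a sketch, not the exact configuration used in this deployment.

[root@master01 ~]# cat > /etc/keepalived/keepalived.conf <<EOF
! Minimal sketch: master01 holds the VIP; use state BACKUP and a lower priority on master02/master03
global_defs {
   router_id master01
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160          ! assumed NIC name, check with "ip addr"
    virtual_router_id 50
    priority 100              ! e.g. 90 on master02, 80 on master03
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s
    }
    virtual_ipaddress {
        172.27.34.130
    }
}
EOF
[root@master01 ~]# systemctl enable keepalived && systemctl restart keepalived

master02 and master03 get the same file with state BACKUP and lower priorities; when the node holding 172.27.34.130 goes down, keepalived moves the VIP to one of the backups, which is the apiserver failover behaviour described above.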
III. Preparation for installation
Both control plane and work nodes perform this part of the operation.
For details of the Centos7.6 installation, see: Full Record of Installation and Optimization of the Centos7.6 Operating System.
The firewall and selinux were already disabled, and the Ali (Aliyun) yum source was configured, when Centos was installed.
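Before continuing, it is worth double-checking those settings on every node, since kubeadm also refuses to run with swap enabled. A small verification sketch (standard CentOS 7 commands, added here for convenience and not part of the original procedure):

[root@master01 ~]# systemctl is-active firewalld   # expect "inactive" (or "unknown" if the unit is removed)
[root@master01 ~]# getenforce                      # expect Disabled or Permissive
[root@master01 ~]# free -m | grep -i swap          # the Swap line should show 0 total
[root@master01 ~]# swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab   # if swap is still on: disable it now and comment it out of fstab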
1. Configure hostname
1.1 Modify hostname
[root@centos7 ~]# hostnamectl set-hostname master01
[root@centos7 ~]# more /etc/hostname
master01
Log out and log back in to display the newly set hostname master01
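The same hostnamectl command is then run on every other machine with its own name (master02, master03, work01-03, client). Purely as a convenience, if passwordless root SSH between the hosts is already available, the remaining hostnames can be set in one pass; the loop below is a sketch using the IPs from the host list and is not part of the original procedure:

for node in 172.27.34.4:master02 172.27.34.5:master03 172.27.34.93:work01 172.27.34.94:work02 172.27.34.95:work03 172.27.34.234:client; do
    ip=${node%%:*}; name=${node##*:}                  # split each "ip:hostname" pair
    ssh root@"$ip" "hostnamectl set-hostname $name"   # set the static hostname remotely
done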
1.2 Modify the hosts file
[root@master01 ~]# cat >> /etc/hosts <<EOF
172.27.34.3 master01
172.27.34.4 master02
172.27.34.5 master03
172.27.34.93 work01
172.27.34.94 work02
172.27.34.95 work03
EOF

Make the br_netfilter module load at boot:
[root@master01 ~]# cat >> /etc/rc.sysinit <<EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x \$file ] && \$file
done
EOF
[root@master01 ~]# cat > /etc/sysconfig/modules/br_netfilter.modules <<EOF
modprobe br_netfilter
EOF
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules

4. Load the environment variable on master02 and master03
[root@master02 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master02 ~]# source .bash_profile
[root@master03 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master03 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master03 ~]# source .bash_profile
This step is done so that kubectl commands can also be executed on master02 and master03.
5. Cluster nodes view
[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get po -o wide -n kube-system
All control plane nodes are in the ready state and all system components are normal.
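To see the leader election described in section II in action: in k8s v1.16 the controller-manager and scheduler keep their leader lock as an annotation on an endpoints object in kube-system (newer releases use leases), so the current leader can be read like this (a check sketch; the annotation value names the master currently holding the lock):

[root@master01 ~]# kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane.alpha.kubernetes.io/leader
[root@master01 ~]# kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep control-plane.alpha.kubernetes.io/leader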
9. Work nodes join the cluster
1. Work01 joins the cluster
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
Run the join command that was generated when the master was initialized to add the work node to the cluster (if the token has expired, see the sketch after the work03 step below).
2. Work02 joins the cluster
3. Work03 joins the cluster
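work02 and work03 run exactly the same kubeadm join command as work01. The bootstrap token printed by kubeadm init is only valid for 24 hours by default; if it has expired by the time a node joins, a fresh join command (new token plus the CA cert hash) can be generated on master01:

[root@master01 ~]# kubeadm token create --print-join-command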
4. Cluster nodes view
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   44m     v1.16.4
master02   Ready    master   33m     v1.16.4
master03   Ready    master   23m     v1.16.4
work01     Ready    <none>   11m     v1.16.4
work02     Ready    <none>   7m50s   v1.16.4
work03     Ready    <none>   3m4s    v1.16.4
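The ROLES column shows <none> for the work nodes because kubeadm only labels the control plane nodes. This is optional and purely cosmetic, but if you want a role displayed for the workers, the nodes can be labelled manually, e.g.:

[root@master01 ~]# kubectl label node work01 node-role.kubernetes.io/worker=worker
[root@master01 ~]# kubectl label node work02 node-role.kubernetes.io/worker=worker
[root@master01 ~]# kubectl label node work03 node-role.kubernetes.io/worker=worker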
10. Client configuration
1. Set kubernetes source
1.1 Add Kubernetes source
Add the same Aliyun Kubernetes yum repository that was used on the cluster nodes, then install kubectl v1.16.4 on the client. To let the client reach the cluster, copy admin.conf from master01 and load it:
[root@client ~]# mkdir -p /etc/kubernetes
[root@client ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@client ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@client ~]# source .bash_profile
3.4 Load the environment variable (kubectl command completion)
[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
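As a final check, the client (like any host that has loaded admin.conf) should be able to reach the apiserver through the VIP 172.27.34.130; a short test sketch, assuming only the kubeconfig copied above:

[root@client ~]# kubectl cluster-info
[root@client ~]# kubectl get nodes -o wide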