2025-03-31 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report
This article focuses on how to deploy a highly available k8s architecture. The method introduced here is simple, fast, and practical; if you are interested in k8s high availability, read on and try it yourself.
Overview
This article introduces sealos: with it, Kubernetes high availability no longer needs keepalived, haproxy, or ansible. sealyun's customized kubeadm proxies multiple masters through ipvs, solving the k8s high-availability problem elegantly.
Environment introduction

ip            role
10.103.97.200 master0
10.103.97.201 master1
10.103.97.202 master2
10.103.97.2   virtual IP
apiserver.cluster.local is the resolution name used to reach the apiserver.
Download the kubernetes 1.14.0+ offline package
Initialize on each node:

```shell
tar zxvf kube1.14.0.tar.gz && cd kube/shell && sh init.sh
```
Replace the system kubeadm with the customized kubeadm from the downloaded package:

```shell
cp kubeadm /usr/bin/kubeadm
```

kubeadm configuration file
cat kubeadm-config.yaml:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
controlPlaneEndpoint: "apiserver.cluster.local:6443"   # use the resolution name to access the apiserver
apiServer:
  certSANs:
  - 127.0.0.1
  - apiserver.cluster.local
  - 172.20.241.205
  - 172.20.241.206
  - 172.20.241.207
  - 172.20.241.208
  - 10.103.97.2   # the virtual IP (and the other addresses) are added to the certificate
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  excludeCIDRs:
  - "10.103.97.2/32"   # without this, kube-proxy periodically cleans user-created ipvs rules, which breaks the proxy
```
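The two kube-proxy settings above (ipvs mode and the excluded VIP) are easy to get wrong. A minimal pre-flight sketch, my own helper and not part of sealos, can confirm them before running kubeadm init:

```shell
# Hypothetical pre-flight check (not sealos code): verify that the kubeadm
# config enables ipvs mode and excludes the VIP from kube-proxy cleanup.
check_ipvs_config() {
  local cfg="$1" vip="$2"
  grep -q 'mode: "ipvs"' "$cfg" || { echo "kube-proxy is not in ipvs mode"; return 1; }
  grep -q "$vip" "$cfg"         || { echo "VIP $vip missing from excludeCIDRs"; return 1; }
  echo "ok"
}

# Example (assumes kubeadm-config.yaml from above is in the current directory):
# check_ipvs_config kubeadm-config.yaml 10.103.97.2/32
```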
On master0 10.103.97.200
```shell
echo "10.103.97.200 apiserver.cluster.local" >> /etc/hosts
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs
mkdir ~/.kube && cp /etc/kubernetes/admin.conf ~/.kube/config
# install calico
kubectl apply -f https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
```
After execution, logs are printed that contain the commands the other nodes need to join.
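If the init output was saved to a file, the join command can be fished back out later. This is a rough sketch of my own (it assumes kubeadm's usual habit of printing the join command with backslash line continuations, and is not part of sealos):

```shell
# Hypothetical helper: print the "kubeadm join ..." command from a saved
# kubeadm init log, following backslash-continued lines until one ends
# without a trailing backslash.
extract_join() {
  sed -n '/kubeadm join/,/[^\\]$/p' "$1"
}
```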
On master1 10.103.97.201
```shell
# Attention: resolve the name to master0 before installation, and change it to
# your own address after the installation succeeds, because kubelet and
# kube-proxy are configured with this resolution name.
echo "10.103.97.200 apiserver.cluster.local" >> /etc/hosts
kubeadm join apiserver.cluster.local:6443 --token 9vr73a.a8uxyaju799qwdjv \
    --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 \
    --experimental-control-plane \
    --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
# if you don't change the resolution for the whole cluster, point the name at your own local address
sed "s/10.103.97.200/10.103.97.201/g" -i /etc/hosts
```
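The hosts repointing step is the same on every additional master, only the local IP changes. A small helper of my own (hypothetical, not sealos code) makes the intent explicit:

```shell
# Hypothetical helper (not sealos code): after this master joins, repoint
# apiserver.cluster.local in a hosts file from the bootstrap master to
# this node's own IP, since kubelet and kube-proxy keep using this name.
repoint_apiserver() {
  local hosts="$1" my_ip="$2"
  sed -i "s/^.* apiserver\.cluster\.local$/${my_ip} apiserver.cluster.local/" "$hosts"
}

# Example usage on master1:
# repoint_apiserver /etc/hosts 10.103.97.201
```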
On master2 10.103.97.202, same as master1
```shell
echo "10.103.97.200 apiserver.cluster.local" >> /etc/hosts
kubeadm join apiserver.cluster.local:6443 --token 9vr73a.a8uxyaju799qwdjv \
    --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 \
    --experimental-control-plane \
    --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
sed "s/10.103.97.200/10.103.97.202/g" -i /etc/hosts
```

On the node nodes
Join to the masters through the virtual IP. The join command creates an ipvs rule on the node: the virtual server is the virtual IP and the real servers are the three masters. A daemon, run as a static pod, then guards these rules: as soon as an apiserver becomes unreachable, its real server is removed, and it is added back once the apiserver responds again.
```shell
echo "10.103.97.2 apiserver.cluster.local" >> /etc/hosts   # use the vip
kubeadm join 10.103.97.2:6443 --token 9vr73a.a8uxyaju799qwdjv \
    --master 10.103.97.200:6443 \
    --master 10.103.97.201:6443 \
    --master 10.103.97.202:6443 \
    --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
```

Architecture

(The original ASCII diagram did not survive extraction; it showed the virtual server 10.103.97.2:6443 on each node forwarding to the three masters.)
Each node in the cluster creates an ipvs rule that proxies all master nodes; the ipvs setup is similar to kube-proxy's own implementation.
Then start a daemon to check the health.
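The guard logic can be sketched as a dry run. This is my own illustration of the idea, not sealyun's actual implementation: the health check is stubbed out and the block only prints the ipvsadm commands it would run, so the realserver set tracks apiserver health.

```shell
# Dry-run sketch (assumption: my own code, not sealyun's lvscare) of the guard
# loop: for each master, print the ipvsadm command that would sync the
# realserver list with that apiserver's health.
VIP="10.103.97.2"
MASTERS="10.103.97.200 10.103.97.201 10.103.97.202"

guard_once() {
  local check="$1"   # a command: "$check <ip>" exits 0 if that apiserver is healthy
  local m
  for m in $MASTERS; do
    if "$check" "$m"; then
      echo "ipvsadm -a -t ${VIP}:6443 -r ${m}:6443 -m"   # (re)add healthy realserver
    else
      echo "ipvsadm -d -t ${VIP}:6443 -r ${m}:6443"      # drop unreachable realserver
    fi
  done
}

# Example stub: pretend only master0 is down.
# Prints a delete for 10.103.97.200 and adds for 10.103.97.201/.202.
fake_check() { [ "$1" != "10.103.97.200" ]; }
guard_once fake_check
```

In the real daemon, the stub would be an HTTPS probe of each apiserver's port 6443, and the ipvsadm commands would actually be executed in a loop.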
The guard daemon is deployed as a static pod: /etc/kubernetes/manifests/sealyun-lvscare.yaml.

At this point, I believe you have a better understanding of how to deploy a highly available k8s architecture. Try it out in practice!