Today I will talk about how sealos installs a highly available (HA) Kubernetes cluster. Many people may not know much about it, so I have summarized the steps below; I hope you get something out of this article.
Overview
This article shows you how to build a highly available Kubernetes cluster with a single command, without relying on haproxy, keepalived, or ansible. The apiserver is load balanced through kernel IPVS, with health checks on the apiservers.
Prerequisites
Install and start docker. Download the offline installation package and copy it to the /root directory of all nodes; there is no need to extract it. If you have a file server, sealos can also wget the package from that server onto all nodes.
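For example, a minimal sketch of copying the package to every node by hand; the node IPs and package name are taken from the install example below and are placeholders for your own environment:

for host in 192.168.0.2 192.168.0.3 192.168.0.4 192.168.0.5; do
    scp kube1.14.1.tar.gz root@"$host":/root/   # prompts for each password unless ssh keys are set up
done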
Installation
The sealos binary ships inside the offline package, in the kube/bin directory after decompression (you only need to extract one package to get the sealos binary).
sealos init --master 192.168.0.2 \
    --master 192.168.0.3 \
    --master 192.168.0.4 \           # master address list
    --node 192.168.0.5 \             # node address list
    --user root \                    # ssh user name on the servers
    --passwd your-server-password \  # server password, used to execute commands remotely
    --pkg kube1.14.1.tar.gz \        # offline package name
    --version v1.14.1                # kubernetes version of the offline package, used to render the kubeadm config
And then... that's it. There is nothing else to do.
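As a quick sanity check after the command finishes, you can run the usual kubectl commands on any master (nothing here is sealos-specific):

kubectl get nodes -o wide          # the three masters and the node should all be Ready
kubectl get pods -n kube-system    # apiserver, controller-manager, scheduler, etcd and the network plugin should be Running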
Other parameters
--kubeadm-config string   # path to a local kubeadm-config.yaml, a custom kubeadm configuration file. If you pass this, sealos will not render the kubeadm config itself
--pkg-url string          # e.g. http://store.lameleg.com/kube1.14.1.tar.gz; pull the offline package from a URL instead of copying it to every machine. You need an http server hosting the package
--vip string              # virtual IP that proxies the masters (default "10.103.97.2"). As long as it does not conflict with your addresses, do not change it
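To illustrate --kubeadm-config, here is a minimal sketch of what such a file could look like for the v1.14 series (kubeadm v1beta1 API); the endpoint and pod subnet below are assumptions to replace with your own values:

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
controlPlaneEndpoint: "apiserver.cluster.local:6443"   # resolves to the vip on every node
networking:
  podSubnet: "100.64.0.0/10"
EOF
# then pass it to the installer: sealos init --kubeadm-config kubeadm-config.yaml <other flags as above>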
Clean up
sealos clean --master 192.168.0.2 \
    --master 192.168.0.3 \
    --master 192.168.0.4 \           # master address list
    --node 192.168.0.5 \             # node address list
    --user root \                    # ssh user name on the servers
    --passwd your-server-password
Add nodes
New nodes can be joined directly with kubeadm. On the new node, extract the offline package and run:
cd kube/shell && init.sh
echo "10.103.97.2 apiserver.cluster.local" >> /etc/hosts   # the vip
kubeadm join 10.103.97.2:6443 --token 9vr73a.a8uxyaju799qwdjv \
    --master 10.103.97.100:6443 \
    --master 10.103.97.101:6443 \
    --master 10.103.97.102:6443 \
    --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
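After the join completes, you can check on the new node that the vip hosts entry and the local IPVS rules are in place (illustrative commands; the addresses come from the example above):

grep apiserver.cluster.local /etc/hosts   # should print: 10.103.97.2 apiserver.cluster.local
ipvsadm -Ln                               # should show 10.103.97.2:6443 with the three masters as real servers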
Install dashboard and prometheus
The yaml configurations and images are included in the offline package; install them as needed.
cd /root/kube/conf
kubectl taint nodes --all node-role.kubernetes.io/master-   # remove the master taint if you want masters to be schedulable, depending on your needs
kubectl apply -f heapster/        # install heapster; without it the dashboard shows no monitoring data
kubectl apply -f heapster/rbac
kubectl apply -f dashboard        # install the dashboard
kubectl apply -f prometheus       # install monitoring
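If you want to confirm what got installed, a quick check of the add-on pods is enough (plain kubectl; the exact pod names and namespaces depend on the yaml in your offline package):

kubectl get pods --all-namespaces | grep -E 'dashboard|heapster|prometheus'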
Isn't that neat? So how is it done? You need to look at the following two things.
About Super kubeadm
We customized kubeadm and did two things:
An IPVS rule is added on each node, with the three masters as its backend real servers (a manual equivalent with ipvsadm is sketched after this list).
A static pod running lvscare is placed on each node to guard these IPVS rules. If an apiserver becomes unreachable, lvscare automatically removes the corresponding IPVS rule on the node, and adds it back once the master recovers.
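To make the first point concrete: the rules that end up on a node are the same kind you could create by hand with ipvsadm, as in this sketch using the master addresses from the install example (lvscare creates and health-checks them for you, so you never run this yourself):

ipvsadm -A -t 10.103.97.2:6443 -s rr                    # virtual server on the vip, round-robin scheduling
ipvsadm -a -t 10.103.97.2:6443 -r 192.168.0.2:6443 -m   # one real server per master apiserver, masquerade mode
ipvsadm -a -t 10.103.97.2:6443 -r 192.168.0.3:6443 -m
ipvsadm -a -t 10.103.97.2:6443 -r 192.168.0.4:6443 -m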
Access to the masters through local kernel load balancing on each node is implemented like this:
+----------+                    +---------------+   virtual server: 127.0.0.1:6443
| master0  |<-------------------| ipvs nodes    |   real servers:
+----------+                    +---------------+       10.103.97.200:6443
                                |                       10.103.97.201:6443
+----------+                    |                       10.103.97.202:6443
| master1  |<-------------------+
+----------+                    |
                                |
+----------+                    |
| master2  |<-------------------+
+----------+

The following is a small demo of lvscare itself: three local nginx containers listening on 127.0.0.1:8081, 8082 and 8083 stand in for the apiservers behind the vip 10.103.97.12:6443. With all three running, the IPVS rules and a curl of the vip look like this:

[root@iZj6c9fiza9orwscdhate4Z ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.103.97.12:6443 rr
  -> 127.0.0.1:8081               Masq    1      0          0
  -> 127.0.0.1:8082               Masq    1      0          0
  -> 127.0.0.1:8083               Masq    1      0          0
[root@iZj6c9fiza9orwscdhate4Z ~]# curl 10.103.97.12:6443
Stop one nginx and its rule disappears:
[root@iZj6c9fiza9orwscdhate4Z ~]# docker stop nginx1
nginx1
[root@iZj6c9fiza9orwscdhate4Z ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.103.97.12:6443 rr
  -> 127.0.0.1:8082               Masq    1      0          0
  -> 127.0.0.1:8083               Masq    1      0          0
Stop another one:
[root@iZj6c9fiza9orwscdhate4Z ~]# docker stop nginx2
nginx2
[root@iZj6c9fiza9orwscdhate4Z ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.103.97.12:6443 rr
  -> 127.0.0.1:8083               Masq    1      0          0
At this point the VIP is still reachable:
[root@iZj6c9fiza9orwscdhate4Z ~]# curl 10.103.97.12:6443
Stop all of them and the rules are cleared automatically; curl fails because no real server is available.
[root@iZj6c9fiza9orwscdhate4Z ~]# docker stop nginx3
nginx3
[root@iZj6c9fiza9orwscdhate4Z ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.103.97.12:6443 rr
[root@iZj6c9fiza9orwscdhate4Z ~]# curl 10.103.97.12:6443
curl: (7) Failed connect to 10.103.97.12:6443; Connection refused
Start the nginx containers again, and the rules are added back automatically.
[root@iZj6c9fiza9orwscdhate4Z ~]# docker start nginx1 nginx2 nginx3
nginx1
nginx2
nginx3
[root@iZj6c9fiza9orwscdhate4Z ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.103.97.12:6443 rr
  -> 127.0.0.1:8081               Masq    1      0          0
  -> 127.0.0.1:8082               Masq    1      0          0
  -> 127.0.0.1:8083               Masq    1      0          0
So in sealos, the apiservers play the role of the three nginx containers above, and lvscare performs the health checks on them. Of course, you can also use lvscare in other scenarios, for example to proxy your own TCP service.
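For reference, running lvscare by hand looks roughly like this; the subcommand and flag names are based on my reading of the LVScare README and should be treated as assumptions to check against your version, and the addresses are placeholders:

lvscare care --vip 10.103.97.12:6443 \
    --rs 192.168.0.2:6443 \
    --rs 192.168.0.3:6443 \
    --health-path /healthz --health-schem https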
After reading the above, do you have a better understanding of how the sealos installation of Kubernetes HA works? Thank you for your support.