This article describes how to deploy multiple fault-tolerant master nodes in Kubernetes. The content is fairly detailed; interested readers can use it for reference, and I hope it is helpful.
The Kubernetes master services consist mainly of etcd (data store), kube-controller-manager (controller), kube-scheduler (scheduler) and kube-apiserver (API endpoint). Deploying them across multiple nodes achieves fault tolerance.
1. Deployment of multiple master nodes
kube-controller-manager (controller) and kube-scheduler (scheduler) manage cluster workloads through the apiserver, and perform leader election by acquiring a lock on an Endpoint object in the apiserver. When the current leader instance stops working, another instance acquires the lock and becomes the new leader. The components are installed with --leader-elect=true by default, so multiple instances elect a leader among themselves automatically.
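For reference, a minimal sketch of how this flag typically appears in the kubeadm-generated static pod manifest (exact paths and fields vary by release):

    # /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
    spec:
      containers:
      - name: kube-controller-manager
        command:
        - kube-controller-manager
        - --kubeconfig=/etc/kubernetes/controller-manager.conf
        - --leader-elect=true    # lock an Endpoint in the apiserver; non-leaders stand by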
Note: the kubelet on each node automatically starts the pods defined in /etc/kubernetes/manifests/*.yaml and runs them as static pods.
First, deploy the first master node using kubeadm.
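A hedged example of the initialization command, reusing the highly available address 10.1.1.201 that appears later in this article (the pod network CIDR is an assumption and depends on your network add-on):

    # Initialize the first master node; subsequent masters will copy its config.
    kubeadm init --apiserver-advertise-address=10.1.1.201 \
                 --pod-network-cidr=10.244.0.0/16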
Second, install the secondary master nodes.
Deploy the secondary nodes with kubeadm, or first join each of them as a worker node with kubeadm join.
Then copy the files under /etc/kubernetes/manifests, together with the corresponding *.conf files, to the same directories on each secondary node (see the sketch after this list). The files include:
etcd.yaml (modified per node earlier; do not overwrite it)
kube-apiserver.yaml
kube-controller-manager.yaml
kube-scheduler.yaml
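A hedged sketch of the copy, assuming a secondary node reachable as master2 (hypothetical host name) and excluding etcd.yaml as noted above:

    # Copy the static pod manifests, leaving each node's own etcd.yaml in place.
    scp /etc/kubernetes/manifests/kube-apiserver.yaml \
        /etc/kubernetes/manifests/kube-controller-manager.yaml \
        /etc/kubernetes/manifests/kube-scheduler.yaml \
        master2:/etc/kubernetes/manifests/
    # Copy the kubeconfig files used by the control-plane components.
    scp /etc/kubernetes/*.conf master2:/etc/kubernetes/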
Finally, restart the kubelet service: systemctl restart kubelet.
At this point the pods above are visible in the Kubernetes Dashboard, but because they are static pods, operations such as deletion cannot be performed on them through the apiserver.
2. Load balancing for the apiserver
After setting up multiple master services as above, the kube-apiserver URL still points to the IP address of the first master node, so a single point of failure remains. To achieve multi-point fault tolerance there are several options; the principle is the same, only the implementation differs:
The first is an external load balancer.
The highly available IP assigned by the external load balancer serves as the apiserver's service address: all external access, as well as the server parameter in scheduler.conf and controller-manager.conf, points to this address; the load balancer then maps it to the real internal server IPs and distributes the access load.
Use the highly available IP as the service address on each of the master nodes above, e.g. --apiserver-advertise-address=10.1.1.201.
Reference: installing Kubernetes on a multi-NIC Ubuntu server.
Add the IP addresses of the secondary nodes to the load balancer.
Point the server parameters in scheduler.conf and controller-manager.conf on all nodes to the highly available IP, as sketched below.
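A hedged sketch of that edit, assuming the kubeadm defaults of /etc/kubernetes for the kubeconfig files and port 6443 for the apiserver, with the highly available IP 10.1.1.201 from the example above:

    # Rewrite the server: line in each kubeconfig to the highly available IP.
    sed -i 's#server: https://.*:6443#server: https://10.1.1.201:6443#' \
        /etc/kubernetes/scheduler.conf /etc/kubernetes/controller-manager.conf
    # Restart kubelet so the static pods pick up the new configuration.
    systemctl restart kubelet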
This method is relatively simple to deploy, but it relies on the load balancer provided by a cloud service provider.
If you install your own load-balancing device or software instead, you must make sure that it is itself highly available.
Second, virtual IP + load balancing.
A virtual IP is implemented with keepalived: when the master node holding it becomes unavailable, the IP automatically floats to another node, and the worker nodes are essentially unaffected. The K8s cluster is configured against the virtual IP, similar to the first scheme, but here fault tolerance for the master nodes is achieved with simple software alone.
The virtual IP (in effect an extra address bound to one node's real interface) is active on only a single node at a time, so the apiserver services on the other secondary nodes are on standby.
By adding HAProxy as a load balancer in front of the apiservers, and keeping keepalived for the multi-node virtual IP, the nodes become a mutual-backup setup that also supports load balancing.
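A hedged haproxy.cfg sketch, assuming hypothetical master IPs 10.1.1.11 and 10.1.1.12, and a front-end port 8443 chosen so it does not collide with the apiservers' own 6443 when HAProxy runs on the masters themselves:

    defaults
        mode    tcp              # pass TLS through untouched
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend k8s-api
        bind *:8443              # clients connect to the virtual IP on 8443
        default_backend k8s-masters

    backend k8s-masters
        balance roundrobin
        server master1 10.1.1.11:6443 check
        server master2 10.1.1.12:6443 check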
Run keepalived on each secondary node as well, configured with the same VRRP group and the same virtual IP, so that any node can take over the address.
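A minimal keepalived.conf sketch under the same assumptions (the interface name eth0 is hypothetical; every node shares the virtual_router_id and the virtual IP):

    vrrp_instance VI_1 {
        state BACKUP             # let priority decide who holds the VIP
        interface eth0
        virtual_router_id 51     # must match on every node in the group
        priority 100             # raise this on the preferred node
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass k8s-ha
        }
        virtual_ipaddress {
            10.1.1.201           # the highly available IP used above
        }
    }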
As before, point the server parameters in scheduler.conf and controller-manager.conf on all nodes to the highly available IP.
Note that the certificates installed by kubeadm are only valid for the original node's own addresses, so this model may require reissuing the apiserver certificate with the virtual IP included.
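A hedged sketch with a recent kubeadm (the phase subcommand and flags differ in older releases):

    # Remove the old apiserver certificate and reissue it with the virtual IP
    # added to its subject alternative names.
    rm /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key
    kubeadm init phase certs apiserver --apiserver-cert-extra-sans=10.1.1.201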
Third, independent masters (divide and conquer) + reverse proxy.
Each master node runs independently and shares data through etcd.
The server parameters in each node's scheduler.conf and controller-manager.conf point to its local apiserver.
Deploy nginx as a reverse proxy; external access is distributed across the apiservers through the proxy (see the sketch after these notes).
Each node is completely autonomous and uses a different authorization certificate, which the reverse proxy must accommodate.
The reverse proxy itself should be highly available, as in the first approach.
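A hedged nginx sketch using the stream module for TCP-level pass-through (hypothetical master IPs), so each apiserver keeps presenting its own certificate:

    # Minimal nginx.conf sketch
    events {}
    stream {
        upstream apiservers {
            server 10.1.1.11:6443;
            server 10.1.1.12:6443;
        }
        server {
            listen 6443;            # external clients connect here
            proxy_pass apiservers;  # TLS passes through to one apiserver
        }
    }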
3. High availability for kube-dns
kube-dns is not part of the master components; it runs on worker nodes and serves the cluster through a Service. In a default installation, however, only one kube-dns instance runs, so DNS inside the cluster becomes unavailable whenever that instance is upgraded or its node goes down, which in serious cases disrupts online services.
To avoid such failures, set the kube-dns replicas value to 2 or more and spread the instances across different nodes with anti-affinity (see the sketch below). This is easy to overlook; often it is only after an outage that people notice a single kube-dns instance was running.
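A hedged sketch of both steps, assuming the deployment lives in kube-system with the conventional label k8s-app: kube-dns (modern clusters ship CoreDNS instead, where the same idea applies):

    # Scale kube-dns to two replicas.
    kubectl -n kube-system scale deployment kube-dns --replicas=2

Fragment to merge into the Deployment's pod template so the replicas land on different nodes:

    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              k8s-app: kube-dns
          topologyKey: kubernetes.io/hostname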
So much for the fault-tolerant deployment of multiple master nodes in Kubernetes. I hope the content above is of some help; if you found the article useful, feel free to share it so more people can see it.