How to deploy highly available kubernetes clusters

2025-02-23 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article introduces how to deploy a highly available Kubernetes cluster. Many readers run into trouble with exactly these situations in practice, so follow along carefully and you should come away with something useful!

Highly available configuration of kube-apiserver

kube-apiserver itself is stateless and can be scaled horizontally, so with the help of external load-balancing software it is relatively easy to make highly available. There are many implementation schemes, but it is generally done with an external component such as LVS or HAProxy; our production environment uses LVS. The high availability of the apiserver can be divided into high availability outside the cluster and high availability inside the cluster. Outside the cluster, clients that call the K8s API directly (such as kubectl and kubelet) need to reach the apiserver through a VIP. The deployment of LVS and the configuration of the VIP are not covered here.
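Purely as an illustration of the external load-balancing approach, here is a minimal HAProxy sketch (our production environment uses LVS, not HAProxy; the listen port is an assumption, and the backend IPs are taken from the two apiserver instances, 10.0.2.15 and 10.0.2.16, that appear in the endpoints output later in this article):

```
# /etc/haproxy/haproxy.cfg -- hypothetical sketch, not a production config
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend apiservers

backend apiservers
    mode tcp
    balance roundrobin
    option tcp-check
    # Replace these with your own master IPs.
    server master1 10.0.2.15:6443 check
    server master2 10.0.2.16:6443 check
```

Clients would then use the load balancer's address (or a VIP in front of it) as the apiserver endpoint in their kubeconfig.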

High availability inside the cluster works differently. After a Kubernetes cluster is created, a Service named kubernetes is created by default for pods inside the cluster to reach the apiserver. In this environment the ClusterIP of that Service is 172.0.0.1. Like any Service object, it has a corresponding endpoints object that lists the apiserver instances behind it, and requests to the kubernetes Service are forwarded to the IPs in those endpoints. If the endpoints object contains no IPs, the apiserver cannot be reached from inside the cluster.

$ kubectl get svc kubernetes
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.0.0.1    <none>        443/TCP   21d

$ kubectl get endpoints kubernetes
NAME         ENDPOINTS                       AGE
kubernetes   10.0.2.15:6443,10.0.2.16:6443   21d

Before Kubernetes v1.9, making the kube-apiserver Service highly available required setting the --apiserver-count flag so that every master IP would be added to the endpoints of the kubernetes Service. v1.9 introduced --endpoint-reconciler-type to replace --apiserver-count, but it was an Alpha feature disabled by default, and it remained disabled by default in v1.10. From v1.11 on, --endpoint-reconciler-type is enabled by default with the value lease. --apiserver-count was removed in v1.13; it can still be used in v1.11 and v1.12, but only together with --endpoint-reconciler-type=master-count. In other words, from v1.11 onward no extra configuration is needed: every running apiserver instance is added to the corresponding endpoints by default.
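To make the flag concrete, the reconciler can be set explicitly on the apiserver command line. A hypothetical static-pod manifest fragment (the file path, container name, and instance count are assumptions, not from the original article):

```yaml
# fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (hypothetical)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --endpoint-reconciler-type=lease    # the default from v1.11 on
    # pre-v1.11 alternative, removed in v1.13:
    # - --apiserver-count=2
```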

Highly available configurations for kube-controller-manager and kube-scheduler

kube-controller-manager and kube-scheduler achieve high availability through leader election, which works by taking a lock on an endpoints object in the apiserver. To enable leader election, add the following parameters to each component's configuration:

--leader-elect=true
--leader-elect-lease-duration=15s
--leader-elect-renew-deadline=10s
--leader-elect-resource-lock=endpoints
--leader-elect-retry-period=2s
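To make the semantics of the three timing flags concrete, here is a small Python sketch (not Kubernetes code; the class and identities are invented for illustration) of the acquire/renew loop: the leader must renew the lock within the renew deadline, other candidates retry every retry period, and a lock whose last renewal is older than the lease duration is up for grabs:

```python
LEASE_DURATION = 15.0   # --leader-elect-lease-duration
RENEW_DEADLINE = 10.0   # --leader-elect-renew-deadline
RETRY_PERIOD = 2.0      # --leader-elect-retry-period


class Lock:
    """Stands in for the endpoints object that carries the leader record."""

    def __init__(self):
        self.holder = None       # corresponds to holderIdentity
        self.renew_time = 0.0    # last successful renewal

    def try_acquire_or_renew(self, identity, now):
        # The lock can be taken if it is free, expired, or already ours.
        if (self.holder is None or self.holder == identity
                or now - self.renew_time > LEASE_DURATION):
            self.holder = identity
            self.renew_time = now
            return True
        return False


lock = Lock()
assert lock.try_acquire_or_renew("ctrl-mgr-a", now=0.0)       # a becomes leader
assert not lock.try_acquire_or_renew("ctrl-mgr-b", now=2.0)   # b retries, fails
assert lock.try_acquire_or_renew("ctrl-mgr-a", now=8.0)       # a renews in time
# a stops renewing; once the lease expires, b can take over
assert lock.try_acquire_or_renew("ctrl-mgr-b", now=8.0 + LEASE_DURATION + 1)
print(lock.holder)  # -> ctrl-mgr-b
```

The real implementation also requires the renew deadline to be shorter than the lease duration, so a healthy leader always renews before its lease can expire.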

The current leader of each component is recorded in the holderIdentity field of the leader-election annotation on the corresponding endpoints object. Use the following commands to view the current leader:

$ kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml

$ kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml

For details on the high-availability implementation of kube-controller-manager and kube-scheduler, please refer to a previous article: high-availability implementation of components in kubernetes.

Highly available configuration of etcd

etcd is a distributed, stateful service and is inherently a high-availability architecture. To prevent split-brain, an etcd cluster is generally given an odd number of members (3 or 5 nodes). If the k8s cluster is built on physical machines and is expected to grow large, etcd should be deployed as an independent 3- or 5-node cluster. To automate etcd operations and maintenance, consider using etcd-operator to deploy the etcd cluster inside k8s.
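For reference, a static bootstrap of such an independent three-node etcd cluster can be expressed through the --initial-cluster flags. The member names and IPs below are invented for illustration; this shows the flags for the first member only:

```
# flags for member etcd0 (hypothetical hosts/IPs; repeat per member)
etcd --name etcd0 \
  --initial-advertise-peer-urls http://10.0.2.21:2380 \
  --listen-peer-urls http://10.0.2.21:2380 \
  --listen-client-urls http://10.0.2.21:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://10.0.2.21:2379 \
  --initial-cluster etcd0=http://10.0.2.21:2380,etcd1=http://10.0.2.22:2380,etcd2=http://10.0.2.23:2380 \
  --initial-cluster-state new
```

In production these client and peer URLs would use TLS (https) rather than plain http.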

(The original article includes an architectural diagram of the highly available deployment of the Kubernetes components here.)

Summary

This article mainly introduces how to configure a highly available Kubernetes cluster. Newer Kubernetes versions move ever closer to a full TLS + RBAC configuration. If a cluster still uses the insecure port 8080, the kube-controller-manager and kube-scheduler on each master node connect to the local apiserver through port 8080; if that apiserver dies, kube-controller-manager and kube-scheduler die with it. As the core component of the cluster, the apiserver must be deployed highly available; once it is, making the other components highly available is relatively easy.

That's all for "how to deploy highly available kubernetes clusters". Thank you for reading. If you want to know more about the industry, you can follow the website, the editor will output more high-quality practical articles for you!
