
Multi-Master Fault-Tolerant Configuration Techniques in a Kubernetes Cluster

2025-01-16 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 Report--

This article shares some techniques for configuring multi-Master fault tolerance in a Kubernetes cluster. Most readers may not be familiar with this topic, so it is shared here for reference; I hope you learn a lot from it.

1. kube-apiserver

There are two modifications:

Modify its primary service address to point to the virtual IP shared with the master node (for setup, see "Keepalived Quick Use (Ubuntu 18.04)").

Modify the etcd service address and certificate file paths.

Start editing:

```shell
sudo nano /etc/kubernetes/manifests/kube-apiserver.yaml
```

The final kube-apiserver.yaml file is as follows:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=10.1.1.199
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    # - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    # - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    # - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    # - --etcd-servers=https://127.0.0.1:2379
    - --etcd-cafile=/etc/kubernetes/pki/etcd-certs/ca.pem
    - --etcd-certfile=/etc/kubernetes/pki/etcd-certs/client.pem
    - --etcd-keyfile=/etc/kubernetes/pki/etcd-certs/client-key.pem
    - --etcd-servers=https://10.1.1.201:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.13.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.1.1.199
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
```

Note

The main changes here are --advertise-address=10.1.1.199 and --etcd-servers=https://10.1.1.201:2379.

The two IP addresses are different: .199 is the virtual IP, and .201 is the etcd service address of the current node.
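After editing the manifest, it is worth confirming that exactly these two flags carry the intended addresses. The following is a minimal sketch using a sample snippet written to /tmp; on a real node you would point the grep at /etc/kubernetes/manifests/kube-apiserver.yaml instead (file names and IPs are the values used in this article):

```shell
# Write a sample fragment of the manifest (stand-in for the real file).
cat > /tmp/kube-apiserver-sample.yaml <<'EOF'
    - --advertise-address=10.1.1.199
    - --etcd-servers=https://10.1.1.201:2379
EOF

# Extract the two key flags; both lines should appear in the output.
grep -E -- '--(advertise-address|etcd-servers)=' /tmp/kube-apiserver-sample.yaml
```

If either flag is missing or still carries the old address, kubelet will restart the apiserver with the wrong endpoints.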

2. kube-controller-manager, kube-scheduler

The kube-controller-manager and kube-scheduler instances access the apiserver interface to obtain cluster state and carry out the cluster's internal management and maintenance. Both support multiple concurrently running instances, using a lock held through the apiserver (leader election) to select the active controller.

kube-controller-manager is mainly responsible for keeping node states consistent; its configuration consists of the /etc/kubernetes/manifests/kube-controller-manager.yaml and /etc/kubernetes/controller-manager.conf files.

kube-scheduler is mainly responsible for scheduling pod instances; its configuration consists of the /etc/kubernetes/manifests/kube-scheduler.yaml and /etc/kubernetes/scheduler.conf files.

In a default kubeadm installation, --leader-elect is already set to true for kube-controller-manager and kube-scheduler, so multi-instance operation is supported. You only need to copy their configuration files to /etc/kubernetes on the secondary node.
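For orientation, the leader-election flag mentioned above typically appears as follows in a kubeadm-generated manifest. This is an illustrative excerpt, not a complete file; check your own /etc/kubernetes/manifests/kube-controller-manager.yaml for the exact contents:

```yaml
# Excerpt (sketch) from /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --leader-elect=true   # lets concurrent instances elect a single active controller
```

kube-scheduler's manifest carries the same flag, which is why both components can simply be copied to the secondary node and run side by side.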

The specific operations are as follows:

```shell
# Copy the controller-manager and scheduler configuration files to the local node.
# Reference: https://my.oschina.net/u/2306127/blog/write/2991361
# Log in to the target node first, then execute the following commands.
echo "Clone controller-manager configuration file."
scp root@10.1.1.201:/etc/kubernetes/controller-manager.conf /etc/kubernetes/
scp root@10.1.1.201:/etc/kubernetes/manifests/kube-controller-manager.yaml /etc/kubernetes/manifests/
echo "Clone scheduler configuration file."
scp root@10.1.1.201:/etc/kubernetes/scheduler.conf /etc/kubernetes/
scp root@10.1.1.201:/etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/
```

Restart kubelet, and the controller-manager and scheduler instances will be restarted automatically.

3. admin.conf

If the primary node dies, you need to be able to use kubectl on the secondary node. First copy admin.conf to the secondary node, then configure it for the local account.

The specific operations are as follows:

```shell
# Copy admin.conf from the primary node.
scp root@10.1.1.201:/etc/kubernetes/admin.conf /etc/kubernetes/
# Create the local account's kubectl configuration directory.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Since admin.conf reaches the apiserver through the virtual IP, it needs no modification.
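To convince yourself of this, you can inspect the server field of the kubeconfig. The sketch below parses a sample fragment written to /tmp; on a node you would run the same awk line against /etc/kubernetes/admin.conf (the 10.1.1.199 address is this article's virtual IP):

```shell
# Sample kubeconfig fragment (stand-in for the real admin.conf).
cat > /tmp/admin-sample.conf <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://10.1.1.199:6443
  name: kubernetes
EOF

# Print the apiserver URL; it should reference the virtual IP, not a node IP.
awk '/server:/ {print $2}' /tmp/admin-sample.conf
# → https://10.1.1.199:6443
```

As long as the server URL carries the virtual IP, a failover handled by Keepalived is transparent to kubectl.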

4. kubectl operations

Now the former secondary node can perform all the operations available on the Master (in fact, every node so configured can). Give it a try:

```shell
# Kubernetes version.
kubectl version
# Cluster information and service addresses.
kubectl cluster-info
# List of cluster nodes.
kubectl get node -o wide
# All pods running in the cluster.
kubectl get pod --all-namespaces -o wide
```

Check that the output on the newly promoted secondary node is consistent with that of the primary node. If it is inconsistent:

Check the consistency of the etcd cluster (for details, see "Practical Skills of etcd Cluster Expansion in Kubernetes 1.13.1").

Check whether the kube-controller-manager and kube-scheduler container instances are running properly and whether their parameters are configured correctly.

That concludes this article on multi-Master fault-tolerant configuration techniques for a Kubernetes cluster. Thank you for reading! I hope the content shared here is helpful; to learn more, feel free to follow the industry information channel.
