Kubernetes etcd nodes and how to expand them


This article explains the etcd node of Kubernetes and how to expand it to multiple instances. The content is kept simple and step by step; follow along to work through the expansion.

A Kubernetes cluster installed with kubeadm has only one etcd instance by default, which is a single point of failure. Ways to improve the availability of the cluster include: 1. backup (see "Kubernetes exploration - etcd status data and its backup"); 2. expansion of etcd nodes and instances; 3. multiple apiserver instances behind a load balancer. Here we experiment mainly with expanding the etcd nodes and instances.

I. Expansion of etcd: main ideas

Etcd is a standalone service. When used in Kubernetes, its configuration parameters and data directory are mapped to host directories, and it runs with hostNetwork (the local host network). /etc/kubernetes/manifests/etcd.yaml is the startup parameter file, /etc/kubernetes/pki/etcd holds the certificates used for https, and /var/lib/etcd holds the node's etcd data files.
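To see these pieces on an existing kubeadm master, a quick look (paths as above):

cat /etc/kubernetes/manifests/etcd.yaml   # static pod file with the startup parameters
ls /etc/kubernetes/pki/etcd               # certificates used for https
ls /var/lib/etcd                          # this node's etcd data files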

For a single-master Kubernetes cluster installed with kubeadm, there is only one running etcd instance. We want to expand it to multiple instances to reduce the risk of a single point of failure. The steps to expand etcd in Kubernetes are as follows:

Install kubeadm/kubectl/kubelet on all nodes, setting each node up as a separate master node.

Create the certificates for the etcd cluster and copy them to each node.

Modify the etcd startup configuration file on each node and start the etcd instance. There are several ways to do this (the result is the same, but the management method differs):

Deploy through kubectl and let Kubernetes control the startup, using a nodeSelector to pin the pod to a node.

Start through the kubelet service, which the operating system launches via systemd. This is the standard K8s process.

Let the container restart itself via Docker's --restart parameter, managed by the container service.

Start etcd directly as a host service, managed by neither Docker nor K8s.

Point kube-apiserver.yaml on every node to the local etcd service instance.

Etcd is distributed storage; the data of all nodes is synchronized automatically, so access through any node yields the same result.
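As a quick check of that property, a sketch once the cluster built below is running (the TLS flags reuse the client certificate created in step 2):

ETCDCTL_API=3 etcdctl --endpoints=https://10.1.1.201:2379 \
  --cacert=/etc/kubernetes/pki/etcd-certs/ca.pem \
  --cert=/etc/kubernetes/pki/etcd-certs/client.pem \
  --key=/etc/kubernetes/pki/etcd-certs/client-key.pem \
  put demo-key demo-value
ETCDCTL_API=3 etcdctl --endpoints=https://10.1.1.202:2379 \
  --cacert=/etc/kubernetes/pki/etcd-certs/ca.pem \
  --cert=/etc/kubernetes/pki/etcd-certs/client.pem \
  --key=/etc/kubernetes/pki/etcd-certs/client-key.pem \
  get demo-key   # returns demo-value, although it was written via 10.1.1.201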

II. The etcd expansion experiment, step 1: install multiple nodes

Prepare the nodes on which to install etcd. I use Ubuntu 18.04 LTS, with Docker CE 18.06 and Kubernetes 1.12.3.
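A minimal install sketch for Ubuntu 18.04, assuming the Kubernetes apt repository is already configured; the version pin matches the 1.12.3 used in this article:

sudo apt-get update
sudo apt-get install -y kubelet=1.12.3-00 kubeadm=1.12.3-00 kubectl=1.12.3-00
sudo apt-mark hold kubelet kubeadm kubectl   # keep apt from upgrading them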

The three nodes I have here are:

podc01, 10.1.1.201

podc02, 10.1.1.202

podc03, 10.1.1.203

The container images used by k8s need to be pulled to each node in advance (a pull sketch follows the list below). References:

Kubernetes 1.12.3 Rapid upgrade

Kubernetes version locked to 1.12.3

Multi-network card Ubuntu server installs Kubernetes

Ubuntu 18.04 set up multi-Nic multi-port aggregation

Quickly set up Kubernetes clusters, starting from scratch
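One convenient way to pre-pull is to let kubeadm fetch its own image list; a sketch (assumes the nodes can reach k8s.gcr.io, directly or through a mirror):

kubeadm config images pull --kubernetes-version v1.12.3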

Step 2: create the etcd certificates

I first tried copying the /etc/kubernetes/pki and /etc/kubernetes/manifests directories from the primary node to all secondary nodes, but after startup the instances could not be accessed properly, indicating a CA certificate problem. In the end, I created my own certificates and wrote the yaml files from scratch.

To create certificates with cfssl, download the template files and modify the definition files: the ca CSR, the ca-config configuration, the ca-key private key, and the server/peer/client certificate templates. Modify the information according to your own environment.

This finally generates the cert-file (certificate), key-file (private key) and trusted-ca-file (certificate authority) files; since we self-sign here, we create our own certificate authority file.

These three files are configured when the etcd instance starts (note: the parameter names differ somewhat between API v2 and v3) and need to be placed in the appropriate directory on each node and mapped into the etcd container as a volume.

The same parameters must be specified when etcdctl accesses the service as a client, and the peer etcd instances also use them to access each other, form the cluster, and synchronize data.

1. Prepare the cfssl certificate tool

mkdir ~/cfssl && cd ~/cfssl
mkdir bin && cd bin
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O cfssljson
chmod +x {cfssl,cfssljson}
export PATH=$PATH:~/cfssl/bin

Optional: for convenience, add the PATH line to the ~/.profile file, or copy the binaries to the /usr/local/bin directory.

2. Create the certificate configuration files

Create a directory for the certificate configuration files:

mkdir -p ~/cfssl/etcd-certs && cd ~/cfssl/etcd-certs

Generate the certificate configuration files and put them in the ~/cfssl/etcd-certs directory. The file templates are as follows:

# ca-config.json
{
  "signing": {
    "default": {
      "expiry": "43800h"
    },
    "profiles": {
      "server": {
        "expiry": "43800h",
        "usages": ["signing", "key encipherment", "server auth"]
      },
      "client": {
        "expiry": "43800h",
        "usages": ["signing", "key encipherment", "client auth"]
      },
      "peer": {
        "expiry": "43800h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}

# ca-csr.json
{
  "CN": "My own CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "CA",
      "O": "My Company Name",
      "ST": "San Francisco",
      "OU": "Org Unit 1",
      "OU": "Org Unit 2"
    }
  ]
}

# server.json
{
  "CN": "etcd0",
  "hosts": [
    "127.0.0.1",
    "0.0.0.0",
    "10.1.1.201",
    "10.1.1.202",
    "10.1.1.203"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "US",
      "L": "CA",
      "ST": "San Francisco"
    }
  ]
}

# peer1.json -- fill in this machine's IP
{
  "CN": "etcd0",
  "hosts": [
    "10.1.1.201"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "US",
      "L": "CA",
      "ST": "San Francisco"
    }
  ]
}

# client.json
{
  "CN": "client",
  "hosts": [""],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "US",
      "L": "CA",
      "ST": "San Francisco"
    }
  ]
}

3. Create the etcd cluster certificates

Do the following:

cd ~/cfssl/etcd-certs
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server server.json | cfssljson -bare server
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer peer1.json | cfssljson -bare peer1
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client

View the resulting certificate file:

ls -l ~/cfssl/etcd-certs

The generated files include ca.pem, ca-key.pem, server.pem, server-key.pem, peer1.pem, peer1-key.pem, client.pem and client-key.pem, plus the corresponding .csr request files.

Step 3: start multiple etcd instances

Note:

Because the original etcd database has to be deleted during expansion, the master data of the kubernetes cluster would be lost; it is therefore recommended to back up with the etcdctl snapshot command before expanding, or to build another etcd node to carry the original data over.
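A backup sketch, assuming the certificate locations of the original kubeadm-managed etcd (the healthcheck-client pair that kubeadm generates):

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  snapshot save /root/etcd-snapshot.db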

Be sure to empty the /var/lib/etcd directory before starting a new etcd instance; otherwise some of the parameters you set will not take effect and the old state will be kept.

Note that the following etcd parameters take effect only at first startup (initialization):

--initial-advertise-peer-urls=https://10.1.1.202:2380

--initial-cluster=podc02=https://10.1.1.202:2380,podc03=https://10.1.1.203:2380

--initial-cluster-token=etcd-cluster

--initial-cluster-state=new

If you are adding a node to an existing cluster, first run member add xxx on one of the original nodes, set --initial-cluster-state=existing, and then start the service.
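In outline (the etcdv3 alias and the full workflow appear in sections 3 and 4 below):

# on an existing member: register the new node first
etcdv3 member add etcd1 --peer-urls="https://10.1.1.202:2380"
# on the new node: set --initial-cluster-state=existing in etcd.yaml, then
sudo systemctl restart kubelet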

1. Upload certificate files

Copy the ~/cfssl/etcd-certs directory to /etc/kubernetes/pki/etcd-certs on each node, uploading with scp or sftp.
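A copy sketch, assuming root ssh access to the other two nodes:

scp -r ~/cfssl/etcd-certs root@10.1.1.202:/etc/kubernetes/pki/
scp -r ~/cfssl/etcd-certs root@10.1.1.203:/etc/kubernetes/pki/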

2. Edit the startup file

Edit /etc/kubernetes/manifests/etcd.yaml, the static pod file from which kubelet starts the etcd instance.

# /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.1.1.201:2379
    - --cert-file=/etc/kubernetes/pki/etcd-certs/server.pem
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.1.1.201:2380
    - --initial-cluster=etcd0=https://10.1.1.201:2380
    - --key-file=/etc/kubernetes/pki/etcd-certs/server-key.pem
    - --listen-client-urls=https://10.1.1.201:2379
    - --listen-peer-urls=https://10.1.1.201:2380
    - --name=etcd0      # must match the name used in --initial-cluster
    - --peer-cert-file=/etc/kubernetes/pki/etcd-certs/peer1.pem
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd-certs/peer1-key.pem
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd-certs/ca.pem
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd-certs/ca.pem
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    # livenessProbe:
    #   exec:
    #     command:
    #     - /bin/sh
    #     - -ec
    #     - ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.1.201]:2379
    #       --cacert=/etc/kubernetes/pki/etcd-certs/ca.pem
    #       --cert=/etc/kubernetes/pki/etcd-certs/client.pem
    #       --key=/etc/kubernetes/pki/etcd-certs/client-key.pem
    #       get foo
    #   failureThreshold: 8
    #   initialDelaySeconds: 15
    #   timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd-certs
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd-certs
      type: DirectoryOrCreate
    name: etcd-certs
status: {}

Following the model above, modify the etcd startup parameters in /etc/kubernetes/manifests/etcd.yaml on each secondary node.

Note: the IP address has to be changed in many places; missing one will cause errors.

Restart the kubelet service:

sudo systemctl restart kubelet

Check the etcd service.

Connect to an instance with etcdctl and run etcdctl member list.
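Spelled out with the TLS flags these certificates require (a sketch):

ETCDCTL_API=3 etcdctl --endpoints=https://10.1.1.201:2379 \
  --cacert=/etc/kubernetes/pki/etcd-certs/ca.pem \
  --cert=/etc/kubernetes/pki/etcd-certs/client.pem \
  --key=/etc/kubernetes/pki/etcd-certs/client-key.pem \
  member list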

Finally, the etcd instances on the nodes are linked into one cluster.

3. Verify the running status

Go into the etcd container and run the following.
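One way into the container is through kubectl; a sketch, assuming kubeadm's usual etcd-<nodename> static pod naming:

kubectl -n kube-system exec -it etcd-podc01 -- sh

Then, inside the container: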

alias etcdv3="ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.1.201]:2379 --cacert=/etc/kubernetes/pki/etcd-certs/ca.pem --cert=/etc/kubernetes/pki/etcd-certs/client.pem --key=/etc/kubernetes/pki/etcd-certs/client-key.pem"
etcdv3 member add etcd1 --peer-urls="https://10.1.1.202:2380"

4. Add an etcd node

Copy the certificates from the first node (etcd0, 10.1.1.201) to the second node (etcd1, 10.1.1.202), copy peer1.json to peer2.json, and modify peer2.json:

# peer2.json
{
  "CN": "etcd1",
  "hosts": [
    "10.1.1.202"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "US",
      "L": "CA",
      "ST": "San Francisco"
    }
  ]
}

Regenerate, producing the peer2 certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer peer2.json | cfssljson -bare peer2

Start etcd on the second node with the following manifest:

# etcd.yaml on the second node (10.1.1.202)
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.1.1.202:2379
    - --cert-file=/etc/kubernetes/pki/etcd-certs/server.pem
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.1.1.202:2380
    - --initial-cluster=etcd0=https://10.1.1.201:2380,etcd1=https://10.1.1.202:2380
    - --key-file=/etc/kubernetes/pki/etcd-certs/server-key.pem
    - --listen-client-urls=https://10.1.1.202:2379
    - --listen-peer-urls=https://10.1.1.202:2380
    - --name=etcd1      # must match the name given to "member add" above
    - --peer-cert-file=/etc/kubernetes/pki/etcd-certs/peer2.pem
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd-certs/peer2-key.pem
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd-certs/ca.pem
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd-certs/ca.pem
    - --initial-cluster-state=existing   # do not put double quotation marks around the value
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    # livenessProbe: (same commented-out probe as on the first node, with 10.1.1.202)
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd-certs
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd-certs
      type: DirectoryOrCreate
    name: etcd-certs
status: {}

Go to the etcd container and execute:

alias etcdv3="ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.1.201]:2379 --cacert=/etc/kubernetes/pki/etcd-certs/ca.pem --cert=/etc/kubernetes/pki/etcd-certs/client.pem --key=/etc/kubernetes/pki/etcd-certs/client-key.pem"
etcdv3 member add etcd2 --peer-urls="https://10.1.1.203:2380"

Follow the same steps as above to add the third node (etcd2, 10.1.1.203).

5. Etcd cluster health check

etcdctl --endpoints=https://[10.1.1.201]:2379 \
  --ca-file=/etc/kubernetes/pki/etcd-certs/ca.pem \
  --cert-file=/etc/kubernetes/pki/etcd-certs/client.pem \
  --key-file=/etc/kubernetes/pki/etcd-certs/client-key.pem \
  cluster-health

member 5856099674401300 is healthy: got healthy result from https://10.1.1.201:2379
member df99f445ac908d15 is healthy: got healthy result from https://10.1.1.202:2379
cluster is healthy

Step 4: point the apiserver at the new etcd cluster

Update the etcd flags of kube-apiserver so it trusts the new certificates:

--etcd-cafile=/etc/kubernetes/pki/etcd-certs/ca.pem
--etcd-certfile=/etc/kubernetes/pki/etcd-certs/client.pem
--etcd-keyfile=/etc/kubernetes/pki/etcd-certs/client-key.pem
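These flags live in the kube-apiserver static pod manifest; a sketch of the relevant lines, with --etcd-servers pointed at the node-local instance as described in the main ideas above:

# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
- --etcd-servers=https://10.1.1.201:2379
- --etcd-cafile=/etc/kubernetes/pki/etcd-certs/ca.pem
- --etcd-certfile=/etc/kubernetes/pki/etcd-certs/client.pem
- --etcd-keyfile=/etc/kubernetes/pki/etcd-certs/client-key.pem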

At this point, etcd has been expanded into a multi-node distributed cluster, and Kubernetes on each node can access it.

Note:

The above process is suitable for a newly created k8s cluster.

If you already have a multi-node kubeadm cluster, you can first create an etcd cluster on node2/node3, synchronize node1's data to it, and then join node1 to the cluster so that the original data is retained.

Reference: etcd data viewing and migration for Kubernetes

The worker nodes deployed above can connect to only one apiserver; the apiservers on the other secondary nodes are running but cannot be reached by the worker nodes.

The next step is to make multiple master nodes fault-tolerant, so that access can be shifted to another node when the primary master fails.
