Practice of expanding nodes in a binary-deployed K8s cluster in a production environment



Since our project's microservices are deployed and maintained in a K8s cluster, we eventually need to scale out by adding nodes. When expanding, you must make sure the container network of the entire cluster remains interoperable; that is a critical step. What follows is how I do it based on my own experience, for reference only. My cluster was deployed from binaries, which makes node expansion quite convenient. It comes down to two steps: first, copy the configuration from an existing node to the new nodes; second, open up the container network to the new nodes. Here I add two nodes at once.

Step one:

First, copy the kubelet and kube-proxy binaries from the master node to the new nodes, along with the systemd unit files, so that after expansion the services can be started and managed with systemd.

Create the directory layout on the new nodes, then copy the binaries from the master:

[root@k8s-node3 ~]# mkdir -p /opt/kubernetes/{bin,ssl,cfg}
[root@k8s-node4 ~]# mkdir -p /opt/kubernetes/{bin,ssl,cfg}
[root@k8s-master1 ~]# scp /data/k8s/soft/kubernetes/server/bin/{kubelet,kube-proxy} root@192.168.30.25:/opt/kubernetes/bin/
[root@k8s-master1 ~]# scp /data/k8s/soft/kubernetes/server/bin/{kubelet,kube-proxy} root@192.168.30.26:/opt/kubernetes/bin/

Copy the components under /opt/kubernetes on the original node1 to the new nodes, along with the unit files:

[root@k8s-node1 ~]# scp -r /opt/kubernetes/ root@192.168.30.25:/opt
[root@k8s-node1 ~]# scp -r /opt/kubernetes/ root@192.168.30.26:/opt
[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.30.25:/usr/lib/systemd/system
[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.30.26:/usr/lib/systemd/system
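Before moving on, it is worth a quick sanity check that the binaries actually landed on the new nodes and are executable (a minimal check I am adding here, not part of the original transcript):

[root@k8s-node3 ~]# ls -l /opt/kubernetes/bin/kubelet /opt/kubernetes/bin/kube-proxy
[root@k8s-node3 ~]# chmod +x /opt/kubernetes/bin/{kubelet,kube-proxy}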

Now operate on node3.

Delete the certificates that came along with the copied files; they belong to node1 and must be regenerated for the new nodes.

[root@k8s-node3 ~]# cd /opt/kubernetes/ssl/
[root@k8s-node3 ssl]# ls
kubelet-client-2019-11-07-14-37-36.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key
[root@k8s-node3 ssl]# rm -rf *

Delete them on node4 as well:

[root@k8s-node4 ~]# cd /opt/kubernetes/ssl/
[root@k8s-node4 ssl]# ls
kubelet-client-2019-11-07-14-37-36.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key
[root@k8s-node4 ssl]# rm -rf *
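Deleting the certificates is safe because the kubelet will re-bootstrap against the apiserver using bootstrap.kubeconfig and have fresh certificates issued. If you want to verify that file points at your master, a quick check (the 192.168.30.21:6443 apiserver address is my assumption for this cluster, not from the original):

[root@k8s-node3 ~]# grep server: /opt/kubernetes/cfg/bootstrap.kubeconfig
    server: https://192.168.30.21:6443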

Modify the IP addresses: in the configuration files, change the old node's IP to the new node's own address. Grepping for the old address shows every place it appears:

[root@k8s-node3 cfg]# grep 23 *
kubelet:--hostname-override=192.168.30.23 \
kubelet.config:address: 192.168.30.23
kube-proxy:--hostname-override=192.168.30.23 \
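If you would rather not edit each file by hand, a sed one-liner can swap the old address for the new one in all three files (a sketch using the addresses from this walkthrough: 192.168.30.23 is node1's address, 192.168.30.25 the new node's):

[root@k8s-node3 cfg]# sed -i 's/192.168.30.23/192.168.30.25/g' kubelet kubelet.config kube-proxy
[root@k8s-node3 cfg]# grep 30.25 *    # confirm every occurrence was rewritten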

Do the same when expanding the fourth node (node4), using its own IP.

When expanding, remember that the new nodes need a working Docker environment; docker-ce must be installed before continuing.
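A minimal install sketch, assuming CentOS 7 hosts (the original does not show this step; adjust the repo commands for your distribution):

[root@k8s-node3 ~]# yum install -y yum-utils
[root@k8s-node3 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@k8s-node3 ~]# yum install -y docker-ce
[root@k8s-node3 ~]# systemctl enable --now docker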

[root@k8s-node3 ~]# systemctl restart docker
[root@k8s-node3 ~]# docker -v
Docker version 19.03.4, build 9013bf583a
[root@k8s-node4 ~]# systemctl restart docker
[root@k8s-node4 ~]# docker -v
Docker version 19.03.4, build 9013bf583a

In addition, the new nodes need the etcd files (flanneld will authenticate to etcd with the certificates under /opt/etcd), so copy that directory over as well.

[root@k8s-node1 ~]# scp -r /opt/etcd/ root@192.168.30.25:/opt
[root@k8s-node1 ~]# scp -r /opt/etcd/ root@192.168.30.26:/opt

With every occurrence changed to the .25 host's own IP, start the services:

[root@k8s-node3 cfg]# systemctl restart kubelet
[root@k8s-node3 cfg]# systemctl restart kube-proxy.service
[root@k8s-node3 cfg]# ps -ef | grep kube
root  86738      1   6 21:27 ?      00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/log --v=4 --hostname-override=192.168.30.25 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root  86780      1  35 21:28 ?      00:00:02 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.30.25 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root  86923  66523   0 21:28 pts/1  00:00:00 grep --color=auto kube
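The transcript only restarts the services; to have them come back after a reboot you will also want to enable them (my addition, standard systemd usage):

[root@k8s-node3 cfg]# systemctl enable kubelet kube-proxy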

On the master, check that the new nodes have requested to join:

[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo   90s   kubelet-bootstrap   Pending
node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk   31m   kubelet-bootstrap   Approved,Issued

Issue a certificate

[root@k8s-master1 ~]# kubectl certificate approve node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo
certificatesigningrequest.certificates.k8s.io/node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo approved
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo   3m18s   kubelet-bootstrap   Approved,Issued
node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk   33m     kubelet-bootstrap   Approved,Issued
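If several nodes join at once, approving each CSR by name gets tedious; a one-liner that approves everything currently listed also works (my addition; use with care in production, since it approves indiscriminately):

[root@k8s-master1 ~]# kubectl certificate approve $(kubectl get csr -o name)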

View the node status:

[root@k8s-master1 ~]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.30.23   Ready    <none>   25m   v1.15.1
192.168.30.24   Ready    <none>   51s   v1.15.1
192.168.30.25   Ready    <none>   25m   v1.15.1
192.168.30.26   Ready    <none>   51s   v1.15.1

Step two:

Now open up network communication between containers across nodes. Here I use flannel to manage the container network.

The Docker environment was prepared earlier, but flanneld still has to allocate each new node a subnet, and Docker must be configured to use that subnet.
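Flannel keeps its network definition in etcd, so you can confirm the cluster-wide subnet pool before adding nodes. A hedged example using the etcdctl v2 API and the certificate paths this style of binary deployment usually keeps under /opt/etcd/ssl (both are my assumptions):

[root@k8s-node1 ~]# /opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.30.21:2379" \
  get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"} }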

Deploy flannel to the newly joined nodes by copying the already-deployed files over:

[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{flanneld,docker}.service root@192.168.30.25:/usr/lib/systemd/system
[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{flanneld,docker}.service root@192.168.30.26:/usr/lib/systemd/system
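After copying unit files onto a node, systemd has to re-read them before they can be started (standard step, not shown in the original transcript):

[root@k8s-node3 ~]# systemctl daemon-reload
[root@k8s-node4 ~]# systemctl daemon-reload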

On node1, run flannel.sh, passing it the complete list of etcd endpoints:

[root@k8s-node1 ~]# ./flannel.sh https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379,https://192.168.30.25:2379,https://192.168.30.26:2379
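The flannel.sh script itself is not reproduced in this article. A typical version from this style of deployment simply writes the flanneld options file using the etcd endpoints passed as the first argument, then restarts the service; a hypothetical reconstruction:

#!/bin/bash
# flannel.sh <etcd-endpoints> -- hypothetical reconstruction; adjust paths to your deployment
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld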

Then copy the generated flanneld configuration file to the new nodes:

[root@k8s-node1 ~]# cd /opt/kubernetes/cfg/
[root@k8s-node1 cfg]# ls
bootstrap.kubeconfig  flanneld  kubelet  kubelet.config  kubelet.kubeconfig  kube-proxy  kube-proxy.kubeconfig
[root@k8s-node1 cfg]# scp flanneld root@192.168.30.25:/opt/kubernetes/cfg/
[root@k8s-node1 cfg]# scp flanneld root@192.168.30.26:/opt/kubernetes/cfg/

Start flanneld on each new node and restart docker.

Then check that flannel.1 and docker0 are on the same subnet:

[root@k8s-node3 ~]# ip a
5: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:97:f5:6c:cd brd ff:ff:ff:ff:ff:ff
    inet 172.17.25.1/24 brd 172.17.25.255 scope global docker0
       valid_lft forever preferred_lft forever
6: flannel.1: mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether b2:1a:97:5c:61:1f brd ff:ff:ff:ff:ff:ff
    inet 172.17.25.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever

[root@k8s-node4 ~]# systemctl start flanneld
[root@k8s-node4 ~]# systemctl restart docker
[root@k8s-node4 ~]# ip a
5: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:3f:3c:a8:62 brd ff:ff:ff:ff:ff:ff
    inet 172.17.77.1/24 brd 172.17.77.255 scope global docker0
       valid_lft forever preferred_lft forever
6: flannel.1: mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 96:1c:bc:ec:05:d6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.77.0/32 scope global flannel.1
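docker0 only lands on flannel's subnet because docker.service reads the subnet file flanneld generates. If the two interfaces ever disagree, inspect that file (the path is flanneld's default; the values shown mirror node3's interfaces above, and IPMASQ depends on your flanneld flags):

[root@k8s-node3 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.25.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false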

Finally, test whether containers on different nodes can reach each other across the cluster network.

[root@k8s-master1 ~]# kubectl exec -it nginx-deployment-7b8677db56-wkbzb /bin/sh
# ping 172.17.79.2
PING 172.17.79.2 (172.17.79.2): 56 data bytes
64 bytes from 172.17.79.2: icmp_seq=0 ttl=62 time=0.703 ms
64 bytes from 172.17.79.2: icmp_seq=1 ttl=62 time=0.459 ms
^C--- 172.17.79.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.459/0.581/0.703/0.122 ms
# ping 172.17.40.3
PING 172.17.40.3 (172.17.40.3): 56 data bytes
64 bytes from 172.17.40.3: icmp_seq=0 ttl=62 time=0.543 ms
64 bytes from 172.17.40.3: icmp_seq=1 ttl=62 time=0.404 ms
^C--- 172.17.40.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.404/0.474/0.543/0.070 ms
# ping 172.17.6.3
PING 172.17.6.3 (172.17.6.3): 56 data bytes
64 bytes from 172.17.6.3: icmp_seq=0 ttl=62 time=0.385 ms
64 bytes from 172.17.6.3: icmp_seq=1 ttl=62 time=0.323 ms
^C--- 172.17.6.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.323/0.354/0.385/0.031 ms

All pings succeed, so containers on every node can reach one another: the expansion is complete.
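To push some test pods onto the new nodes themselves, scaling an existing deployment and checking where the replicas land is a quick option (the nginx-deployment name is taken from the ping test above; the replica count is arbitrary):

[root@k8s-master1 ~]# kubectl scale deployment nginx-deployment --replicas=6
[root@k8s-master1 ~]# kubectl get pods -o wide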
