
How to upgrade Kubernetes 1.15.0 quickly


This article shows how to upgrade Kubernetes to 1.15.0 quickly. Many readers may not be familiar with the process, so the steps below are shared for reference; I hope you find them useful.

1. Upgrade the kubeadm/kubectl/kubelet versions

sudo apt install kubeadm=1.15.0-00 kubectl=1.15.0-00 kubelet=1.15.0-00
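Before installing, it can help to confirm that the 1.15.0-00 packages are actually available and to pin them afterwards. A minimal sketch, assuming the standard Kubernetes apt repository is already configured (the apt-mark step is an extra precaution, not part of the original write-up):

apt-cache madison kubeadm | head -n 5      # list the kubeadm versions the repository offers
sudo apt-get update
sudo apt install -y kubeadm=1.15.0-00 kubectl=1.15.0-00 kubelet=1.15.0-00
sudo apt-mark hold kubeadm kubectl kubelet # keep apt from upgrading them unintentionally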


View the container images required by this version:

kubeadm config images list

The output is as follows:

~# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.15.0
k8s.gcr.io/kube-controller-manager:v1.15.0
k8s.gcr.io/kube-scheduler:v1.15.0
k8s.gcr.io/kube-proxy:v1.15.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

2. Pull the container images

The original Kubernetes images are hosted on gcr.io and cannot be downloaded directly from China. I mirrored them to a container registry in Aliyun's Hangzhou region, from which pulling is fairly fast.

Echo "" echo "=" echo "Pull Kubernetes v1.15.0 Images from aliyuncs.com." echo "= =" echo "" MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings## pull image docker pull ${MY_REGISTRY} / k8s-gcr-io-kube-apiserver:v1.15.0docker pull ${MY_REGISTRY} / k8s-gcr-io-kube-controller-manager:v1.15.0docker pull ${MY_REGISTRY} / k8sMugcrashi Scheduler:v1.15.0docker pull ${MY_REGISTRY} / k8s-gcr-io-kube-proxy:v1.15.0docker pull ${MY_REGISTRY} / k8s-gcr-io-etcd:3.3.10docker pull ${MY_REGISTRY} / k8s-gcr-io-pause:3.1docker pull ${MY_REGISTRY} / k8s-gcr-io-coredns:1.3.1## add Tagdocker tag ${MY_REGISTRY} / k8s-gcr-io-kube-apiserver:v1.15.0 K8s.gcr.io/kube-apiserver:v1.15.0docker tag ${MY_REGISTRY} / k8s-gcr-io-kube-scheduler:v1.15.0 k8s.gcr.io/kube-scheduler:v1.15.0docker tag ${MY_REGISTRY} / k8s-gcr-io-kube-controller-manager:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0docker tag ${MY_REGISTRY} / k8s-gcr-io-kube-proxy:v1.15.0 k8s. Gcr.io/kube-proxy:v1.15.0docker tag ${MY_REGISTRY} / k8s-gcr-io-etcd:3.3.10 k8s.gcr.io/etcd:3.3.10docker tag ${MY_REGISTRY} / k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1docker tag ${MY_REGISTRY} / k8s-gcr-io-coredns:1.3.1 k8s.gcr.io/coredns:1.3.1echo "echo" = = "echo" Pull Kubernetes v1.15.0 Images FINISHED. "echo" into registry.cn-hangzhou.aliyuncs.com/openthings "echo" by openthings@ https://my.oschina.net/u/2306127."echo "=" echo ""

Save as a shell script and execute it.

Or, download the script: https://github.com/openthings/kubernetes-tools/blob/master/kubeadm/2-images/
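As a shorter alternative to the script (not part of the original article), kubeadm itself can pull from a mirror registry. A sketch, assuming the commonly used registry.aliyuncs.com/google_containers mirror publishes the images under the standard names:

kubeadm config images pull --kubernetes-version v1.15.0 --image-repository registry.aliyuncs.com/google_containers

Note that with this approach the images are not retagged to k8s.gcr.io, so kubeadm init/upgrade would need the same --image-repository flag.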

3. Upgrade the Kubernetes cluster

New installation:

# Specify the IP address and version 1.15.0:
sudo kubeadm init --kubernetes-version=v1.15.0 --apiserver-advertise-address=10.1.1.199 --pod-network-cidr=10.244.0.0/16
# Note: CoreDNS is built in; the parameter --feature-gates CoreDNS=true is no longer required.

First take a look at the versions of the components that need to be upgraded.

Run kubeadm upgrade plan; it outputs the version upgrade information as follows:

COMPONENT            CURRENT    AVAILABLE
API Server           v1.14.1    v1.15.0
Controller Manager   v1.14.1    v1.15.0
Scheduler            v1.14.1    v1.15.0
Kube Proxy           v1.14.1    v1.15.0
CoreDNS              1.3.1      1.3.1
Etcd                 3.3.10     3.3.10

Make sure the container images listed above have already been downloaded (if they were not pulled in advance, the upgrade may hang waiting on the network), and then perform the upgrade:

kubeadm upgrade -y apply v1.15.0

If you see the following message, the upgrade succeeded:

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.0". Enjoy!

Then, configure the current user environment:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can use kubectl version to view the status and kubectl cluster-info to view the service address.
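For example (the --short flag is just a convenience, not from the original text):

kubectl version --short    # client and server should now both report v1.15.0
kubectl cluster-info       # addresses of the Kubernetes master and cluster services
kubectl get nodes -o wide  # node status and kubelet versions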

4. Upgrade the worker nodes

Each worker node needs to pull the images for the corresponding version and install the corresponding version of kubelet.
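A minimal sketch for one worker node, assuming the same apt repository as on the master; the drain/uncordon steps are optional extras, not from the original text, and <node-name> is a placeholder:

kubectl drain <node-name> --ignore-daemonsets            # on the master, optional: move workloads off the node
sudo apt install -y kubeadm=1.15.0-00 kubelet=1.15.0-00  # on the worker node
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon <node-name>                             # on the master, optional: allow scheduling again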

Check the version:

~$ kubectl version

View Pod information:

kubectl get pod --all-namespaces

Done.

5. Upgrade an HA cluster

If you upgrade from a version prior to 1.13.x, the upgrade fails because of API changes (the apiserver cannot be started after kubelet is upgraded to 1.14), which leaves the new kubeadm talking to the old apiserver. After pulling the images down, manually switch the image versions (the files under /etc/kubernetes/manifests on all nodes need to be modified).

For each node, perform the following steps:

cd /etc/kubernetes/manifests/

Edit all of the *.yaml files and change the image versions to 1.15.0.
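A sketch of that edit, assuming the manifests still reference k8s.gcr.io images tagged v1.14.x (back up the files first; the backup path and sed pattern are illustrative):

cd /etc/kubernetes/manifests/
sudo cp -a /etc/kubernetes/manifests /root/manifests-backup   # keep a copy before editing
sudo sed -i 's/v1\.14\.[0-9]\+/v1.15.0/g' kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
sudo grep 'image:' *.yaml                                     # verify the new tags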

After upgrading to 1.14.0, the following problem occurred (it still exists in 1.14.1):

Worker nodes failed to join the cluster. See kubeadm issue #76013: https://github.com/kubernetes/kubernetes/issues/76013

According to tests by some community members, a freshly installed 1.14 cluster operates normally.

My cluster was upgraded from 1.13.4, and after testing 1.14.1 the problem still exists.

For kube-proxy, the image version of its DaemonSet needs to be changed to 1.14.1 using a management tool.

For CoreDNS, the image version of its Deployment needs to be changed to 1.3.1 using a management tool.
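Both edits can also be made from the command line. A sketch, assuming the standard resource and container names in kube-system:

kubectl -n kube-system set image daemonset/kube-proxy kube-proxy=k8s.gcr.io/kube-proxy:v1.14.1
kubectl -n kube-system set image deployment/coredns coredns=k8s.gcr.io/coredns:1.3.1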

You can refer to "Forcibly deleting stuck Pods in Kubernetes".

Re-running the flannel installation does not fix it.

However, after the modification, the cluster would not come back up after a restart; inspecting it showed the Pod status as Crash.

Force-delete the running CoreDNS Pod instances; Kubernetes automatically starts new ones.
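A sketch of such a force delete; list the Pods first and substitute the real name for the placeholder:

kubectl -n kube-system get pods | grep coredns
kubectl -n kube-system delete pod <coredns-pod-name> --grace-period=0 --force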

The previously installed JupyterHub also would not come up; inspecting it showed the hub Pod status as Crash.

A temporary write file for jupyterhub.sqlite exists under the hub-db-dir directory and keeps it locked; it is not a GlusterFS write-permission issue.

Run gluster volume heal vol01 enable to synchronize its data.

Restart the volume or glusterd service.
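A sketch of those GlusterFS steps, assuming the volume is named vol01 as above:

gluster volume heal vol01 enable   # turn on self-heal for the volume
gluster volume heal vol01 info     # check which entries still need healing
sudo systemctl restart glusterd    # or stop/start the volume: gluster volume stop vol01 && gluster volume start vol01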

"alternatively, delete the jupyterhub.sqllite file in the hub-db-dir directory under all gluster storage nodes, and then delete the hub pod to automatically rebuild the file."

Generally, the cluster recovers after the few steps above.

Reference: GlusterFS: access permission settings

The hub log shows a SQLite access error after the file was removed from the host storage directory; access to the hub service then failed.

After the hub Pod was deleted, the proxy-public Service could not connect.

Force-delete the running Pod instances of the JupyterHub hub and proxy.

Force-delete the running CoreDNS Pod instances; Kubernetes automatically starts new ones, and the services then recover.

Sometimes the problem is GlusterFS permission settings, which can be inspected and adjusted with getfacl/setfacl.
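A sketch of that check, with an illustrative mount path and uid (the JupyterHub hub container commonly runs as uid 1000, but verify against your own deployment):

getfacl /mnt/gluster/hub-db-dir                         # inspect the current ACLs
sudo setfacl -R -m u:1000:rwx /mnt/gluster/hub-db-dir   # grant the hub user write access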

Further inspection suggests that it may be caused by GlusterFS volume writes being out of sync.

Other:

The whole cluster could not be accessed: kubectl get node failed, and kubectl version reported that the apiserver could not be reached.

Looking at the routing table of one of the nodes, the mysterious podsxx 255.255.255.255 routing record appeared again, and route del could not delete it.

After running sudo netplan apply, the routing record disappeared and the node became reachable again.
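A sketch of that check and recovery (the stray record is the one described above):

route -n            # the podsxx 255.255.255.255 record shows up here
sudo netplan apply  # reapply the netplan configuration; the record disappears
kubectl get node    # the node should be reachable again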

That is all of the article "How to upgrade Kubernetes 1.15.0 quickly". Thank you for reading; I hope it helps.
