
How to upgrade k8s cluster V1.15.3 to V1.16.0


This article shows how to upgrade a kubeadm-managed k8s cluster from v1.15.3 to v1.16.0. Each step lists the commands together with their output, so you can follow along on your own cluster and compare results as you go.

1. Check the current version number and the latest version of k8s

[root@k8s01 ~]# kubectl get nodes    # view the current cluster nodes and their version numbers

NAME    STATUS   ROLES    AGE   VERSION

k8s01   Ready    master   41d   v1.15.3

k8s02   Ready    <none>   41d   v1.15.3

k8s03   Ready    <none>   41d   v1.15.3

[root@k8s01 ~]# kubectl version    # check the client and server version numbers

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s01 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes    # list the kubeadm versions available in the repository
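The --disableexcludes=kubernetes flag only matters when the repo definition excludes the kube* packages to guard against accidental upgrades. As a point of reference, a minimal sketch of such a repo file is shown below; the mirror URL and gpg settings are illustrative assumptions rather than something taken from the original environment, so adapt them to whatever repository your cluster already uses.

[root@k8s01 ~]# cat /etc/yum.repos.d/kubernetes.repo
# illustrative example; adjust baseurl and gpg settings for your own mirror
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
exclude=kubelet kubeadm kubectl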

2. Upgrade the kubeadm version and check whether the cluster meets the upgrade requirements

[root@k8s01 ~]# yum install -y kubeadm-1.16.0-0 --disableexcludes=kubernetes    # upgrade the kubeadm package

[root@k8s01 ~]# kubeadm version    # confirm the upgraded kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s01 ~]# kubeadm upgrade plan    # check whether the cluster can be upgraded and which version each component will be moved to

[upgrade/config] Making sure the configuration is correct:

[upgrade/config] Reading configuration from the cluster...

[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

[preflight] Running pre-flight checks.

[upgrade] Making sure the cluster is healthy:

[upgrade] Fetching available versions to upgrade to

[upgrade/versions] Cluster version: v1.15.3

[upgrade/versions] kubeadm version: v1.16.0

W1019 13:11:18.402833   66426 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

W1019 13:11:18.402860 66426 version.go:102] falling back to the local client version: v1.16.0

[upgrade/versions] Latest stable version: v1.16.0

W1019 13:11:28.427246   66426 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.15.txt": Get https://dl.k8s.io/release/stable-1.15.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

W1019 13:11:28.427289 66426 version.go:102] falling back to the local client version: v1.16.0

[upgrade/versions] Latest version in the v1.15 series: v1.16.0

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':

COMPONENT CURRENT AVAILABLE

kubelet     3 x v1.15.3   v1.16.0

Upgrade to the latest version in the v1.15 series:

COMPONENT CURRENT AVAILABLE

API Server v1.15.3 v1.16.0

Controller Manager v1.15.3 v1.16.0

Scheduler v1.15.3 v1.16.0

Kube Proxy v1.15.3 v1.16.0

CoreDNS 1.3.1 1.6.2

Etcd 3.3.10 3.3.15-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.16.0


[root@k8s01 ~] #

3. Download the upgrade images in advance (pre-pulling the Google base images speeds up the upgrade)

[root@k8s01 ~]# cat 16.sh

#!/bin/bash
# download the k8s v1.16.0 images
# get the image list with 'kubeadm config images list --kubernetes-version=v1.16.0'
# gcr.azk8s.cn/google-containers == k8s.gcr.io
images=(
kube-apiserver:v1.16.0
kube-controller-manager:v1.16.0
kube-scheduler:v1.16.0
kube-proxy:v1.16.0
pause:3.1
etcd:3.3.15-0
coredns:1.6.2
)
for imageName in ${images[@]}; do
  docker pull gcr.azk8s.cn/google-containers/$imageName
  docker tag gcr.azk8s.cn/google-containers/$imageName k8s.gcr.io/$imageName
  docker rmi gcr.azk8s.cn/google-containers/$imageName
done

[root@k8s01 ~] # sh 16.sh

v1.16.0: Pulling from google-containers/kube-apiserver

39fafc05754f: Already exists

f7d981e9e2f5: Pull complete

Digest: sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd

Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-apiserver:v1.16.0

gcr.azk8s.cn/google-containers/kube-apiserver:v1.16.0

Untagged: gcr.azk8s.cn/google-containers/kube-apiserver:v1.16.0

Untagged: gcr.azk8s.cn/google-containers/kube-apiserver@sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd

v1.16.0: Pulling from google-containers/kube-controller-manager

39fafc05754f: Already exists

9fc21167a2c9: Pull complete

Digest: sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e

Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-controller-manager:v1.16.0

gcr.azk8s.cn/google-containers/kube-controller-manager:v1.16.0

Untagged: gcr.azk8s.cn/google-containers/kube-controller-manager:v1.16.0

Untagged: gcr.azk8s.cn/google-containers/kube-controller-manager@sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e

v1.16.0: Pulling from google-containers/kube-scheduler

39fafc05754f: Already exists

c589747bc37c: Pull complete

Digest: sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0

Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-scheduler:v1.16.0

gcr.azk8s.cn/google-containers/kube-scheduler:v1.16.0

Untagged: gcr.azk8s.cn/google-containers/kube-scheduler:v1.16.0

Untagged: gcr.azk8s.cn/google-containers/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0

v1.16.0: Pulling from google-containers/kube-proxy

39fafc05754f: Already exists

db3f71d0eb90: Already exists

1531d95908fb: Pull complete

Digest: sha256:e7f0f8e320cfeeaafdc9c0cb8e23f51e542fa1d955ae39c8131a0531ba72c794

Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-proxy:v1.16.0

gcr.azk8s.cn/google-containers/kube-proxy:v1.16.0

Untagged: gcr.azk8s.cn/google-containers/kube-proxy:v1.16.0

Untagged: gcr.azk8s.cn/google-containers/kube-proxy@sha256:e7f0f8e320cfeeaafdc9c0cb8e23f51e542fa1d955ae39c8131a0531ba72c794

3.1: Pulling from google-containers/pause

Digest: sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea

Status: Downloaded newer image for gcr.azk8s.cn/google-containers/pause:3.1

gcr.azk8s.cn/google-containers/pause:3.1

Untagged: gcr.azk8s.cn/google-containers/pause:3.1

Untagged: gcr.azk8s.cn/google-containers/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea

3.3.15-0: Pulling from google-containers/etcd

39fafc05754f: Already exists

aee6f172d490: Pull complete

e6aae814a194: Pull complete

Digest: sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa

Status: Downloaded newer image for gcr.azk8s.cn/google-containers/etcd:3.3.15-0

gcr.azk8s.cn/google-containers/etcd:3.3.15-0

Untagged: gcr.azk8s.cn/google-containers/etcd:3.3.15-0

Untagged: gcr.azk8s.cn/google-containers/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa

1.6.2: Pulling from google-containers/coredns

c6568d217a00: Pull complete

3970bc7cbb16: Pull complete

Digest: sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5

Status: Downloaded newer image for gcr.azk8s.cn/google-containers/coredns:1.6.2

gcr.azk8s.cn/google-containers/coredns:1.6.2

Untagged: gcr.azk8s.cn/google-containers/coredns:1.6.2

Untagged: gcr.azk8s.cn/google-containers/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5

[root@k8s01 ~] #
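Before starting the upgrade it is worth confirming that the re-tagged images are present locally under the k8s.gcr.io names kubeadm will look for. A quick check along these lines (this verification command is a suggestion, not part of the original output):

[root@k8s01 ~]# docker images | grep k8s.gcr.io    # expect kube-apiserver/controller-manager/scheduler/proxy at v1.16.0, etcd 3.3.15-0, coredns 1.6.2, pause 3.1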

4. Upgrade k8s cluster (master node)

[root@k8s01 ~]# kubeadm upgrade apply v1.16.0 -v 5

I1019 13:37:55.767778 87227 apply.go:118] [upgrade/apply] verifying health of cluster

I1019 13:37:55.767819 87227 apply.go:119] [upgrade/apply] retrieving configuration from cluster

[upgrade/config] Making sure the configuration is correct:

[upgrade/config] Reading configuration from the cluster...

[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

I1019 13:37:55.803144 87227 common.go:122] running preflight checks

[preflight] Running pre-flight checks.

I1019 13:37:55.803169 87227 preflight.go:78] validating if there are any unsupported CoreDNS plugins in the Corefile

I1019 13:37:55.820014 87227 preflight.go:103] validating if migration can be done for the current CoreDNS release.

[upgrade] Making sure the cluster is healthy:

I1019 13:37:55.837178 87227 apply.go:131] [upgrade/apply] validating requested and actual version

I1019 13:37:55.837241 87227 apply.go:147] [upgrade/version] enforcing version skew policies

[upgrade/version] You have chosen to change the cluster version to "v1.16.0"

[upgrade/versions] Cluster version: v1.15.3

[upgrade/versions] kubeadm version: v1.16.0

[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y

I1019 13:37:58.228724 87227 apply.go:163] [upgrade/apply] creating prepuller

[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]

[upgrade/prepull] Prepulling image for component etcd.

[upgrade/prepull] Prepulling image for component kube-apiserver.

[upgrade/prepull] Prepulling image for component kube-controller-manager.

[upgrade/prepull] Prepulling image for component kube-scheduler.

[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver

[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler

[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd

[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager

[upgrade/prepull] Prepulled image for component etcd.

[upgrade/prepull] Prepulled image for component kube-apiserver.

[upgrade/prepull] Prepulled image for component kube-scheduler.

[upgrade/prepull] Prepulled image for component kube-controller-manager.

[upgrade/prepull] Successfully prepulled the images for all the control plane components

I1019 13:38:00.888210 87227 apply.go:174] [upgrade/apply] performing upgrade

[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.0"...

Static pod: kube-apiserver-k8s01 hash: 5bfb05e7cb17fe8298d61706cb2263b6

Static pod: kube-controller-manager-k8s01 hash: 9c5db0eef4ba8d433ced5874b5688886

Static pod: kube-scheduler-k8s01 hash: 7d5d3c0a6786e517a8973fa06754cb75

I1019 13:38:00.974269   87227 etcd.go:107] etcd endpoints read from pods: https://192.168.54.128:2379

I1019 13:38:01.033816   87227 etcd.go:156] etcd endpoints read from etcd: https://192.168.54.128:2379

I1019 13:38:01.033856   87227 etcd.go:125] update etcd endpoints: https://192.168.54.128:2379

[upgrade/etcd] Upgrading to TLS for etcd

Static pod: etcd-k8s01 hash: af0f40c2a1ce2695115431265406ca0d

I1019 13:38:04.475950   87227 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/etcd.yaml"

[upgrade/staticpods] Preparing for "etcd" upgrade

[upgrade/staticpods] Renewing etcd-server certificate

[upgrade/staticpods] Renewing etcd-peer certificate

[upgrade/staticpods] Renewing etcd-healthcheck-client certificate

[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/etcd.yaml"

[upgrade/staticpods] Waiting for the kubelet to restart the component

[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)

Static pod: etcd-k8s01 hash: af0f40c2a1ce2695115431265406ca0d

Static pod: etcd-k8s01 hash: af0f40c2a1ce2695115431265406ca0d

Static pod: etcd-k8s01 hash: 8b854fdc3768d8f9aac3dfb09c123400

[apiclient] Found 1 Pods for label selector component=etcd

[apiclient] Found 0 Pods for label selector component=etcd

[apiclient] Found 1 Pods for label selector component=etcd

[apiclient] Found 0 Pods for label selector component=etcd

[apiclient] Found 1 Pods for label selector component=etcd

[upgrade/staticpods] Component "etcd" upgraded successfully!

I1019 13:38:35.604122   87227 etcd.go:107] etcd endpoints read from pods: https://192.168.54.128:2379

I1019 13:38:35.618217   87227 etcd.go:156] etcd endpoints read from etcd: https://192.168.54.128:2379

I1019 13:38:35.618242   87227 etcd.go:125] update etcd endpoints: https://192.168.54.128:2379

[upgrade/etcd] Waiting for etcd to become available

I1019 13:38:35.618254   87227 etcd.go:372] [etcd] attempting to see if all cluster endpoints ([https://192.168.54.128:2379]) are available (1/10)

[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650"

I1019 13:38:35.631290 87227 manifests.go:42] [control-plane] creating static Pod files

I1019 13:38:35.631334 87227 manifests.go:91] [control-plane] getting StaticPodSpecs

I1019 13:38:35.639202   87227 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/kube-apiserver.yaml"

I1019 13:38:35.639809   87227 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/kube-controller-manager.yaml"

I1019 13:38:35.640103   87227 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/kube-scheduler.yaml"

[upgrade/staticpods] Preparing for "kube-apiserver" upgrade

[upgrade/staticpods] Renewing apiserver certificate

[upgrade/staticpods] Renewing apiserver-kubelet-client certificate

[upgrade/staticpods] Renewing front-proxy-client certificate

[upgrade/staticpods] Renewing apiserver-etcd-client certificate

[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/kube-apiserver.yaml"

[upgrade/staticpods] Waiting for the kubelet to restart the component

[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)

Static pod: kube-apiserver-k8s01 hash: 5bfb05e7cb17fe8298d61706cb2263b6

Static pod: kube-apiserver-k8s01 hash: 5bfb05e7cb17fe8298d61706cb2263b6

Static pod: kube-apiserver-k8s01 hash: c7a6a6cd079e4034a3258c4d94365d5a

[apiclient] Found 1 Pods for label selector component=kube-apiserver

[apiclient] Found 0 Pods for label selector component=kube-apiserver

[apiclient] Found 1 Pods for label selector component=kube-apiserver

[apiclient] Found 0 Pods for label selector component=kube-apiserver

[apiclient] Found 1 Pods for label selector component=kube-apiserver

[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!

[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade

[upgrade/staticpods] Renewing controller-manager.conf certificate

[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/kube-controller-manager.yaml"

[upgrade/staticpods] Waiting for the kubelet to restart the component

[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)

Static pod: kube-controller-manager-k8s01 hash: 9c5db0eef4ba8d433ced5874b5688886

Static pod: kube-controller-manager-k8s01 hash: a174e7fbc474c3449c0ee50ba7220e8e

[apiclient] Found 1 Pods for label selector component=kube-controller-manager

[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!

[upgrade/staticpods] Preparing for "kube-scheduler" upgrade

[upgrade/staticpods] Renewing scheduler.conf certificate

[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/kube-scheduler.yaml"

[upgrade/staticpods] Waiting for the kubelet to restart the component

[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)

Static pod: kube-scheduler-k8s01 hash: 7d5d3c0a6786e517a8973fa06754cb75

Static pod: kube-scheduler-k8s01 hash: b8e7c07b524b78e0b03577d5f61f79ef

[apiclient] Found 1 Pods for label selector component=kube-scheduler

[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!

I1019 13:38:54.548181 87227 apply.go:180] [upgrade/postupgrade] upgrading RBAC rules and addons

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster

[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

I1019 13:38:54.670273   87227 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s01" as an annotation

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

I1019 13:38:55.222748 87227 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace

I1019 13:38:55.376508   87227 request.go:538] Throttling request took 145.385879ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/kubeadm:bootstrap-signer-clusterinfo

I1019 13:38:55.576272   87227 request.go:538] Throttling request took 197.037818ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings

I1019 13:38:55.776513   87227 request.go:538] Throttling request took 196.268563ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/kubeadm:bootstrap-signer-clusterinfo

I1019 13:38:55.976281   87227 request.go:538] Throttling request took 181.099838ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterroles

I1019 13:38:56.176460   87227 request.go:538] Throttling request took 184.088286ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:coredns

I1019 13:38:56.376299   87227 request.go:538] Throttling request took 196.69921ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings

I1019 13:38:56.576335   87227 request.go:538] Throttling request took 187.238443ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:coredns

[addons] Applied essential addon: CoreDNS

I1019 13:38:56.776530   87227 request.go:538] Throttling request took 122.128829ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings

I1019 13:38:56.976951   87227 request.go:538] Throttling request took 194.821608ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubeadm:node-proxier

I1019 13:38:57.176476   87227 request.go:538] Throttling request took 192.024614ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles

I1019 13:38:57.376234   87227 request.go:538] Throttling request took 180.872083ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kube-proxy

I1019 13:38:57.576309   87227 request.go:538] Throttling request took 197.323877ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings

I1019 13:38:57.776253   87227 request.go:538] Throttling request took 190.156387ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kube-proxy

[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

[root@k8s01 ~] #
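With the apply finished, the control plane on this master is running v1.16.0. One simple way to confirm the static pods actually picked up the new images (this verification step is my own suggestion, not from the original article):

[root@k8s01 ~]# kubectl -n kube-system describe pod kube-apiserver-k8s01 | grep Image:    # should show k8s.gcr.io/kube-apiserver:v1.16.0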

5. If there are multiple master nodes, upgrade the other master nodes (skip this step for a single-master cluster)

[root@k8s01 ~]# kubeadm upgrade plan    # re-check the upgrade plan

[upgrade/config] Making sure the configuration is correct:

[upgrade/config] Reading configuration from the cluster...

[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

[preflight] Running pre-flight checks.

[upgrade] Making sure the cluster is healthy:

[upgrade] Fetching available versions to upgrade to

[upgrade/versions] Cluster version: v1.16.0

[upgrade/versions] kubeadm version: v1.16.0

W1019 13:46:26.923622   92337 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

W1019 13:46:26.923660 92337 version.go:102] falling back to the local client version: v1.16.0

[upgrade/versions] Latest stable version: v1.16.0

W1019 13:46:36.952719   92337 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.16.txt": Get https://dl.k8s.io/release/stable-1.16.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

W1019 13:46:36.952743 92337 version.go:102] falling back to the local client version: v1.16.0

[upgrade/versions] Latest version in the v1.16 series: v1.16.0

Awesome, you're up-to-date! Enjoy!

[root@k8s01 ~]# kubeadm upgrade node    # upgrade the control plane on the other master nodes

[upgrade] Reading configuration from the cluster...

[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.16.0"...

Static pod: kube-apiserver-k8s01 hash: c7a6a6cd079e4034a3258c4d94365d5a

Static pod: kube-controller-manager-k8s01 hash: a174e7fbc474c3449c0ee50ba7220e8e

Static pod: kube-scheduler-k8s01 hash: b8e7c07b524b78e0b03577d5f61f79ef

[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests902864317"

[upgrade/staticpods] Preparing for "kube-apiserver" upgrade

[upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade

[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade

[upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade

[upgrade/staticpods] Preparing for "kube-scheduler" upgrade

[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade

[upgrade] The control plane instance for this node was successfully updated!

[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[upgrade] The configuration for this node was successfully updated!

[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

[root@k8s01 ~] #

6. Upgrade kubelet and kubectl on all master nodes (with a single master, run this once on that node)

[root@k8s01 ~]# yum install -y kubelet-1.16.0 kubectl-1.16.0 --disableexcludes=kubernetes

[root@k8s01 ~] # systemctl daemon-reload

[root@k8s01 ~] # systemctl restart kubelet
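With a single master, that completes the control-plane work. If the cluster had additional master nodes, the same package and node upgrade from steps 5 and 6 would be repeated on each of them, roughly as follows (master02 is a placeholder hostname, not a node from this cluster):

[root@master02 ~]# yum install -y kubeadm-1.16.0-0 --disableexcludes=kubernetes
[root@master02 ~]# kubeadm upgrade node
[root@master02 ~]# yum install -y kubelet-1.16.0 kubectl-1.16.0 --disableexcludes=kubernetes
[root@master02 ~]# systemctl daemon-reload
[root@master02 ~]# systemctl restart kubelet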

7. Upgrade the kubeadm package on all worker nodes (run on every worker node)

[root@k8s02 ~]# yum install -y kubeadm-1.16.0 --disableexcludes=kubernetes
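A quick sanity check that the new kubeadm landed on the worker (optional, not part of the original steps):

[root@k8s02 ~]# kubeadm version -o short    # should print v1.16.0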

8. Mark the worker node as unschedulable and drain it before upgrading (run on the master)

[root@k8s01 ~]# kubectl drain k8s02 --ignore-daemonsets    # drain the k8s02 node first; when upgrading several nodes, repeat this for each one in turn, and if the command reports an error, add the --delete-local-data flag as the message suggests

node/k8s02 already cordoned

WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-888x6, kube-system/kube-proxy-xm9fn

evicting pod "tiller-deploy-8557598fbc-x96gq"

evicting pod "myapp-5f57d6857b-xgj8l"

evicting pod "coredns-5644d7b6d9-lmpd5"

evicting pod "metrics-server-6b445cb696-zp94w"

evicting pod "myapp-5f57d6857b-2g8ss"

pod/tiller-deploy-8557598fbc-x96gq evicted

pod/metrics-server-6b445cb696-zp94w evicted

pod/myapp-5f57d6857b-2g8ss evicted

pod/coredns-5644d7b6d9-lmpd5 evicted

pod/myapp-5f57d6857b-xgj8l evicted

node/k8s02 evicted

[root@k8s01 ~] # kubectl get nodes

NAME    STATUS                     ROLES    AGE   VERSION

k8s01   Ready                      master   41d   v1.16.0

k8s02   Ready,SchedulingDisabled   <none>   41d   v1.15.3

k8s03   Ready                      <none>   41d   v1.15.3

[root@k8s01 ~] #
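If the drain aborts because some pods use local emptyDir data, the variant hinted at in step 8 adds the flag kubectl itself suggests; note that it discards that local data:

[root@k8s01 ~]# kubectl drain k8s02 --ignore-daemonsets --delete-local-data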

9. Upgrade kubelet and kubectl on the k8s02 node

[root@k8s02 ~] # kubeadm upgrade node

[upgrade] Reading configuration from the cluster...

[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

[upgrade] Skipping phase. Not a control plane node

[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[upgrade] The configuration for this node was successfully updated!

[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

[root@k8s02 ~]# yum install -y kubelet-1.16.0 kubectl-1.16.0 --disableexcludes=kubernetes

[root@k8s02 ~] # systemctl daemon-reload

[root@k8s02 ~] # systemctl restart kubelet

10. Restore scheduling for the k8s02 node (run on the master node)

[root@k8s01 ~] # kubectl uncordon k8s02

node/k8s02 uncordoned

[root@k8s01 ~] # kubectl get nodes

NAME    STATUS   ROLES    AGE   VERSION

k8s01   Ready    master   41d   v1.16.0

k8s02   Ready    <none>   41d   v1.16.0

k8s03   Ready    <none>   41d   v1.15.3

[root@k8s01 ~] #

11. Repeat steps 8 to 10 to upgrade the k8s03 node (a condensed command sequence follows below)
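For reference, the full round for k8s03 condenses to the commands below (the same steps 8 to 10, gathered in one place; run each command on the host shown in its prompt):

[root@k8s01 ~]# kubectl drain k8s03 --ignore-daemonsets    # add --delete-local-data if prompted
[root@k8s03 ~]# yum install -y kubeadm-1.16.0 --disableexcludes=kubernetes    # skip if already done in step 7
[root@k8s03 ~]# kubeadm upgrade node
[root@k8s03 ~]# yum install -y kubelet-1.16.0 kubectl-1.16.0 --disableexcludes=kubernetes
[root@k8s03 ~]# systemctl daemon-reload
[root@k8s03 ~]# systemctl restart kubelet
[root@k8s01 ~]# kubectl uncordon k8s03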

12. View the status of the entire k8s cluster

[root@k8s01 ~]# kubectl get pods -n kube-system

NAME READY STATUS RESTARTS AGE

coredns-5644d7b6d9-8wvgt 1/1 Running 0 11m

coredns-5644d7b6d9-pzr7g 1/1 Running 0 11m

coredns-5c98db65d4-rtktb 1/1 Running 0 11m

etcd-k8s01 1/1 Running 0 34m

kube-apiserver-k8s01 1/1 Running 0 34m

kube-controller-manager-k8s01 1/1 Running 0 34m

kube-flannel-ds-amd64-888x6 1/1 Running 5 41d

kube-flannel-ds-amd64-d648v 1/1 Running 15 41d

kube-flannel-ds-amd64-rc9bc 1/1 Running 2 46h

kube-proxy-d4rd5 1/1 Running 1 46h

kube-proxy-wtk2j 1/1 Running 11 41d

kube-proxy-xm9fn 1/1 Running 0 45m

kube-scheduler-k8s01 1/1 Running 0 34m

metrics-server-6b445cb696-65r5k 1/1 Running 0 11m

tiller-deploy-8557598fbc-6jfp7 1/1 Running 0 11m

[root@k8s01 ~] # kubectl get nodes

NAME    STATUS   ROLES    AGE   VERSION

k8s01   Ready    master   41d   v1.16.0

k8s02   Ready    <none>   41d   v1.16.0

k8s03   Ready    <none>   41d   v1.16.0

[root@k8s01 ~] #
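As a final check, the API server itself can be asked for its version (the same kubectl version command as in step 1, shown here in short form; the short-output flag is available in this kubectl release):

[root@k8s01 ~]# kubectl version --short    # both Client Version and Server Version should report v1.16.0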

Error summary:

Oct 19 14:46:20 k8s01 kubelet[653]: E1019 14:46:20.701679     653 pod_workers.go:191] Error syncing pod e641b551-7f22-40fa-b847-658f6c7696fa ("tiller-deploy-8557598fbc-6jfp7_kube-system(e641b551-7f22-40fa-b847-658f6c7696fa)"), skipping: network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Oct 19 14:46:20 k8s01 kubelet[653]: E1019 14:46:20.702091     653 pod_workers.go:191] Error syncing pod bd45bbe0-8529-4ee4-9fcf-90528178dc0d ("coredns-5c98db65d4-rtktb_kube-system(bd45bbe0-8529-4ee4-9fcf-90528178dc0d)"), skipping: network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Oct 19 14:46:20 k8s01 kubelet[653]: E1019 14:46:20.702396     653 pod_workers.go:191] Error syncing pod 87d24c8c-bba8-420b-8901-9e2b8bc339ac ("coredns-5644d7b6d9-8wvgt_kube-system(87d24c8c-bba8-420b-8901-9e2b8bc339ac)"), skipping: network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Solution: if the CNI plugin reports errors like these, reapply the flannel manifest:

[root@k8s01 ~] # wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

[root@k8s01 ~]# kubectl apply -f kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged configured

clusterrole.rbac.authorization.k8s.io/flannel unchanged

clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged

serviceaccount/flannel unchanged

configmap/kube-flannel-cfg configured

daemonset.apps/kube-flannel-ds-amd64 configured

daemonset.apps/kube-flannel-ds-arm64 configured

daemonset.apps/kube-flannel-ds-arm configured

daemonset.apps/kube-flannel-ds-ppc64le configured

daemonset.apps/kube-flannel-ds-s390x configured

[root@k8s01 ~] #
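After the manifest is re-applied, the flannel pods should come back up and the NetworkReady errors should stop. A quick way to watch them (the label selector assumes the labels set by the upstream kube-flannel.yml):

[root@k8s01 ~]# kubectl -n kube-system get pods -l app=flannel -o wide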

That is how to upgrade a k8s cluster from v1.15.3 to v1.16.0. Hopefully this detailed walkthrough has given you something you can put to use on your own clusters.
