
CentOS 8 Binary High-Availability Installation of k8s 1.16.x


Basic instructions

This article demonstrates a highly available binary installation of k8s 1.16.x on CentOS 8, which is not much different from other versions. Compared with CentOS 7, CentOS 8 is more convenient to operate; for example, shutting down some services takes effect permanently without modifying configuration files. The default kernel of CentOS 8 is 4.18, so there is no need to upgrade the kernel when installing k8s; upgrade the system environment as needed. If you download the latest version of CentOS 8, the system upgrade can be omitted.

Basic environment configuration

Host information

192.168.1.19 k8s-master01
192.168.1.18 k8s-master02
192.168.1.20 k8s-master03
192.168.1.88 k8s-master-lb
192.168.1.21 k8s-node01
192.168.1.22 k8s-node02

System environment

[root@k8s-master01 ~]# cat /etc/redhat-release
CentOS Linux release 8.0.1905 (Core)
[root@k8s-master01 ~]# uname -a
Linux k8s-master01 4.18.0-80.el8.x86_64 #1 SMP Tue Jun 4 09:19:46 UTC 2019 x86_64 GNU/Linux

Configure the hosts file on all nodes

[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.19 k8s-master01
192.168.1.18 k8s-master02
192.168.1.20 k8s-master03
192.168.1.88 k8s-master-lb
192.168.1.21 k8s-node01
192.168.1.22 k8s-node02

All nodes shut down firewalld, dnsmasq, NetworkManager and SELinux

systemctl disable --now firewalld
systemctl disable --now dnsmasq
setenforce 0

All nodes close the swap partition

[root@k8s-master01 ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0

All nodes synchronize time

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com

The Master01 node generates an ssh key

[root@k8s-master01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:6uz2kI+jcMJIUQWKqRcDRbvpVxhCW3Tmqn0NKS+lT3U root@k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
(randomart image omitted)
+----[SHA256]-----+

Master01 configures password-free login to the other nodes

[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

All nodes install basic tools

yum install wget jq psmisc vim net-tools yum-utils device-mapper-persistent-data lvm2 git -y

Master01 downloads the installation files

[root@k8s-master01 ~]# git clone https://github.com/dotbalo/k8s-ha-install.git
Cloning into 'k8s-ha-install'...
remote: Enumerating objects: 12, done.
remote: Counting objects: 100% (12/12), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 461 (delta 2), reused 5 (delta 1), pack-reused 449
Receiving objects: 100% (461/461), 19.52 MiB | 4.04 MiB/s, done.
Resolving deltas: 100% (163/163), done.

Switch to the 1.16.x branch

git checkout manual-installation-v1.16.x
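Note that swapoff -a only disables swap until the next reboot. The original does not show the persistence step; a common way to make it permanent (a sketch, verify the pattern against your own /etc/fstab before running it) is:

sed -ri '/[[:space:]]swap[[:space:]]/s/^/#/' /etc/fstab   # comment out the swap entry (hypothetical pattern)
echo 'vm.swappiness = 0' >> /etc/sysctl.d/k8s.conf        # persist the swappiness setting across reboots
sysctl --system                                           # reload all sysctl configuration files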

Basic component installation

Configure the Docker yum repository

[root@k8s-master01 k8s-ha-install]# curl https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2424  100  2424    0     0   3639      0 --:--:-- --:--:-- --:--:--  3645

[root@k8s-master01 k8s-ha-install]# yum makecache
CentOS-8 - AppStream                     559 kB/s | 6.3 MB     00:11
CentOS-8 - Base                          454 kB/s | 7.9 MB     00:17
CentOS-8 - Extras                        592  B/s | 2.1 kB     00:03
Docker CE Stable - x86_64                5.8 kB/s |  20 kB     00:03
Last metadata expiration check: 0:00:01 ago on Sat 02 Nov 2019 02:46:29 PM CST.
Metadata cache created.

All nodes install the new version of containerd

[root@k8s-master01 k8s-ha-install]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
--2019-11-02 15:00:20--  https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
Resolving download.docker.com (download.docker.com)... 13.225.103.32, 13.225.103.65, 13.225.103.10, ...
Connecting to download.docker.com (download.docker.com)|13.225.103.32|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 27119348 (26M) [binary/octet-stream]
Saving to: 'containerd.io-1.2.6-3.3.el7.x86_64.rpm'

containerd.io-1.2.6-3.3.el7.x86_64.rpm   100%[===================>]  25.86M  1.55MB/s    in 30s

2019-11-02 15:00:51 (887 KB/s) - 'containerd.io-1.2.6-3.3.el7.x86_64.rpm' saved [27119348/27119348]

[root@k8s-master01 k8s-ha-install]# yum -y install containerd.io-1.2.6-3.3.el7.x86_64.rpm
Last metadata expiration check: 0:14:35 ago on Sat 02 Nov 2019 02:46:29 PM CST.

Install the latest version of Docker on all nodes

[root@k8s-master01 k8s-ha-install]# yum install docker-ce -y

All nodes enable Docker and set it to start at boot

[root@k8s-master01 k8s-ha-install]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

[root@k8s-master01 k8s-ha-install]# docker version
Client: Docker Engine - Community
 Version:           19.03.4
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        9013bf583a
 Built:             Fri Oct 18 15:52:22 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.4
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       9013bf583a
  Built:            Fri Oct 18 15:50:54 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
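The kubelet configuration later in this article uses cgroupDriver: cgroupfs, so it is worth confirming that Docker is using the same cgroup driver (cgroupfs is Docker's default; this check is not in the original procedure):

docker info | grep -i 'cgroup driver'
# expected: Cgroup Driver: cgroupfs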

4. Installation of k8s components

Download the kubernetes 1.16.x installation package

https://dl.k8s.io/v1.16.2/kubernetes-server-linux-amd64.tar.gz

Download the etcd 3.3.15 installation package

[root@k8s-master01 ~] # wget https://github.com/etcd-io/etcd/releases/download/v3.3.15/etcd-v3.3.15-linux-amd64.tar.gz

Extract the kubernetes installation file

[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

Extract the etcd installation file

[root@k8s-master01 ~]# tar -zxvf etcd-v3.3.15-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.3.15-linux-amd64/etcd{,ctl}

Version view

[root@k8s-master01 ~]# etcd --version
etcd Version: 3.3.15

Git SHA: 94745a4ee

Go Version: go1.12.9

Go OS/Arch: linux/amd64

[root@k8s-master01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
(The connection error is expected at this stage: no kubeconfig exists yet, so kubectl can only report its client version.)

Send components to other nodes

MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'

for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/; done
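A quick way to confirm the binaries arrived on every node (a convenience check, not part of the original steps):

for NODE in $MasterNodes $WorkNodes; do echo $NODE; ssh $NODE '/usr/local/bin/kubelet --version'; done
# each node should print: Kubernetes v1.16.2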

CNI installation, download CNI components

wget https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz

All nodes create / opt/cni/bin directory

mkdir -p /opt/cni/bin

Extract the cni and send it to other nodes

tar -zxf cni-plugins-amd64-v0.7.5.tgz -C /opt/cni/bin

for NODE in $MasterNodes; do ssh $NODE 'mkdir -p /opt/cni/bin'; scp /opt/cni/bin/* $NODE:/opt/cni/bin/; done

for NODE in $WorkNodes; do ssh $NODE 'mkdir -p /opt/cni/bin'; scp /opt/cni/bin/* $NODE:/opt/cni/bin/; done

5. Generate certificates

Download the certificate-generation tools

Wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64"-O / usr/local/bin/cfssl

Wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64"-O / usr/local/bin/cfssljson

Chmod + x / usr/local/bin/cfssl / usr/local/bin/cfssljson

All Master nodes create etcd certificate directories

mkdir /etc/etcd/ssl -p

Master01 node generates etcd certificate

[root@k8s-master01 pki]# pwd
/root/k8s-ha-install/pki
[root@k8s-master01 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

2019-11-02 20:39:26 [INFO] generating a new CA key and certificate from CSR

2019-11-02 20:39:26 [INFO] generate received request

2019-11-02 20:39:26 [INFO] received CSR

2019-11-02 20:39:26 [INFO] generating key: rsa-2048

2019-11-02 20:39:27 [INFO] encoded CSR

2019-11-02 20:39:27 [INFO] signed certificate with serial number 603981780847439086338945186785919518080443607000
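Optionally, inspect the newly generated CA with openssl to confirm its subject and validity period (a sanity check added here, not in the original text):

openssl x509 -in /etc/etcd/ssl/etcd-ca.pem -noout -subject -dates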

[root@k8s-master01 pki]# cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.19,192.168.1.18,192.168.1.20 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

2019-11-02 20:48:51 [INFO] generate received request

2019-11-02 20:48:51 [INFO] received CSR

2019-11-02 20:48:51 [INFO] generating key: rsa-2048

2019-11-02 20:48:52 [INFO] encoded CSR

2019-11-02 20:48:52 [INFO] signed certificate with serial number 584188914871773268065873129590581350709445633441

Copy the certificates to the other nodes

[root@k8s-master01 pki]# MasterNodes='k8s-master02 k8s-master03'
[root@k8s-master01 pki]# WorkNodes='k8s-node01 k8s-node02'

[root@k8s-master01 pki]# for NODE in $MasterNodes; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done

etcd-ca-key.pem 100% 1675 424.0KB/s 00:00
etcd-ca.pem 100% 1363 334.4KB/s 00:00
etcd-key.pem 100% 1679 457.4KB/s 00:00
etcd.pem 100% 1505 254.5KB/s 00:00
etcd-ca-key.pem 100% 1675 308.3KB/s 00:00
etcd-ca.pem 100% 1363 479.0KB/s 00:00
etcd-key.pem 100% 1679 208.1KB/s 00:00
etcd.pem 100% 1505 398.1KB/s 00:00

Generate kubernetes certificate

All nodes create kubernetes-related directories

mkdir -p /etc/kubernetes/pki

[root@k8s-master01 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

2019-11-02 20:56:39 [INFO] generating a new CA key and certificate from CSR

2019-11-02 20:56:39 [INFO] generate received request

2019-11-02 20:56:39 [INFO] received CSR

2019-11-02 20:56:39 [INFO] generating key: rsa-2048

2019-11-02 20:56:39 [INFO] encoded CSR

2019-11-02 20:56:39 [INFO] signed certificate with serial number 204375274212114571715224985915861594178740016435

[root@k8s-master01 pki]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.1.88,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.1.19,192.168.1.18,192.168.1.20 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

2019-11-02 20:58:37 [INFO] generate received request

2019-11-02 20:58:37 [INFO] received CSR

2019-11-02 20:58:37 [INFO] generating key: rsa-2048

2019-11-02 20:58:37 [INFO] encoded CSR

2019-11-02 20:58:37 [INFO] signed certificate with serial number 530242414195398724956103410222477752498137496229

[root@k8s-master01 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

2019-11-02 20:59:28 [INFO] generating a new CA key and certificate from CSR

2019-11-02 20:59:28 [INFO] generate received request

2019-11-02 20:59:28 [INFO] received CSR

2019-11-02 20:59:28 [INFO] generating key: rsa-2048

2019-11-02 20:59:29 [INFO] encoded CSR

2019-11-02 20:59:29 [INFO] signed certificate with serial number 400365003875188640769336046625866020115935574187

[root@k8s-master01 pki] #

[root@k8s-master01 pki]# cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

2019-11-02 20:59:29 [INFO] generate received request

2019-11-02 20:59:29 [INFO] received CSR

2019-11-02 20:59:29 [INFO] generating key: rsa-2048

2019-11-02 20:59:31 [INFO] encoded CSR

2019-11-02 20:59:31 [INFO] signed certificate with serial number 714876805420065531759406103649716563367283489841

[root@k8s-master01 pki]# cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

2019-11-02 20:59:56 [INFO] generate received request

2019-11-02 20:59:56 [INFO] received CSR

2019-11-02 20:59:56 [INFO] generating key: rsa-2048

2019-11-02 20:59:56 [INFO] encoded CSR

2019-11-02 20:59:56 [INFO] signed certificate with serial number 365163931170718211641188236585237940196007847195

2019-11-02 20:59:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master01 pki] #

[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.88:8443 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Cluster "kubernetes" set.

[root@k8s-master01 pki] #

[root@k8s-master01 pki]# kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

User "system:kube-controller-manager" set.

[root@k8s-master01 pki] #

[root@k8s-master01 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Context "system:kube-controller-manager@kubernetes" created.

[root@k8s-master01 pki] #

[root@k8s-master01 pki]# kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Switched to context "system:kube-controller-manager@kubernetes".

[root@k8s-master01 pki]# cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

2019-11-02 21:00:48 [INFO] generate received request

2019-11-02 21:00:48 [INFO] received CSR

2019-11-02 21:00:48 [INFO] generating key: rsa-2048

2019-11-02 21:00:49 [INFO] encoded CSR

2019-11-02 21:00:49 [INFO] signed certificate with serial number 536666325566380578973549296433116161533422391008

2019-11-02 21:00:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.88:8443 \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Cluster "kubernetes" set.

[root@k8s-master01 pki] #

[root@k8s-master01 pki]# kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

User "system:kube-scheduler" set.

[root@k8s-master01 pki] #

[root@k8s-master01 pki]# kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Context "system:kube-scheduler@kubernetes" created.

[root@k8s-master01 pki] #

[root@k8s-master01 pki]# kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Switched to context "system:kube-scheduler@kubernetes".

[root@k8s-master01 pki]# cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

2019-11-02 21:01:56 [INFO] generate received request

2019-11-02 21:01:56 [INFO] received CSR

2019-11-02 21:01:56 [INFO] generating key: rsa-2048

2019-11-02 21:01:56 [INFO] encoded CSR

2019-11-02 21:01:56 [INFO] signed certificate with serial number 596648060489336426936623033491721470152303339581

2019-11-02 21:01:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master01 pki]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.88:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig

Cluster "kubernetes" set.

[root@k8s-master01 pki] #

[root@k8s-master01 pki]# kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig

User "kubernetes-admin" set.

[root@k8s-master01 pki] #

[root@k8s-master01 pki]# kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig

Context "kubernetes-admin@kubernetes" created.

[root@k8s-master01 pki] #

[root@k8s-master01 pki]# kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig

Switched to context "kubernetes-admin@kubernetes".

[root@k8s-master01 pki]# for NODE in k8s-master01 k8s-master02 k8s-master03; do
  \cp kubelet-csr.json kubelet-$NODE-csr.json;
  sed -i "s/\$NODE/$NODE/g" kubelet-$NODE-csr.json;
  cfssl gencert \
    -ca=/etc/kubernetes/pki/ca.pem \
    -ca-key=/etc/kubernetes/pki/ca-key.pem \
    -config=ca-config.json \
    -hostname=$NODE \
    -profile=kubernetes \
    kubelet-$NODE-csr.json | cfssljson -bare /etc/kubernetes/pki/kubelet-$NODE;
  rm -f kubelet-$NODE-csr.json
done

2019-11-02 21:03:19 [INFO] generate received request

2019-11-02 21:03:19 [INFO] received CSR

2019-11-02 21:03:19 [INFO] generating key: rsa-2048

2019-11-02 21:03:20 [INFO] encoded CSR

2019-11-02 21:03:20 [INFO] signed certificate with serial number 537071233304209996689594645648717291884343909273

2019-11-02 21:03:20 [INFO] generate received request

2019-11-02 21:03:20 [INFO] received CSR

2019-11-02 21:03:20 [INFO] generating key: rsa-2048

2019-11-02 21:03:20 [INFO] encoded CSR

2019-11-02 21:03:20 [INFO] signed certificate with serial number 93675644052244007817881327407761955585316699106

2019-11-02 21:03:20 [INFO] generate received request

2019-11-02 21:03:20 [INFO] received CSR

2019-11-02 21:03:20 [INFO] generating key: rsa-2048

2019-11-02 21:03:21 [INFO] encoded CSR

2019-11-02 21:03:21 [INFO] signed certificate with serial number 369432006714320793581219965286097236027135622831

[root@k8s-master01 pki]# for NODE in k8s-master01 k8s-master02 k8s-master03; do
  ssh $NODE "mkdir -p /etc/kubernetes/pki"
  scp /etc/kubernetes/pki/ca.pem $NODE:/etc/kubernetes/pki/ca.pem
  scp /etc/kubernetes/pki/kubelet-$NODE-key.pem $NODE:/etc/kubernetes/pki/kubelet-key.pem
  scp /etc/kubernetes/pki/kubelet-$NODE.pem $NODE:/etc/kubernetes/pki/kubelet.pem
  rm -f /etc/kubernetes/pki/kubelet-$NODE-key.pem /etc/kubernetes/pki/kubelet-$NODE.pem
done

ca.pem 100% 1407 494.1KB/s 00:00
kubelet-k8s-master01-key.pem 100% 1679 549.9KB/s 00:00
kubelet-k8s-master01.pem 100% 1501 535.5KB/s 00:00
ca.pem 100% 1407 358.2KB/s 00:00
kubelet-k8s-master02-key.pem 100% 1675 539.8KB/s 00:00
kubelet-k8s-master02.pem 100% 1501 375.5KB/s 00:00
ca.pem 100% 1407 464.7KB/s 00:00
kubelet-k8s-master03-key.pem 100% 1679 566.2KB/s 00:00
kubelet-k8s-master03.pem 100% 1501 515.1KB/s 00:00

[root@k8s-master01 pki]# for NODE in k8s-master01 k8s-master02 k8s-master03; do
  ssh $NODE "cd /etc/kubernetes/pki && \
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/pki/ca.pem \
      --embed-certs=true \
      --server=https://192.168.1.88:8443 \
      --kubeconfig=/etc/kubernetes/kubelet.kubeconfig && \
    kubectl config set-credentials system:node:${NODE} \
      --client-certificate=/etc/kubernetes/pki/kubelet.pem \
      --client-key=/etc/kubernetes/pki/kubelet-key.pem \
      --embed-certs=true \
      --kubeconfig=/etc/kubernetes/kubelet.kubeconfig && \
    kubectl config set-context system:node:${NODE}@kubernetes \
      --cluster=kubernetes \
      --user=system:node:${NODE} \
      --kubeconfig=/etc/kubernetes/kubelet.kubeconfig && \
    kubectl config use-context system:node:${NODE}@kubernetes \
      --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
done

Cluster "kubernetes" set.

User "system:node:k8s-master01" set.

Context "system:node:k8s-master01@kubernetes" created.

Switched to context "system:node:k8s-master01@kubernetes".

Cluster "kubernetes" set.

User "system:node:k8s-master02" set.

Context "system:node:k8s-master02@kubernetes" created.

Switched to context "system:node:k8s-master02@kubernetes".

Cluster "kubernetes" set.

User "system:node:k8s-master03" set.

Context "system:node:k8s-master03@kubernetes" created.

Switched to context "system:node:k8s-master03@kubernetes".

Create ServiceAccount Key

[root@k8s-master01 pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
......+++++
......+++++
e is 65537 (0x010001)
[root@k8s-master01 pki]# openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
writing RSA key

[root@k8s-master01 pki]# for NODE in k8s-master02 k8s-master03; do
  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
    scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}
  done
  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
  done
done

admin.csr 100% 1025 303.5KB/s 00:00
admin-key.pem 100% 1675 544.2KB/s 00:00
admin.pem 100% 1440 447.6KB/s 00:00
apiserver.csr 100% 1029 251.9KB/s 00:00
apiserver-key.pem 100% 1679 579.5KB/s 00:00
apiserver.pem 100% 1692 592.0KB/s 00:00
ca.csr 100% 1025 372.6KB/s 00:00
ca-key.pem 100% 1679 535.5KB/s 00:00
ca.pem 100% 1407 496.0KB/s 00:00
controller-manager.csr 100% 1082 273.4KB/s 00:00
controller-manager-key.pem 100% 1679 552.8KB/s 00:00
controller-manager.pem 100% 1497 246.7KB/s 00:00
front-proxy-ca.csr 100% 891 322.2KB/s 00:00
front-proxy-ca-key.pem 100% 1679 408.7KB/s 00:00
front-proxy-ca.pem 100% 1143 436.9KB/s 00:00
front-proxy-client.csr 100% 903 283.5KB/s 00:00
front-proxy-client-key.pem 100% 1679 265.2KB/s 00:00
front-proxy-client.pem 100% 1188 358.8KB/s 00:00
kubelet-k8s-master01.csr 100% 1050 235.6KB/s 00:00
kubelet-k8s-master02.csr 100% 1050 385.3KB/s 00:00
kubelet-k8s-master03.csr 100% 1050 319.8KB/s 00:00
kubelet-key.pem 100% 1679 562.2KB/s 00:00
kubelet.pem 100% 1501 349.4KB/s 00:00
sa.key 100% 1679 455.2KB/s 00:00
sa.pub 100% 451 152.3KB/s 00:00
scheduler.csr 100% 1058 402.0KB/s 00:00
scheduler-key.pem 100% 1675 291.8KB/s 00:00
scheduler.pem 100% 1472 426.2KB/s 00:00
admin.kubeconfig 100% 6432 1.5MB/s 00:00
controller-manager.kubeconfig 100% 6568 1.1MB/s 00:00
scheduler.kubeconfig 100% 6496 601.6KB/s 00:00
admin.csr 100% 1025 368.2KB/s 00:00
admin-key.pem 100% 1675 542.3KB/s 00:00
admin.pem 100% 1440 247.7KB/s 00:00
apiserver.csr 100% 1029 212.4KB/s 00:00
apiserver-key.pem 100% 1679 530.8KB/s 00:00
apiserver.pem 100% 1692 557.3KB/s 00:00
ca.csr 100% 1025 266.4KB/s 00:00
ca-key.pem 100% 1679 408.6KB/s 00:00
ca.pem 100% 1407 404.6KB/s 00:00
controller-manager.csr 100% 1082 248.9KB/s 00:00
controller-manager-key.pem 100% 1679 495.4KB/s 00:00
controller-manager.pem 100% 1497 262.1KB/s 00:00
front-proxy-ca.csr 100% 891 140.9KB/s 00:00
front-proxy-ca-key.pem 100% 1679 418.7KB/s 00:00
front-proxy-ca.pem 100% 1143 362.3KB/s 00:00
front-proxy-client.csr 100% 903 305.6KB/s 00:00
front-proxy-client-key.pem 100% 1679 342.7KB/s 00:00
front-proxy-client.pem 100% 1188 235.1KB/s 00:00
kubelet-k8s-master01.csr 100% 1050 215.1KB/s 00:00
kubelet-k8s-master02.csr 100% 1050 354.0KB/s 00:00
kubelet-k8s-master03.csr 100% 1050 243.0KB/s 00:00
kubelet-key.pem 100% 1679 591.3KB/s 00:00
kubelet.pem 100% 1501 557.0KB/s 00:00
sa.key 100% 1679 387.5KB/s 00:00
sa.pub 100% 451 154.0KB/s 00:00
scheduler.csr 100% 1058 191.5KB/s 00:00
scheduler-key.pem 100% 1675 426.3KB/s 00:00
scheduler.pem 100% 1472 561.3KB/s 00:00
admin.kubeconfig 100% 6432 1.3MB/s 00:00
controller-manager.kubeconfig 100% 6568 1.7MB/s 00:00
scheduler.kubeconfig 100% 6496 1.3MB/s 00:00

6. Kubernetes system component configuration

The etcd configuration (/etc/etcd/etcd.config.yml) is roughly the same on every Master node; be careful to change the hostname and IP addresses in each Master node's copy.

name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.19:2380'
listen-client-urls: 'https://192.168.1.19:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.19:2380'
advertise-client-urls: 'https://192.168.1.19:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.19:2380,k8s-master02=https://192.168.1.18:2380,k8s-master03=https://192.168.1.20:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-output: default
force-new-cluster: false
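The file above is Master01's. On Master02 and Master03, the name, listen-peer-urls, listen-client-urls, initial-advertise-peer-urls and advertise-client-urls fields must carry that node's own hostname and IP, while initial-cluster stays identical. One way to adjust a copy of the file, shown for k8s-master02 as an assumption, is:

# on k8s-master02, after copying /etc/etcd/etcd.config.yml from master01
sed -i "s/name: 'k8s-master01'/name: 'k8s-master02'/" /etc/etcd/etcd.config.yml
sed -i "/^listen-peer-urls/s/192.168.1.19/192.168.1.18/" /etc/etcd/etcd.config.yml
sed -i "/^listen-client-urls/s/192.168.1.19/192.168.1.18/" /etc/etcd/etcd.config.yml
sed -i "/^initial-advertise-peer-urls/s/192.168.1.19/192.168.1.18/" /etc/etcd/etcd.config.yml
sed -i "/^advertise-client-urls/s/192.168.1.19/192.168.1.18/" /etc/etcd/etcd.config.yml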

All Master nodes create the etcd service and start it

[root@k8s-master01 pki]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

[root@k8s-master01 pki]# mkdir /etc/kubernetes/pki/etcd
[root@k8s-master01 pki]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now etcd
Created symlink /etc/systemd/system/etcd3.service → /usr/lib/systemd/system/etcd.service.
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /usr/lib/systemd/system/etcd.service.
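Once etcd is enabled on all three Masters, cluster health can be checked with etcdctl over the v3 API, reusing the certificate paths from above (a verification step added here, not shown in the original):

export ETCDCTL_API=3
etcdctl --endpoints="https://192.168.1.19:2379,https://192.168.1.18:2379,https://192.168.1.20:2379" \
  --cacert=/etc/etcd/ssl/etcd-ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health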

Highly available configuration

All Master nodes install keepalived and haproxy

yum install keepalived haproxy -y

HAProxy configuration

[root@k8s-master01 pki]# cat /etc/haproxy/haproxy.cfg
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

listen stats
  bind *:8006
  mode http
  stats enable
  stats hide-version
  stats uri /stats
  stats refresh 30s
  stats realm Haproxy\ Statistics
  stats auth admin:admin

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 192.168.1.19:6443 check
  server k8s-master02 192.168.1.18:6443 check
  server k8s-master03 192.168.1.20:6443 check
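Before starting HAProxy, the configuration can be syntax-checked with its built-in check mode (an optional step, not in the original):

haproxy -c -f /etc/haproxy/haproxy.cfg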

KeepAlived configuration

! Configuration File for keepalived
global_defs {
  router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 2
  weight -5
  fall 3
  rise 2
}
vrrp_instance VI_1 {
  state MASTER
  interface ens160
  mcast_src_ip 192.168.1.19
  virtual_router_id 51
  priority 100
  advert_int 2
  authentication {
    auth_type PASS
    auth_pass K8SHA_KA_AUTH
  }
  virtual_ipaddress {
    192.168.1.88
  }
  track_script {
    chk_apiserver
  }
}
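The file above is the MASTER instance for Master01 (priority 100). The original shows only one node's configuration; on the other two Masters one would typically change the following values (the exact priorities are an assumption):

# in /etc/keepalived/keepalived.conf on k8s-master02 (and analogously on k8s-master03):
#   state BACKUP                 # only one node should be MASTER
#   mcast_src_ip 192.168.1.18    # the node's own IP
#   priority 99                  # lower than the MASTER's 100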

Health check configuration

[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Start HAProxy and KeepAlived

[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived

VIP test

[root@k8s-master01 pki] # ping 192.168.1.88

PING 192.168.1.88 (192.168.1.88) 56 (84) bytes of data.

64 bytes from 192.168.1.88: icmp_seq=1 ttl=64 time=1.39 ms

64 bytes from 192.168.1.88: icmp_seq=2 ttl=64 time=2.46 ms

64 bytes from 192.168.1.88: icmp_seq=3 ttl=64 time=1.68 ms

64 bytes from 192.168.1.88: icmp_seq=4 ttl=64 time=1.08 ms

Kubernetes component configuration

All nodes create related directories

[root@k8s-master01 pki]# mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

All Master nodes create kube-apiserver service

[root@k8s-master01 pki]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --v=2 \
  --logtostderr=true \
  --allow-privileged=true \
  --bind-address=0.0.0.0 \
  --secure-port=6443 \
  --insecure-port=0 \
  --advertise-address=192.168.1.88 \
  --service-cluster-ip-range=10.96.0.0/12 \
  --service-node-port-range=30000-32767 \
  --etcd-servers=https://192.168.1.19:2379,https://192.168.1.18:2379,https://192.168.1.20:2379 \
  --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --client-ca-file=/etc/kubernetes/pki/ca.pem \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
  --authorization-mode=Node,RBAC \
  --enable-bootstrap-token-auth=true \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
  --requestheader-allowed-names=aggregator \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-username-headers=X-Remote-User \
  --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

[root@k8s-master01 pki]# vim /etc/kubernetes/token.csv
[root@k8s-master01 pki]# cat !$
cat /etc/kubernetes/token.csv
d7d356746b508a1a478e49968fba79477,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
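The token.csv format is token,user,uid,"group", and this file is what kube-apiserver's --token-auth-file flag above points at. If you want a fresh random token instead of reusing the one shown, the Kubernetes documentation suggests generating one like this:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '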

All Master nodes enable kube-apiserver

[root@k8s-master01 pki]# systemctl enable --now kube-apiserver

All Master nodes configure the kube-controller-manager service

[root@k8s-master01 pki]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --v=2 \
  --logtostderr=true \
  --address=127.0.0.1 \
  --root-ca-file=/etc/kubernetes/pki/ca.pem \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
  --leader-elect=true \
  --use-service-account-credentials=true \
  --node-monitor-grace-period=40s \
  --node-monitor-period=5s \
  --pod-eviction-timeout=2m0s \
  --controllers=*,bootstrapsigner,tokencleaner \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
  --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

All Master nodes start kube-controller-manager

[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now kube-controller-manager
Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /usr/lib/systemd/system/kube-controller-manager.service.

All Master nodes configure the kube-scheduler service

[root@k8s-master01 pki]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --v=2 \
  --logtostderr=true \
  --address=127.0.0.1 \
  --leader-elect=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now kube-scheduler
Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /usr/lib/systemd/system/kube-scheduler.service.
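All three control-plane services are now running. The original does not show it explicitly, but to use kubectl on Master01 without passing --kubeconfig every time, copy the admin kubeconfig into place and check component health:

mkdir -p ~/.kube
cp /etc/kubernetes/admin.kubeconfig ~/.kube/config
kubectl get cs   # scheduler, controller-manager and the etcd members should report Healthy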

7. TLS Bootstrapping configuration

Create the bootstrap kubeconfig on Master01 (the token c8ad9c.2e4d610cf3e7426e used below must match the token defined in bootstrap.secret.yaml)

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.88:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

[root@k8s-master01 bootstrap]# pwd
/root/k8s-ha-install/bootstrap

[root@k8s-master01 bootstrap]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created

8. Node configuration

Copy the certificate to the Node node

[root@k8s-master01 bootstrap]# for NODE in k8s-node01 k8s-node02; do
  ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
  for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
    scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
  done
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done

etcd-ca.pem 100% 1363 314.0KB/s 00:00
etcd.pem 100% 1505 429.1KB/s 00:00
etcd-key.pem 100% 1679 361.9KB/s 00:00
ca.pem 100% 1407 459.5KB/s 00:00
ca-key.pem 100% 1679 475.2KB/s 00:00
front-proxy-ca.pem 100% 1143 214.5KB/s 00:00
bootstrap-kubelet.kubeconfig 100% 2291 695.1KB/s 00:00
etcd-ca.pem 100% 1363 325.5KB/s 00:00
etcd.pem 100% 1505 301.2KB/s 00:00
etcd-key.pem 100% 1679 260.9KB/s 00:00
ca.pem 100% 1407 420.8KB/s 00:00
ca-key.pem 100% 1679 398.0KB/s 00:00
front-proxy-ca.pem 100% 1143 224.9KB/s 00:00
bootstrap-kubelet.kubeconfig 100% 2291 685.4KB/s 00:00

All Node nodes create related directories

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

All nodes configure the kubelet service (Master nodes that will not run Pods do not need it)

[root@k8s-master01 bootstrap]# vim /usr/lib/systemd/system/kubelet.service
[root@k8s-master01 bootstrap]# cat !$
cat /usr/lib/systemd/system/kubelet.service

[Unit]

Description=Kubernetes Kubelet

Documentation= https://github.com/kubernetes/kubernetes

After=docker.service

Requires=docker.service

[Service]

ExecStart=/usr/local/bin/kubelet

Restart=always

StartLimitInterval=0

RestartSec=10

[Install]

WantedBy=multi-user.target

[root@k8s-master01 bootstrap]# vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[root@k8s-master01 bootstrap]# cat !$
cat /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

[root@k8s-master01 bootstrap]# vim /etc/kubernetes/kubelet-conf.yml
[root@k8s-master01 bootstrap]# cat !$
cat /etc/kubernetes/kubelet-conf.yml

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

Start kubelet on all nodes

systemctl daemon-reload
systemctl enable --now kubelet
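With the bootstrap objects created earlier, each kubelet requests and rotates its client certificate automatically; the requests and their approval state can be observed from Master01 (an optional check, not in the original):

kubectl get csr   # each node's CSR should move to Approved,Issued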

View cluster status

[root@k8s-master01 bootstrap]# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   <none>   13m     v1.16.2
k8s-master02   NotReady   <none>   11m     v1.16.2
k8s-master03   NotReady   <none>   10m     v1.16.2
k8s-node01     NotReady   <none>   9m16s   v1.16.2
k8s-node02     NotReady   <none>   53s     v1.16.2

Kube-Proxy configuration

kubectl -n kube-system create serviceaccount kube-proxy

kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy

SECRET=$(kubectl -n kube-system get sa/kube-proxy \
  --output=jsonpath='{.secrets[0].name}')

JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
  --output=jsonpath='{.data.token}' | base64 -d)

PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.88:8443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig

kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

Distribute the Service files

[root@k8s-master01 k8s-ha-install]# for NODE in k8s-master01 k8s-master02 k8s-master03; do
  scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
  scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
  scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done

kube-proxy.kubeconfig 100% 3112 973.8KB/s 00:00
kube-proxy.conf 100% 813 196.5KB/s 00:00
kube-proxy.service 100% 288 115.5KB/s 00:00
kube-proxy.kubeconfig 100% 3112 580.5KB/s 00:00
kube-proxy.conf 100% 813 160.1KB/s 00:00
kube-proxy.service 100% 288 92.4KB/s 00:00
kube-proxy.kubeconfig 100% 3112 703.1KB/s 00:00
kube-proxy.conf 100% 813 244.1KB/s 00:00
kube-proxy.service 100% 288 54.6KB/s 00:00

[root@k8s-master01 k8s-ha-install]# for NODE in k8s-node01 k8s-node02; do
  scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
  scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
  scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done

kube-proxy.kubeconfig 100% 3112 604.6KB/s 00:00
kube-proxy.conf 100% 813 197.6KB/s 00:00
kube-proxy.service 100% 288 95.5KB/s 00:00
kube-proxy.kubeconfig 100% 3112 749.3KB/s 00:00
kube-proxy.conf 100% 813 206.2KB/s 00:00
kube-proxy.service 100% 288 69.9KB/s 00:00

All nodes start kube-proxy

[root@k8s-master01 k8s-ha-install]# systemctl daemon-reload
[root@k8s-master01 k8s-ha-install]# systemctl enable --now kube-proxy
Created symlink /etc/systemd/system/multi-user.target.wants/kube-proxy.service → /usr/lib/systemd/system/kube-proxy.service.

9. Install Calico (the Pod CIDR below must match the --cluster-cidr=10.244.0.0/16 passed to kube-controller-manager)

[root@k8s-master01 ~]# cd /root/k8s-ha-install/Calico/
[root@k8s-master01 Calico]# POD_CIDR="10.244.0.0/16" \
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
[root@k8s-master01 Calico]# kubectl create -f calico.yaml

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

View Calico status

[root@k8s-master01 Calico]# kubectl get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6d85fdfbd8-2jtj9   1/1     Running   0          3m38s
calico-node-5t9kj                          1/1     Running   0          3m38s
calico-node-9ftns                          1/1     Running   0          3m38s
calico-node-b6rsl                          1/1     Running   0          3m38s
calico-node-hfqrd                          1/1     Running   0          3m38s
calico-node-lpcmp                          1/1     Running   0          3m38s

View cluster status

[root@k8s-master01 Calico] # kubectl cluster-info

Kubernetes master is running at https://192.168.1.88:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@k8s-master01 Calico]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   12h   v1.16.2
k8s-master02   Ready    <none>   13h   v1.16.2
k8s-master03   Ready    <none>   12h   v1.16.2
k8s-node01     Ready    <none>   12h   v1.16.2
k8s-node02     Ready    <none>   11h   v1.16.2
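As a final smoke test (not part of the original text), create a small deployment and confirm that the Pod is scheduled and receives a Calico-assigned address:

kubectl create deployment nginx --image=nginx
kubectl get po -o wide   # the Pod should be Running with an IP from 10.244.0.0/16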

At this point the cluster is installed; for other components, refer to the installation documentation for other versions on this blog.

Reference: the author's practical Kubernetes guide, 《再也不踩坑的Kubernetes实战指南》.

Original link: https://www.cnblogs.com/dukuan/p/11780729.html
