
How to Install a Kubernetes 1.16.0 High-Availability Cluster from Binaries on CentOS 7.3


This article shows how to install a Kubernetes 1.16.0 high-availability cluster from binaries on CentOS 7.3. The content is concise and easy to follow, and I hope you get something out of the detailed walkthrough.

Server Planning:

At least three servers are needed for a high-availability cluster, each with 2 CPUs and 4 GB RAM or more.

All master and node machines have Docker installed, version 18 or above.

VIP 172.30.2.60

172.30.0.109 k8s-master1 nginx keepalived

172.30.0.89 k8s-master2 nginx keepalived

172.30.0.81 k8s-node1

Binary Kubernetes installation path: /opt/kubernetes/{ssl,cfg,bin,logs}, which stores keys, configuration files, executables, and logs respectively.

Binary etcd installation path: /opt/etcd/{ssl,cfg,bin}, which stores keys, configuration files, and executables respectively.
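The directory layout can be created up front on every machine, and the Docker version checked at the same time; for example:

# mkdir -p /opt/kubernetes/{ssl,cfg,bin,logs} /opt/etcd/{ssl,cfg,bin}

# docker --version   # should report version 18.x or later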

1. System initialization

1. Turn off the firewall:

# systemctl stop firewalld

# systemctl disable firewalld

2. Turn off SELinux:

# setenforce 0 # temporary

# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config # permanent

3. Turn off swap:

# swapoff -a # temporary

# vim /etc/fstab # permanent (comment out the swap line)
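If you prefer not to edit /etc/fstab by hand, one way to comment out the swap entry is:

# sed -ri 's/.*swap.*/#&/' /etc/fstab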

4. Synchronize system time:

# ntpdate time.windows.com

5. Add hosts:

# vim /etc/hosts

172.30.0.109 k8s-master1

172.30.0.81 k8s-master2

172.30.0.89 k8s-node1

6. Modify the hostname:

# hostnamectl set-hostname k8s-master1

2. Etcd cluster

1. Install cfssl tools

# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

# mv cfssl_linux-amd64 /usr/local/bin/cfssl

# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2. Generate etcd certificates

① Edit the server-csr.json certificate signing request; its hosts field must contain the IPs of all etcd nodes:

# vi server-csr.json

{
    "CN": "etcd",
    "hosts": [
        "etcd01 Node IP",
        "etcd02 Node IP",
        "etcd03 Node IP"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

② Generate the custom CA initialization files:

# vim ca-config.json

{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "www": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

# vim ca-csr.json

{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

Once the etcd key initialization file is ready, the key can be generated.

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

The generated keys are placed in the /opt/etcd/ssl directory.

The etcd binaries (etcd and etcdctl) are placed under /opt/etcd/bin.
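The article does not show how the binaries get there; as a sketch, assuming the etcd v3.3.13 release tarball (adjust the file name to whatever version you download):

# tar zxvf etcd-v3.3.13-linux-amd64.tar.gz

# cp etcd-v3.3.13-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/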

The etcd configuration file is placed under /opt/etcd/cfg, as follows:

# cat /opt/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.30.0.81:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.30.0.81:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.30.0.81:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.30.0.81:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.30.2.10:2380,etcd02=https://172.30.0.81:2380,etcd03=https://172.30.0.89:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Place etcd.service under /usr/lib/systemd/system/:

# cat /usr/lib/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

[Service]

Type=notify

EnvironmentFile=-/opt/etcd/cfg/etcd

ExecStart=/opt/etcd/bin/etcd \
  --name=${ETCD_NAME} \
  --data-dir=${ETCD_DATA_DIR} \
  --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --initial-cluster=${ETCD_INITIAL_CLUSTER} \
  --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster-state=new \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

Copy the files to all three etcd nodes:

# scp -r /opt/etcd root@<etcd node IP>:/opt

# scp /usr/lib/systemd/system/etcd.service root@<etcd node IP>:/usr/lib/systemd/system

On each etcd node, adjust ETCD_NAME and the IP addresses in the copied configuration file to match that host, as shown below.
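For example, on the etcd03 host (172.30.0.89 in the cluster list above), the host-specific lines would read as follows, while ETCD_INITIAL_CLUSTER, the token, and the state stay the same:

ETCD_NAME="etcd03"
ETCD_LISTEN_PEER_URLS="https://172.30.0.89:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.30.0.89:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.30.0.89:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.30.0.89:2379"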

Start etcd on every node; the etcd installation is then complete.

# systemctl start etcd && systemctl enable etcd

Check the health status of etcd nodes

# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.30.2.10:2379,https://172.30.0.81:2379,https://172.30.0.89:2379" cluster-health

member 37f20611ff3d9209 is healthy: got healthy result from https://172.30.2.10:2379

member b10f0bac3883a232 is healthy: got healthy result from https://172.30.0.81:2379

member b46624837acedac9 is healthy: got healthy result from https://172.30.0.89:2379

cluster is healthy

3. Deploy the K8s cluster: master nodes

① Generate a new custom CA (separate from the etcd CA) and use it to issue the apiserver, kube-proxy, and kubectl admin certificates.

# vim ca-config.json

{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

# vim ca-csr.json

{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "K8s",
            "OU": "System"
        }
    ]
}

② Generate the kube-proxy key initialization file:

# cat kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "K8s",
            "OU": "System"
        }
    ]
}

③ Generate the apiserver key initialization file. Note that the hosts field must include the IPs of all master nodes and every address used to reach the apiserver, including the nginx servers and the nginx VIP; otherwise the key has to be regenerated.

# vim server-csr.json

{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "172.30.2.60",
        "172.30.0.109",
        "172.30.0.81",
        "172.30.2.10",
        "172.30.0.89"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "K8s",
            "OU": "System"
        }
    ]
}

④ Admin key, used by remote kubectl clients:

# vim admin-csr.json

{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}

⑤ Generate the keys:

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

Synchronize the *.pem keys to /opt/kubernetes/ssl on all nodes (you can also copy only the specific keys each node needs). The kubelet key is generated automatically when kubelet is deployed later.

Generate the admin kubeconfig file that clients use to access the k8s cluster:

# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.30.0.109:6443 --kubeconfig=/root/.kube/kubectl.kubeconfig

# kubectl config set-credentials kube-admin --client-certificate=/opt/kubernetes/ssl/admin.pem --client-key=/opt/kubernetes/ssl/admin-key.pem --embed-certs=true --kubeconfig=/root/.kube/kubectl.kubeconfig

# kubectl config set-context kube-admin@kubernetes --cluster=kubernetes --user=kube-admin --kubeconfig=/root/.kube/kubectl.kubeconfig

# kubectl config use-context kube-admin@kubernetes --kubeconfig=/root/.kube/kubectl.kubeconfig

# mv /root/.kube/{kubectl.kubeconfig,config}
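The resulting kubeconfig can be checked without contacting the cluster yet; for example:

# kubectl config current-context   # should print kube-admin@kubernetes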

⑥ Download the master binary package and place the binaries under /opt/kubernetes/bin.
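As a sketch, assuming the v1.16.0 server tarball (kubernetes-server-linux-amd64.tar.gz) has been downloaded to the current directory:

# tar zxvf kubernetes-server-linux-amd64.tar.gz

# cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /opt/kubernetes/bin/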

⑦ Configuration files and service unit files

kube-apiserver. Note: when deploying the other master, the IP addresses must be changed.

# vim /opt/kubernetes/cfg/kube-apiserver.conf

KUBE_APISERVER_OPTS="--logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/logs \
  --etcd-servers=https://172.30.2.10:2379,https://172.30.0.81:2379,https://172.30.0.89:2379 \
  --bind-address=172.30.0.109 \
  --secure-port=6443 \
  --advertise-address=172.30.0.109 \
  --allow-privileged=true \
  --service-cluster-ip-range=10.0.0.0/24 \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
  --authorization-mode=RBAC,Node \
  --enable-bootstrap-token-auth=true \
  --token-auth-file=/opt/kubernetes/cfg/token.csv \
  --service-node-port-range=30000-32767 \
  --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
  --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
  --tls-cert-file=/opt/kubernetes/ssl/server.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/etcd/ssl/ca.pem \
  --etcd-certfile=/opt/etcd/ssl/server.pem \
  --etcd-keyfile=/opt/etcd/ssl/server-key.pem \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

# cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf

ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

kube-controller-manager

# vim /opt/kubernetes/cfg/kube-controller-manager.conf

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/logs \
  --leader-elect=true \
  --master=127.0.0.1:8080 \
  --address=127.0.0.1 \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 \
  --service-cluster-ip-range=10.0.0.0/24 \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --experimental-cluster-signing-duration=87600h0m0s"

# vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf

ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

kube-scheduler

# vim /opt/kubernetes/cfg/kube-scheduler.conf

KUBE_SCHEDULER_OPTS="--logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/logs \
  --leader-elect \
  --master=127.0.0.1:8080 \
  --address=127.0.0.1"

# vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf

ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

Generate the token file used by the nodes to authenticate to the apiserver. Note the token value: it must be the same on the master and the nodes.

# cat /opt/kubernetes/cfg/token.csv

c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"

Format: token, user, uid, user group

You can also generate a token of your own as a replacement:

# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
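For example, a new token.csv can be assembled in one step (a sketch; keep the user, uid, and group exactly as in the format above):

# TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

# echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv

Remember to put the same token into bootstrap.kubeconfig on the nodes later.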

Authorize kubelet-bootstrap so that the kubelet on each node can access the apiserver:

# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Start the master components and watch the logs under /opt/kubernetes/logs:

# systemctl start kube-apiserver

# systemctl start kube-controller-manager

# systemctl start kube-scheduler

# systemctl enable kube-apiserver

# systemctl enable kube-controller-manager

# systemctl enable kube-scheduler
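Once the three components are running, their health can be checked from the master:

# kubectl get cs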

Deploy the second master node, 172.30.0.81, in the same way; only the IP addresses in the apiserver configuration file differ.

4. Deploy the K8s cluster: node nodes

① Node key preparation

Synchronize the keys to /opt/kubernetes/ssl on the node ahead of time.
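A minimal sketch, assuming the node only needs the CA and the kube-proxy certificate pair (the kubelet certificate is issued later through bootstrap):

# scp /opt/kubernetes/ssl/{ca.pem,kube-proxy.pem,kube-proxy-key.pem} root@<node IP>:/opt/kubernetes/ssl/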

② Configure the node's kubelet and kube-proxy files

kubelet. Note: the kubelet on each node needs its own hostname-override setting.

# vim kubelet.conf

KUBELET_OPTS="--logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/logs \
  --hostname-override=k8s-node1 \
  --network-plugin=cni \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --config=/opt/kubernetes/cfg/kubelet-config.yml \
  --cert-dir=/opt/kubernetes/ssl \
  --pod-infra-container-image=lizhenliang/pause-amd64:3.0"

The bootstrap.kubeconfig file is the authentication file kubelet uses to talk to the apiserver; its token must match the token file on the master node (a generation sketch using kubectl config follows the file).

# vim bootstrap.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://172.30.0.109:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: c47ffb939f5ca36231d9e3121a252940
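The same file can also be produced with kubectl config instead of being written by hand; a sketch using the values above:

# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.30.0.109:6443 --kubeconfig=bootstrap.kubeconfig

# kubectl config set-credentials kubelet-bootstrap --token=c47ffb939f5ca36231d9e3121a252940 --kubeconfig=bootstrap.kubeconfig

# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig

# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig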

# vim kubelet-config.yml

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110

# vim /usr/lib/systemd/system/kubelet.service

[Unit]

Description=Kubernetes Kubelet

After=docker.service

Before=docker.service

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf

ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

kube-proxy

# vim kube-proxy.conf

KUBE_PROXY_OPTS="--logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/logs \
  --config=/opt/kubernetes/cfg/kube-proxy-config.yml"

# vim kube-proxy-config.yml

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node1
clusterCIDR: 10.0.0.0/24
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true

kube-proxy.kubeconfig is the kube-proxy authentication file:

# vim kube-proxy.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://172.30.0.109:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem

# vim /usr/lib/systemd/system/kube-proxy.service

[Unit]

Description=Kubernetes Proxy

After=network.target

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf

ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

/opt/kubernetes/cfg then holds the configuration files above; kubelet.kubeconfig is generated automatically after startup.

/opt/kubernetes/bin holds the node binaries (kubelet and kube-proxy).
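As a sketch, assuming the node binaries come from the same kubernetes-server tarball used for the masters:

# cp kubernetes/server/bin/{kubelet,kube-proxy} /opt/kubernetes/bin/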

③ Start the node services

# systemctl start kubelet

# systemctl start kube-proxy

# systemctl enable kubelet

# systemctl enable kube-proxy

④ Approve the certificate request so a certificate is issued to the node

# kubectl get csr

# kubectl certificate approve node-csr-MYUxbmf_nmPQjmH3LkbZRL2uTO-_FCzDQUoUfTy7YjI

# kubectl get node

5. Deploy CNI network

Download address of binary package: https://github.com/containernetworking/plugins/releases

# mkdir -p /opt/cni/bin /etc/cni/net.d

# tar zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin

Make sure that kubelet enables CNI:

# cat /opt/kubernetes/cfg/kubelet.conf

--network-plugin=cni
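The kube-flannel.yaml manifest itself is not included here; at the time it was commonly fetched from the flannel repository (the URL is an assumption, adjust it to the version you want), and its Network field should match --cluster-cidr=10.244.0.0/16:

# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -O kube-flannel.yaml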

Execute on Master:

# kubectl apply -f kube-flannel.yaml

# kubectl get pods -n kube-system

NAME READY STATUS RESTARTS AGE

kube-flannel-ds-amd64-5xmhh 1/1 Running 6 171m

kube-flannel-ds-amd64-ps5fx 1/1 Running 0 150m

6. Authorize apiserver to access kubelet

For security, kubelet forbids anonymous access, so the apiserver must be explicitly authorized to reach it.

# vim apiserver-to-kubelet-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  - pods/log
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes

# kubectl apply -f apiserver-to-kubelet-rbac.yaml

With this in place, you can exec into pods and view their logs through kubectl.
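For example, using a pod name from the flannel listing above:

# kubectl logs kube-flannel-ds-amd64-5xmhh -n kube-system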

7. Deploy coredns

# kubectl apply -f coredns.yaml
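The coredns.yaml manifest is not shown here. Once it is applied, DNS can optionally be verified from a throwaway pod (the busybox image tag is an assumption):

# kubectl run -it --rm dns-test --restart=Never --image=busybox:1.28 -- nslookup kubernetes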

8. K8S highly available configuration

The high availability of kube-controller-manager and kube-scheduler is already covered by the leader election configured when deploying the cluster, so we only need to take care of high availability for the apiserver.

High availability of kube-apiserver

① First set up both master nodes.

② Deploy nginx and keepalived on both master nodes.

Keepalived monitors nginx health status

nginx is configured to listen on port 6443 and forward at layer 4 to the two masters' apiservers.

The nginx configuration is as follows

# rpm -ivh http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm

# vim /etc/nginx/nginx.conf

……

stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 172.30.0.109:6443;
        server 172.30.0.81:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

……

Start nginx

# systemctl start nginx

# systemctl enable nginx

The keepalived VIP is configured as 172.30.2.60.
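The keepalived configuration is not shown in the original; a minimal sketch of /etc/keepalived/keepalived.conf for the MASTER side (the interface name, router id, priority, and the killall-based nginx check are assumptions, adjust to your environment; the backup master uses state BACKUP and a lower priority):

vrrp_script check_nginx {
    # exits non-zero when nginx is not running, causing a failover
    script "/usr/bin/killall -0 nginx"
    interval 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.30.2.60/24
    }
    track_script {
        check_nginx
    }
}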

③ Configure the nodes to reach the master apiservers through the layer-4 forwarder at 172.30.2.60.

Modify the node configuration files, replacing the single master's address with the VIP 172.30.2.60.

Batch modifications:

# sed -i 's/172.30.0.109/172.30.2.60/g' *
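After the address is switched, restart the node services so the new apiserver address takes effect:

# systemctl restart kubelet kube-proxy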

The above is how to install a Kubernetes 1.16.0 high-availability cluster from binaries on CentOS 7.3. I hope you have picked up some useful knowledge or skills along the way.
