Deployment Scheme of a Kubernetes 1.14 High-Availability Production Cluster


System description

Component versions:

Operating system: CentOS 7.6
Kernel: 4.4
Kubernetes: v1.14.1
Docker: 18.09 (1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are supported)
Etcd: v3.3.12
Flannel: v0.11
cni-plugins: v0.7.5
CoreDNS: 1.4.0

Architecture diagram (image not reproduced here)

Architecture description:

The Kubernetes components are deployed on six hosts: three Master nodes and three Node nodes.

The Master nodes run kube-apiserver, kube-scheduler, kube-controller-manager and kube-proxy, the network component Flannel, and the Etcd data storage cluster. Two of them additionally run HAProxy and keepalived for high availability.

The Node nodes run kubelet and kube-proxy, the container component Docker, and the network component Flannel.

Cluster IP and hostname information:

Role    Hostname     IP
Master  master-1     192.168.20.44
Master  master-2     192.168.20.45
Master  master-3     192.168.20.46
Node    k8s-node-1   192.168.20.47
Node    k8s-node-2   192.168.20.48
Node    k8s-node-3   192.168.20.49

Ceph: an available Ceph cluster is required.

System initialization

1. Host initialization

Install the CentOS 7 system and do the following:

- Disable firewalld and SELinux.
- Update the system packages with yum update.
- Install the elrepo repository, upgrade the kernel to version 4.4 or above, and reboot for it to take effect.
- Set the hostname of each host and add the corresponding entries to the local hosts file.
- Install the NTP service.
- Set the kernel parameters.
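A hedged command sketch of these initialization steps on one host (the elrepo package URL and kernel package name are assumptions based on common practice; the hostnames and IPs come from the table above and must be adjusted per host):

# Disable firewalld and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Update packages, install the elrepo repository and a 4.4+ kernel, then reboot
yum update -y
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
grub2-set-default 0 && reboot

# Set the hostname (example for master-1) and local hosts resolution
hostnamectl set-hostname master-1
cat >> /etc/hosts <<EOF
192.168.20.44 master-1
192.168.20.45 master-2
192.168.20.46 master-3
192.168.20.47 k8s-node-1
192.168.20.48 k8s-node-2
192.168.20.49 k8s-node-3
EOF

# Install and enable the NTP service
yum install -y ntp
systemctl enable ntpd && systemctl start ntpd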

When setting the kernel parameters, be sure to do the following:
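The original listing of these parameters did not survive cleanly; the following sysctl settings are a hedged sketch of what this kind of HA Master setup typically needs (net.ipv4.ip_nonlocal_bind and net.ipv4.ip_forward are confirmed later in the HAProxy section; the bridge-nf-call entries and the br_netfilter module load are assumptions):

modprobe br_netfilter
cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p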

On the high-availability Master nodes, set the kernel parameters, then reload /etc/profile:

source /etc/profile

2. Generate the configuration files needed to create the certificates:

[root@master-1 ~]# cd /opt/kubernetes/ssl/
[root@master-1 ssl]# cfssl print-defaults config > config.json
[root@master-1 ssl]# cfssl print-defaults csr > csr.json
[root@master-1 ssl]# ll
total 8
-rw-r--r-- 1 root root 567 Jul 26 00:05 config.json
-rw-r--r-- 1 root root 287 Jul 26 00:05 csr.json
[root@master-1 ssl]# mv config.json ca-config.json
[root@master-1 ssl]# mv csr.json ca-csr.json

Modify the generated files as follows:

ca-config.json file:

[root@master-1 ssl]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "87600h"
      }
    }
  }
}

ca-csr.json file:

[root@master-1 ssl]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}

Generate the CA certificate (ca.pem) and private key (ca-key.pem):

[root@master-1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2018/07/26 00:27:00 [INFO] generating a new CA key and certificate from CSR
2018/07/26 00:27:00 [INFO] generate received request
2018/07/26 00:27:00 [INFO] received CSR
2018/07/26 00:27:00 [INFO] generating key: rsa-2048
2018/07/26 00:27:01 [INFO] encoded CSR
2018/07/26 00:27:01 [INFO] signed certificate with serial number 479065525331838190845576195908271097044538206777
[root@master-1 ssl]# ll
total 20
-rw-r--r-- 1 root root  386 Jul 26 00:16 ca-config.json
-rw-r--r-- 1 root root 1001 Jul 26 00:27 ca.csr
-rw-r--r-- 1 root root  255 Jul 26 00:20 ca-csr.json
-rw------- 1 root root 1679 Jul 26 00:27 ca-key.pem
-rw-r--r-- 1 root root 1359 Jul 26 00:27 ca.pem

Distribute the certificates to each node:

[root@master-1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.20.45:/opt/kubernetes/ssl
[root@master-1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.20.46:/opt/kubernetes/ssl
[root@master-1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.20.47:/opt/kubernetes/ssl
[root@master-1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.20.48:/opt/kubernetes/ssl
[root@master-1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.20.49:/opt/kubernetes/ssl

HA node deployment

Two of the Master nodes are selected to deploy HAProxy and keepalived, and a script is added to keepalived to monitor the haproxy process.

Keepalived configuration

Download and install keepalived on the HA nodes:

yum install keepalived -y

Configure two virtual IPs: one for the apiserver proxy of the K8s cluster and the other for the nginx ingress entry (they can also be configured separately). At the same time configure a health check for haproxy: if the haproxy process on the node exits, the VIP must automatically switch to the other node. The primary HA node configuration is as follows:

# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    weight -20
}

vrrp_instance K8S {
    state backup
    interface eth0
    virtual_router_id 44
    priority 200
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.20.50
        192.168.20.60
    }
    track_script {
        check_haproxy
    }
}

The backup HA node configuration is as follows:

! Configuration File for keepalived

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    weight -20
}

vrrp_instance K8S {
    state backup
    interface eth0
    virtual_router_id 44
    priority 190
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.20.50
        192.168.20.60
    }
    track_script {
        check_haproxy
    }
}

Configure the corresponding monitoring script on both nodes:

vim /etc/keepalived/check_haproxy.sh
#!/bin/bash
active_status=`netstat -lntp | grep haproxy | wc -l`
if [ $active_status -gt 0 ]; then
    exit 0
else
    exit 1
fi

The script needs execute permission:

chmod +x /etc/keepalived/check_haproxy.sh

Deploy HAProxy

Official configuration manual

First confirm that the kernel parameters have been configured:

echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p

Install haproxy:

yum install haproxy -y

Then configure haproxy. The VIP designed for the k8s cluster is 192.168.20.50, using a layer-4 (TCP) proxy. The configuration file is as follows:

# cat /etc/haproxy/haproxy.cfg | egrep -v "^#"
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    tcp        # change the default to a layer-4 proxy
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend main 192.168.20.50:6443
    acl url_static path_beg -i /static /images /javascript /stylesheets
    acl url_static path_end -i .jpg .gif .png .css .js
    default_backend k8s-node

backend k8s-node
    mode tcp                # changed to tcp
    balance roundrobin
    server k8s-node-1 192.168.20.44:6443 check   # the three master hosts
    server k8s-node-2 192.168.20.45:6443 check
    server k8s-node-3 192.168.20.46:6443 check

After configuration, check whether the VIP switches over automatically.
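One way to verify the failover, as a hedged sketch (the interface name eth0 and the VIP 192.168.20.50 come from the keepalived configuration above):

# On the HA node currently holding the VIP, confirm it is bound
ip addr show eth0 | grep 192.168.20.50

# Stop haproxy so that check_haproxy.sh fails and keepalived lowers the priority
systemctl stop haproxy

# On the other HA node, the VIP should appear within a few seconds
ip addr show eth0 | grep 192.168.20.50

# Restore the service afterwards
systemctl start haproxy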

Deploy the Etcd cluster

1. Install etcd

Complete the installation of etcd by executing the following command:

[root@master-1 ~]# cd /tmp/
[root@master-1 tmp]# tar xf etcd-v3.3.12-linux-amd64.tar.gz
[root@master-1 tmp]# cd etcd-v3.3.12-linux-amd64
[root@master-1 tmp]# cp etcd* /opt/kubernetes/bin/
[root@master-1 tmp]# scp etcd* 192.168.20.45:/opt/kubernetes/bin/
[root@master-1 tmp]# scp etcd* 192.168.20.46:/opt/kubernetes/bin/

2. Generate a dedicated certificate for etcd

1. Create an etcd certificate signing request

[root@master-1 ~]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.20.44",
    "192.168.20.45",
    "192.168.20.46"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}

2. Generate the etcd certificate

[root@master-1 ~]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
    -ca-key=/opt/kubernetes/ssl/ca-key.pem \
    -config=/opt/kubernetes/ssl/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

The following files are generated:

[root@master-1 ~]# ll
total 16
-rw-r--r-- 1 root root 1062 Jul 26 01:18 etcd.csr
-rw-r--r-- 1 root root  287 Jul 26 00:50 etcd-csr.json
-rw------- 1 root root 1679 Jul 26 01:18 etcd-key.pem
-rw-r--r-- 1 root root 1436 Jul 26 01:18 etcd.pem

Move the certificates to the ssl directory:

[root@master-1 ~]# cp etcd*.pem /opt/kubernetes/ssl
[root@master-1 ~]# scp etcd*.pem 192.168.20.45:/opt/kubernetes/ssl
[root@master-1 ~]# scp etcd*.pem 192.168.20.46:/opt/kubernetes/ssl

3. Configure the etcd configuration file

The configuration on master-1 is:

[root@master-1 ~]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.20.44:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.44:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.44:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node-1=https://192.168.20.44:2380,etcd-node-2=https://192.168.20.45:2380,etcd-node-3=https://192.168.20.46:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.44:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

The configuration on master-2 is:

[root@master-2 tmp]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.20.45:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.45:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.45:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node-1=https://192.168.20.44:2380,etcd-node-2=https://192.168.20.45:2380,etcd-node-3=https://192.168.20.46:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.45:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

The configuration on master-3 is:

[root@master-3 ~]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.20.46:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.46:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.46:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node-1=https://192.168.20.44:2380,etcd-node-2=https://192.168.20.45:2380,etcd-node-3=https://192.168.20.46:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.46:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

Create a systemd file for etcd on three nodes:

[root@master-1 ~]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos/etcd
Conflicts=etcd.service
Conflicts=etcd2.service

[Service]
Type=notify
Restart=always
RestartSec=5s
LimitNOFILE=40000
TimeoutStartSec=0
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"

[Install]
WantedBy=multi-user.target

Start the etcd service by executing the following commands on all three nodes:

mkdir /var/lib/etcd
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

Confirm that the etcd service of all nodes is started.
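A hedged sketch of that check, using the etcd node IPs from the table above:

for host in 192.168.20.44 192.168.20.45 192.168.20.46; do
    ssh $host "systemctl is-active etcd"    # each node should print "active"
done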

4. Verify the cluster

[root@master-1 ~]# etcdctl --endpoints=https://192.168.20.44:2379 \
    --ca-file=/opt/kubernetes/ssl/ca.pem \
    --cert-file=/opt/kubernetes/ssl/etcd.pem \
    --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
member 32922a109cfe00b2 is healthy: got healthy result from https://192.168.20.46:2379
member 4fa519fdd3e64a84 is healthy: got healthy result from https://192.168.20.45:2379
member cab6e832332e8b2a is healthy: got healthy result from https://192.168.20.44:2379
cluster is healthy

Master node deployment

1. Deploy the Kubernetes binaries

[root@master-1 ~]# cd /tmp/kubernetes/server/bin/
[root@master-1 bin]# cp kube-apiserver /opt/kubernetes/bin/
[root@master-1 bin]# cp kube-controller-manager /opt/kubernetes/bin/
[root@master-1 bin]# cp kube-scheduler /opt/kubernetes/bin/

2. Generate the authentication files for the API Server

Reference link

1. Create the JSON file used to generate the CSR. It must include the IP of the HA proxy (the VIP) and the first ClusterIP of the cluster:

[root@master-1 ~]# cd /opt/kubernetes/ssl
[root@master-1 ssl]# vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.20.50",
    "10.1.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}

2. Generate the Kubernetes certificate and private key

[root@master-1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
    -ca-key=/opt/kubernetes/ssl/ca-key.pem \
    -config=/opt/kubernetes/ssl/ca-config.json \
    -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

Distribute the certificate and private key to the other nodes:

[root@master-1 ssl]# scp kubernetes*.pem 192.168.20.46:/opt/kubernetes/ssl/

Create the token file for the API Server:

[root@master-1 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
197f33fcbbfab2d15603dcc4408358f5
[root@master-1 ~]# vim /opt/kubernetes/ssl/bootstrap-token.csv
197f33fcbbfab2d15603dcc4408358f5,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Create the basic username/password authentication configuration:

[root@k8s-node-1 ~]# vim /opt/kubernetes/ssl/basic-auth.csv
admin,admin,1
readonly,readonly,2

Copy the files in the ssl directory to the other master nodes:

scp -r -p /opt/kubernetes/ssl/* k8s-node-1:/opt/kubernetes/ssl/
scp -r -p /opt/kubernetes/ssl/* k8s-node-2:/opt/kubernetes/ssl/
scp -r -p /opt/kubernetes/ssl/* k8s-node-3:/opt/kubernetes/ssl/

3. Deploy kube-apiserver

Create the systemd file for kube-apiserver:

[root@k8s-node-1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --enable-admission-plugins=MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=192.168.20.44 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.20.44:2379,https://192.168.20.45:2379,https://192.168.20.46:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the kube-apiserver service:

[root@k8s-node-1 ~]# systemctl daemon-reload
[root@k8s-node-1 ~]# systemctl start kube-apiserver
[root@k8s-node-1 ~]# systemctl enable kube-apiserver

Check whether the service status is normal:

[root@master-1 ~]# systemctl status kube-apiserver
[root@master-1 ~]# netstat -lntp | grep kube-apiserver
tcp   0   0 192.168.20.44:6443   0.0.0.0:*   LISTEN   4289/kube-apiserver
tcp   0   0 127.0.0.1:8080       0.0.0.0:*   LISTEN   4289/kube-apiserver

4. Deploy controller-manager

Generate the controller-manager systemd file:

[root@master-1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  --bind-address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.1.0.0/16 \
  --cluster-cidr=10.2.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Start kube-controller-manager:

[root@master-1 ~]# systemctl daemon-reload
[root@master-1 ~]# systemctl start kube-controller-manager
[root@master-1 ~]# systemctl enable kube-controller-manager

View the service status:

[root@master-1 ~]# systemctl status kube-controller-manager
[root@master-1 ~]# netstat -lntp | grep kube-con
tcp   0   0 127.0.0.1:10252   0.0.0.0:*   LISTEN   4390/kube-controlle

5. Deploy the Kubernetes Scheduler

Create the systemd file:

[root@master-1 ~]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Start the service:

[root@master-1 ~]# systemctl daemon-reload
[root@master-1 ~]# systemctl start kube-scheduler
[root@master-1 ~]# systemctl enable kube-scheduler

View the service status:

[root@master-1 ~]# systemctl status kube-scheduler
[root@master-1 ~]# netstat -lntp | grep kube-scheduler
tcp   0   0 127.0.0.1:10251   0.0.0.0:*   LISTEN   4445/kube-scheduler

6. Master node deployment of kube-proxy (optional)

(See the Node deployment section; you need to create the corresponding kube-proxy working directory.)
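If kube-proxy is also run on the Master nodes, a hedged sketch of the extra preparation, mirroring what the Node section does:

# On each Master node that will run kube-proxy
mkdir -p /var/lib/kube-proxy
# Reuse kube-proxy.kubeconfig and kube-proxy.service from the Node deployment
# section, changing --bind-address/--hostname-override to the Master's own IP
systemctl daemon-reload
systemctl start kube-proxy && systemctl enable kube-proxy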

7. Using the same method, configure master-2 and master-3: copy the ssl, cfg, and bin files from master-1 to the corresponding locations on the other master nodes, configure the startup file for each service, and start it.

8. Deploy the kubectl command-line tool

Install the binary:

[root@master-1 ~]# cd /tmp/kubernetes/node/bin/
[root@master-1 bin]# cp kubectl /opt/kubernetes/bin/

2. Create the admin certificate signing request

[root@master-1 ~]# vim /opt/kubernetes/ssl/admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

3. Generate admin certificate and private key

[root@master-1 ~]# cd /opt/kubernetes/ssl/
[root@master-1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
    -ca-key=/opt/kubernetes/ssl/ca-key.pem \
    -config=/opt/kubernetes/ssl/ca-config.json \
    -profile=kubernetes admin-csr.json | cfssljson -bare admin

4. Set the cluster parameters

[root@master-1 ~]# kubectl config set-cluster kubernetes \
    --certificate-authority=/opt/kubernetes/ssl/ca.pem \
    --embed-certs=true \
    --server=https://192.168.20.50:6443
Cluster "kubernetes" set.

5. Set client authentication parameters:

[root@master-1 ~]# kubectl config set-credentials admin \
    --client-certificate=/opt/kubernetes/ssl/admin.pem \
    --embed-certs=true \
    --client-key=/opt/kubernetes/ssl/admin-key.pem
User "admin" set.

6. Set context parameters

[root@master-1 ~]# kubectl config set-context kubernetes \
    --cluster=kubernetes \
    --user=admin
Context "kubernetes" created.

7. Set the default context:

[root@master-1 ~]# kubectl config use-context kubernetes
Switched to context "kubernetes".

8. Use the Kubectl tool to view the current status:

[root@master-1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

Node deployment

1. Install the required services

Extract the kubernetes-node-linux-amd64.tar.gz package on the Node node and do the following:

[root@k8s-node-1 ~]# cd /tmp/kubernetes/node/bin
[root@k8s-node-1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
[root@k8s-node-1 bin]# scp kubelet kube-proxy 192.168.20.48:/opt/kubernetes/bin/
[root@k8s-node-1 bin]# scp kubelet kube-proxy 192.168.20.49:/opt/kubernetes/bin/

2. Configure roles and authentication parameters

Create the role binding on master-1:

[root@master-1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created

Create the kubelet bootstrapping kubeconfig file and set the cluster parameters:

[root@master-1 ~]# kubectl config set-cluster kubernetes \
    --certificate-authority=/opt/kubernetes/ssl/ca.pem \
    --embed-certs=true \
    --server=https://192.168.20.50:6443 \
    --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.

3. Set client authentication parameters

[root@master-1 ~]# kubectl config set-credentials kubelet-bootstrap \
    --token=197f33fcbbfab2d15603dcc4408358f5 \
    --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.

4. Set context authentication parameters

[root@master-1 ~]# kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=bootstrap.kubeconfig
Context "default" created.

5. Select the default context

[root@master-1 ~]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default"

6. After the steps above, a bootstrap.kubeconfig file is generated in the current directory. Distribute it to each node:

[root@k8s-node-1 ~]# cp bootstrap.kubeconfig /opt/kubernetes/cfg/
[root@k8s-node-1 ~]# scp bootstrap.kubeconfig 192.168.20.47:/opt/kubernetes/cfg/
[root@k8s-node-1 ~]# scp bootstrap.kubeconfig 192.168.20.48:/opt/kubernetes/cfg/
[root@k8s-node-1 ~]# scp bootstrap.kubeconfig 192.168.20.49:/opt/kubernetes/cfg/

Also copy the updated configuration on the master to the other master nodes.

3. Enable CNI support

The following actions need to be performed on all Node nodes.

Set up Kubernetes support for CNI:

[root@k8s-node-2 ~]# mkdir -p /etc/cni/net.d
[root@k8s-node-2 ~]# vim /etc/cni/net.d/10-default.conf
{
  "name": "flannel",
  "type": "flannel",
  "delegate": {
    "bridge": "docker0",
    "isDefaultGateway": true,
    "mtu": 1400
  }
}

4. Configure the kubelet service

The following actions need to be performed on all Node nodes.

Create the kubelet service configuration file:

[root@k8s-node-2 ~]# mkdir /var/lib/kubelet
[root@k8s-node-2 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=192.168.20.48 \
  --hostname-override=192.168.20.48 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.1 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Start kubelet:

[root@k8s-node-2 ~]# systemctl daemon-reload
[root@k8s-node-2 ~]# systemctl start kubelet
[root@k8s-node-2 ~]# systemctl enable kubelet
[root@k8s-node-2 ~]# systemctl status kubelet

Check on the master node whether the csr request from the Node node has been received:

[root@master-1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-FDH7Y3rghf1WPsEJH2EYnofvOSeyHn2f-l_-4rH-LEk   2m    kubelet-bootstrap   Pending

Approve the kubelet TLS request:

[root@master-1 ~]# kubectl get csr | grep 'Pending' | awk 'NR>0{print $1}' | xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io "node-csr-FDH7Y3rghf1WPsEJH2EYnofvOSeyHn2f-l_-4rH-LEk" approved
[root@master-1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-FDH7Y3rghf1WPsEJH2EYnofvOSeyHn2f-l_-4rH-LEk   11m   kubelet-bootstrap   Approved,Issued

Check the Node status:

[root@master-1 ~]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.20.48   Ready    <none>   35s   v1.14.1

View the kubelet service on the Node node:

[root@k8s-node-2 ~]# netstat -lntp | grep kubelet
tcp   0   0 127.0.0.1:10248       0.0.0.0:*   LISTEN   7917/kubelet
tcp   0   0 192.168.20.32:10250   0.0.0.0:*   LISTEN   7917/kubelet
tcp   0   0 192.168.20.32:10255   0.0.0.0:*   LISTEN   7917/kubelet
tcp   0   0 192.168.20.32:4194    0.0.0.0:*   LISTEN   7917/kubelet

5. Deploy kube-proxy

1. Configure kube-proxy to use LVS; execute on all nodes:

yum install -y ipvsadm ipset conntrack
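In addition, --proxy-mode=ipvs relies on the IPVS kernel modules being loadable; the following check is an assumption and not part of the original text:

# Load the IPVS-related kernel modules and confirm they are present
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe $mod
done
lsmod | grep -e ip_vs -e nf_conntrack_ipv4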

2. Create a certificate request

[root@master-1 ~]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

3. Generate a certificate

[root@master-1 ~]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
    -ca-key=/opt/kubernetes/ssl/ca-key.pem \
    -config=/opt/kubernetes/ssl/ca-config.json \
    -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

4. Distribute certificates to all node nodes

[root@master-1 ~]# cp kube-proxy*.pem /opt/kubernetes/ssl/
[root@master-1 ~]# scp kube-proxy*.pem 192.168.20.47:/opt/kubernetes/ssl/
[root@master-1 ~]# scp kube-proxy*.pem 192.168.20.48:/opt/kubernetes/ssl/
[root@master-1 ~]# scp kube-proxy*.pem 192.168.20.49:/opt/kubernetes/ssl/

5. Create a kube-proxy profile

[root@k8s-node-2 ~]# kubectl config set-cluster kubernetes \
    --certificate-authority=/opt/kubernetes/ssl/ca.pem \
    --embed-certs=true \
    --server=https://192.168.20.50:6443 \
    --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.

6. Create the kube-proxy user:

[root@k8s-node-2 ~]# kubectl config set-credentials kube-proxy \
    --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
    --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.

7. Set the default context:

[root@k8s-node-2 ~]# kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig
Context "default" created.

8. Switch the context to default:

[root@k8s-node-2 ~]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".

9. Distribute the kube-proxy.kubeconfig configuration file to all nodes:

[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.44:/opt/kubernetes/cfg/
[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.45:/opt/kubernetes/cfg/
[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.46:/opt/kubernetes/cfg/
[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.47:/opt/kubernetes/cfg/
[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.48:/opt/kubernetes/cfg/
[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.49:/opt/kubernetes/cfg/

10. Create a kube-proxy service profile

Execute on all nodes. Note that the IP in the configuration file must be changed to the corresponding IP of the local machine.

[root@k8s-node-1 ~]# mkdir /var/lib/kube-proxy
[root@k8s-node-1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=192.168.20.47 \
  --hostname-override=192.168.20.47 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
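Because the unit file above hard-codes the node IP, it has to be adapted on each host; a hedged way to do this (the sed pattern is only illustrative):

# On k8s-node-2, for example, replace the template IP with the local one
LOCAL_IP=192.168.20.48
sed -i "s/192.168.20.47/${LOCAL_IP}/g" /usr/lib/systemd/system/kube-proxy.service
grep 192.168.20 /usr/lib/systemd/system/kube-proxy.service   # verify the change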

11. Start the service

systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy

12. View the service status and LVS status

[root@k8s-node-1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr
  -> 192.168.20.44:6443           Masq    1      0          0
  -> 192.168.20.45:6443           Masq    1      0          0
  -> 192.168.20.46:6443           Masq    1      1          0

After all the node nodes are configured successfully, you can see the following results:

[root@master-1 ~]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
192.168.20.47   Ready    <none>   6d21h   v1.14.1
192.168.20.48   Ready    <none>   4d1h    v1.14.1
192.168.20.49   Ready    <none>   4d1h    v1.14.1

Flannel network deployment

All nodes need to deploy flannel.

1. Create a Flannel certificate

1. Create the certificate request file

[root@master-1 ~]# vim flanneld-csr.json
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

2. Generate the certificate

[root@master-1 ~]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
    -ca-key=/opt/kubernetes/ssl/ca-key.pem \
    -config=/opt/kubernetes/ssl/ca-config.json \
    -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

3. Distribute certificates

[root@master-1 ~]# cp flanneld*.pem /opt/kubernetes/ssl/
[root@master-1 ~]# scp flanneld*.pem {all-k8s-node}:/opt/kubernetes/ssl/

2. Deploy flannel

1. Extract the previously downloaded flannel package and distribute it to other nodes as follows:

cp mk-docker-opts.sh flanneld /opt/kubernetes/bin/
scp mk-docker-opts.sh flanneld {all-k8s-node}:/opt/kubernetes/bin/

2. Create the following file and distribute it to each Node node:

[root@k8s-node-1 tmp]# vim remove-docker0.sh
#!/bin/bash
# Delete the default docker bridge, so that docker can start with the flannel network.

# exit on any error
set -e
rc=0
ip link show docker0 > /dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
    ip link set dev docker0 down
    ip link delete docker0
fi

[root@k8s-node-1 tmp]# cp remove-docker0.sh /opt/kubernetes/bin/
[root@k8s-node-1 tmp]# scp remove-docker0.sh 192.168.20.48:/opt/kubernetes/bin/
[root@k8s-node-1 tmp]# scp remove-docker0.sh 192.168.20.49:/opt/kubernetes/bin/

3. Configure flannel

[root@k8s-node-1 ~]# vim /opt/kubernetes/cfg/flannel
FLANNEL_ETCD="-etcd-endpoints=https://192.168.20.44:2379,https://192.168.20.45:2379,https://192.168.20.46:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"

(The etcd endpoints must point at this cluster's etcd nodes, 192.168.20.44-46.)

4. Create the flannel service file

[root@k8s-node-1 ~]# vim /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/flannel
ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker
Type=notify

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

5. Distribute the created configuration files to each node:

scp /opt/kubernetes/cfg/flannel {all-k8s-node}:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/flannel.service {all-k8s-node}:/usr/lib/systemd/system/

3. Flannel CNI integration

1. Download the CNI plug-in

wget https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz
[root@k8s-node-1 tmp]# mkdir /opt/kubernetes/bin/cni
[root@k8s-node-1 tmp]# tar xf cni-plugins-amd64-v0.7.5.tgz -C /opt/kubernetes/bin/cni

2. Distribute the software to each node:

[root@k8s-node-1 ~]# scp -r /opt/kubernetes/bin/cni/* {all-k8s-node}:/opt/kubernetes/bin/cni/

3. Create a key in etcd

[root@master-1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \
    --cert-file /opt/kubernetes/ssl/flanneld.pem \
    --key-file /opt/kubernetes/ssl/flanneld-key.pem \
    --no-sync -C https://192.168.20.44:2379,https://192.168.20.45:2379,https://192.168.20.46:2379 \
    mk /kubernetes/network/config '{"Network": "10.2.0.0/16", "Backend": {"Type": "vxlan", "VNI": 1}}' > /dev/null 2>&1

4. Each node starts flannel

[root@k8s-node-1 ~]# chmod +x /opt/kubernetes/bin/*
[root@k8s-node-1 ~]# systemctl daemon-reload
[root@k8s-node-1 ~]# systemctl start flannel
[root@k8s-node-1 ~]# systemctl enable flannel

4. Configure Docker to use Flannel

1. Modify docker's systemd unit file:

[Unit]
# Modify "After" under [Unit] and add "Requires"
After=network-online.target firewalld.service flannel.service
Wants=network-online.target
Requires=flannel.service

[Service]
# Add EnvironmentFile=-/run/flannel/docker
Type=notify
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS

2. Make the same modification on the other Node nodes:

[root@k8s-node-2 ~]# scp /usr/lib/systemd/system/docker.service {k8s-node}:/usr/lib/systemd/system/

3. Restart docker. If the docker0 interface appears with an address in the 10.2.0.0/16 segment, the configuration succeeded.

[root@k8s-node-3 ~]# systemctl daemon-reload
[root@k8s-node-3 ~]# systemctl restart docker
[root@k8s-node-3 ~]# ip a | grep -A 3 'docker0'
7: docker0: mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:e9:2b:36:86 brd ff:ff:ff:ff:ff:ff
    inet 10.2.79.1/24 scope global docker0
       valid_lft forever preferred_lft forever

Plugin deployment

1. Create CoreDNS

Create coredns.yaml with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.4.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: ["-conf", "/etc/coredns/Corefile"]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.1.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Apply this file:

[root@master-1 tmp]# kubectl create -f coredns.yaml

Confirm that the DNS service is running:

[root@master-1 ~]# kubectl get pod -n kube-system -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP          NODE            NOMINATED NODE   READINESS GATES
coredns-76fcfc9f65-9fkfh   1/1     Running   2          3d7h   10.2.45.3   192.168.20.49   <none>           <none>
coredns-76fcfc9f65-zfplt   1/1     Running   1          3d6h   10.2.24.2   192.168.20.48   <none>           <none>

2. Deploy Dashboard

1. Execute the yaml in the directory and deploy Dashboard:

[root@master-1 ~]# ll /tmp/dashboard/
total 20
-rw-r--r-- 1 root root      Jul 27 03:43 admin-user-sa-rbac.yaml
-rw-r--r-- 1 root root 4253 Jul 27 03:47 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root  458 Jul 27 03:49 ui-admin-rbac.yaml
-rw-r--r-- 1 root root  477 Jul 27 03:50 ui-read-rbac.yaml
[root@master-1 ~]# kubectl create -f /tmp/dashboard/

2. Confirm that the service is functioning properly:

[root@master-1 ~]# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-76fcfc9f65-9fkfh                1/1     Running   2          3d7h
coredns-76fcfc9f65-zfplt                1/1     Running   1          3d6h
kubernetes-dashboard-68ddcc97fc-w4bxf   1/1     Running   1          3d2h

[root@master-1 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.20.50:6443
CoreDNS is running at https://192.168.20.50:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
kubernetes-dashboard is running at https://192.168.20.50:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

3. Open the dashboard URL shown above, log in with the account admin/admin, and generate a token with the following command:

[root@master-1 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

4. Copy the token and choose to log in using a token:

3. Heapster deployment (optional)

1. Deploy Heapster using the following files:

[root@master-1 ~]# ll heapster/
total 12
-rw-r--r-- 1 root root 2306 Jul 26 20:28 grafana.yaml
-rw-r--r-- 1 root root 1562 Jul 26 20:29 heapster.yaml
-rw-r--r-- 1 root root 1161 Jul 26 20:29 influxdb.yaml
[root@k8s-node-1 ~]# kubectl create -f heapster/

Log in to the dashboard to view the resource utilization charts.

Use the kubectl cluster-info command to view the URL addresses of the current services.

Supplementary notes

Etcd certificate-less configuration

In an actual production environment on a private network, the etcd cluster can be configured without certificates, which makes the configuration and subsequent failure recovery easier.

The certificate-less etcd configuration uses HTTP access. Compared with the deployment above, the following configurations need to be modified:

1. In the etcd configuration file, comment out the security certificate section and change all URLs to http:

# cat /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.20.31:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.20.31:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.20.31:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node-1=http://192.168.20.31:2380,etcd-node-2=http://192.168.20.32:2380,etcd-node-3=http://192.168.20.33:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.20.31:2379"
#[security]
#CLIENT_CERT_AUTH="true"
#ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
#ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
#ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
#PEER_CLIENT_CERT_AUTH="true"
#ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
#ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
#ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

2. In the flannel network section, comment out the etcd certificate parameters and change the URLs to http:

# cat /opt/kubernetes/cfg/flannel
FLANNEL_ETCD="-etcd-endpoints=http://192.168.20.31:2379,http://192.168.20.32:2379,http://192.168.20.33:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
#FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
#FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
#FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"

3. Remove the etcd certificate configuration from kube-apiserver. In this file the parameters must be deleted directly, and the etcd URLs changed to http:

# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=192.168.20.31 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-servers=http://192.168.20.31:2379,http://192.168.20.32:2379,http://192.168.20.33:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4. Restart flannel, kubelet, kube-apiserver and other services respectively.
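A hedged sketch of that restart sequence (run the control-plane services on the Master nodes and kubelet/kube-proxy on the Node nodes):

systemctl daemon-reload
# Master nodes
systemctl restart etcd flannel kube-apiserver kube-controller-manager kube-scheduler
# Node nodes
systemctl restart flannel docker kubelet kube-proxy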
