1. System environment
Using kubeadm to install high-availability k8s v1.13.x is relatively simple and saves a lot of steps compared to previous versions.
For kubeadm installation of highly available k8s v1.11 and v1.12, see my earlier posts.
Host information
Hostname            IP address            Description              Components
k8s-master01 ~ 03   192.168.20.20 ~ 22    master node * 3          keepalived, nginx, etcd, kubelet, kube-apiserver
k8s-master-lb       192.168.20.10         keepalived virtual IP    none
k8s-node01 ~ 08     192.168.20.30 ~ 37    worker node * 8          kubelet
Host configuration
[root@k8s-master01 ~]# uname -r
4.18.9-1.el7.elrepo.x86_64
[root@k8s-master01 ~]# uname -a
Linux k8s-master01 4.18.9-1.el7.elrepo.x86_64 #1 SMP Thu Sep 20 09:04:54 EDT 2018 x86_64 GNU/Linux
[root@k8s-master01 ~]# hostname
k8s-master01
[root@k8s-master01 ~]# free -g
              total  used  free  shared  buff/cache  available
Mem:             31     0     0       2           2
Swap:             2     0     0
[root@k8s-master01 ~]# cat /proc/cpuinfo | grep process
processor : 0
processor : 1
processor : 2
processor : 3
[root@k8s-master01 ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
Docker and K8s version
[root@k8s-master01 ~]# docker version
Client:
 Version:      17.09.1-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   19e2cf6
 Built:        Thu Dec 7 22:23:40 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.1-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   19e2cf6
 Built:        Thu Dec 7 22:25:03 2017
 OS/Arch:      linux/amd64
 Experimental: false
[root@k8s-master01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
2. Configure SSH mutual trust
All nodes are configured with hosts:
[root@k8s-master01 ~]# cat /etc/hosts
192.168.20.20 k8s-master01
192.168.20.21 k8s-master02
192.168.20.22 k8s-master03
192.168.20.10 k8s-master-lb
192.168.20.30 k8s-node01
192.168.20.31 k8s-node02
Execute on k8s-master01:
[root@k8s-master01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:TE0eRfhGNRXL3btmmMRq+awUTkR4RnWrMf6Q5oJaTn0 root@k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
...
+----[SHA256]-----+
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh-copy-id -i .ssh/id_rsa.pub $i; done
All nodes turn off firewall and selinux
[root@k8s-master01 ~]# systemctl disable --now firewalld NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-master01 ~]# setenforce 0
[root@k8s-master01 ~]# sed -ri '/^[^#]*SELINUX=/s#=.+$#=disabled#' /etc/selinux/config
All nodes turn off dnsmasq (if enabled)
systemctl disable --now dnsmasq
All nodes turn off swap
[root@k8s-master01 ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0
[root@k8s-master01 ~]# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
All nodes upgrade the system
yum install epel-release -y
yum install wget git jq psmisc vim -y
yum update -y --exclude=kernel*
All nodes synchronize time
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com
# add to crontab
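For example, the sync can be repeated every five minutes from cron (a sketch; the article does not show the exact schedule it uses):
crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com >/dev/null 2>&1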
All nodes configure the file descriptor limit
ulimit -SHn 65535
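The ulimit command above only affects the current shell. To make the limit survive new logins, an entry in /etc/security/limits.conf is commonly added (an assumption; this step is not shown in the article):
cat <<EOF >> /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
EOF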
Master01 downloads installation files
[root@k8s-master01 ~]# git clone https://github.com/dotbalo/k8s-ha-install.git -b v1.13.x
All nodes create repo
cd /etc/yum.repos.d
mkdir bak
mv *.repo bak/
cp /root/k8s-ha-install/repo/* .
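After swapping the repo files, refreshing the yum cache is a common follow-up so the new repositories take effect (an optional step, not shown in the original):
yum clean all && yum makecache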
All nodes upgrade the system and restart
yum install wget git jq psmisc -y
yum update -y --exclude=kernel* && reboot
Kernel upgrade
All nodes
[root@k8s-master01 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@k8s-master01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:elrepo-release-7.0-3.el7.elrepo  ################################# [100%]
Master01 downloads kernel files
wget http://mirror.rc.usf.edu/compute_lock/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.18.9-1.el7.elrepo.x86_64.rpm
wget http://mirror.rc.usf.edu/compute_lock/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.18.9-1.el7.elrepo.x86_64.rpm
Copy to another node
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp kernel-ml-4.18.9-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.18.9-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
kernel-ml-4.18.9-1.el7.elrepo.x86_64.rpm        100%   45MB 147.2MB/s   00:00
kernel-ml-devel-4.18.9-1.el7.elrepo.x86_64.rpm  100%   12MB 149.1MB/s   00:00
kernel-ml-4.18.9-1.el7.elrepo.x86_64.rpm        100%   45MB  22.6MB/s   00:02
kernel-ml-devel-4.18.9-1.el7.elrepo.x86_64.rpm  100%   12MB  20.8MB/s   00:00
kernel-ml-4.18.9-1.el7.elrepo.x86_64.rpm        100%   45MB  15.1MB/s   00:03
kernel-ml-devel-4.18.9-1.el7.elrepo.x86_64.rpm  100%   12MB  11.9MB/s   00:01
kernel-ml-4.18.9-1.el7.elrepo.x86_64.rpm        100%   45MB  45.1MB/s   00:01
kernel-ml-devel-4.18.9-1.el7.elrepo.x86_64.rpm  100%   12MB  27.4MB/s   00:00
kernel-ml-4.18.9-1.el7.elrepo.x86_64.rpm        100%   45MB  45.1MB/s   00:01
kernel-ml-devel-4.18.9-1.el7.elrepo.x86_64.rpm  100%   12MB  30.0MB/s   00:00
Install the kernel on all nodes
yum localinstall -y kernel-ml*
All nodes modify the kernel boot order
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
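Before rebooting, the default boot entry can be checked to confirm it points at the new 4.18 kernel (an optional verification, not in the original steps):
grubby --default-kernel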
Restart all nodes
reboot
Confirm that all nodes are new kernels
[root@k8s-master01 ~]# uname -r
4.18.9-1.el7.elrepo.x86_64
Confirm that nf_conntrack_ipv4 can be loaded; ipvs depends on this module, so it must load normally.
[root@k8s-master02 ~]# modprobe nf_conntrack_ipv4
[root@k8s-master02 ~]# lsmod | grep nf
nf_conntrack_ipv4      16384  0
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
nf_conntrack          135168  1 nf_conntrack_ipv4
libcrc32c              16384  2 nf_conntrack,xfs
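If you also want nf_conntrack_ipv4 loaded automatically after every reboot, it can be added to a modules-load file (an optional step; the file name below is illustrative and not taken from the article):
echo nf_conntrack_ipv4 > /etc/modules-load.d/nf_conntrack.conf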
All nodes install ipvsadm
yum install ipvsadm ipset sysstat conntrack libseccomp -y
All nodes are set to enable automatic loading of modules
[root@k8s-master01 ~]# cat /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
[root@k8s-master01 ~]# systemctl enable --now systemd-modules-load.service
[root@k8s-master01 ~]# lsmod | grep ip_vs
ip_vs_ftp              16384  0
nf_nat                 32768  1 ip_vs_ftp
ip_vs_sed              16384  0
ip_vs_nq               16384  0
ip_vs_fo               16384  0
ip_vs_sh               16384  0
ip_vs_dh               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs_wlc              16384  0
ip_vs_lc               16384  0
ip_vs                 151552 24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          135168  3 nf_conntrack_ipv4,nf_nat,ip_vs
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs
All nodes configure the k8s kernel parameters
[root@k8s-master01 ~]# cat <<EOF > /etc/sysconfig/kubelet
...
EOF
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
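The exact contents of the heredoc above are not shown. For reference, a commonly used set of kernel (sysctl) parameters for kubeadm clusters is written to /etc/sysctl.d/k8s.conf and applied with sysctl --system; this is a typical example, not necessarily the author's exact configuration:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system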
Note that if kubelet fails to start at this time, don't worry about it.
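If you do want to see why kubelet is failing at this stage, its logs can be inspected; the usual cause is simply the missing configuration that kubeadm init generates later:
journalctl -xeu kubelet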
Install and start keepalived and docker-compose on all master nodes
yum install -y keepalived
systemctl enable keepalived && systemctl restart keepalived
# install docker-compose
yum install -y docker-compose
4. Master01 node installation
The following actions are performed on the master01 node
Create a profile
Modify the corresponding configuration information in create-config.sh. In particular, change nm-bond to the name of the server's actual network interface (see the check below).
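To find the interface name to use in place of nm-bond, listing the host's interfaces is enough (a quick check, not part of the original steps):
ip addr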
[root@k8s-master01 k8s-ha-install]# ./create-config.sh
create kubeadm-config.yaml files success. config/k8s-master01/kubeadm-config.yaml
create kubeadm-config.yaml files success. config/k8s-master02/kubeadm-config.yaml
create kubeadm-config.yaml files success. config/k8s-master03/kubeadm-config.yaml
create keepalived files success. config/k8s-master01/keepalived/
create keepalived files success. config/k8s-master02/keepalived/
create keepalived files success. config/k8s-master03/keepalived/
create nginx-lb files success. config/k8s-master01/nginx-lb/
create nginx-lb files success. config/k8s-master02/nginx-lb/
create nginx-lb files success. config/k8s-master03/nginx-lb/
create calico.yaml file success. calico/calico.yaml
[root@k8s-master01 k8s-ha-install]# pwd
/root/k8s-ha-install
Distribute files
[root@k8s-master01 k8s-ha-install]# export HOST1=k8s-master01
[root@k8s-master01 k8s-ha-install]# export HOST2=k8s-master02
[root@k8s-master01 k8s-ha-install]# export HOST3=k8s-master03
[root@k8s-master01 k8s-ha-install]# scp -r config/$HOST1/kubeadm-config.yaml $HOST1:/root/
kubeadm-config.yaml                     100%  993     1.9MB/s   00:00
[root@k8s-master01 k8s-ha-install]# scp -r config/$HOST2/kubeadm-config.yaml $HOST2:/root/
kubeadm-config.yaml                     100% 1071    63.8KB/s   00:00
[root@k8s-master01 k8s-ha-install]# scp -r config/$HOST3/kubeadm-config.yaml $HOST3:/root/
kubeadm-config.yaml                     100% 1112    27.6KB/s   00:00
[root@k8s-master01 k8s-ha-install]# scp -r config/$HOST1/keepalived/* $HOST1:/etc/keepalived/
check_apiserver.sh                      100%  471    36.4KB/s   00:00
keepalived.conf                         100%  558    69.9KB/s   00:00
You have new mail in /var/spool/mail/root
[root@k8s-master01 k8s-ha-install]# scp -r config/$HOST2/keepalived/* $HOST2:/etc/keepalived/
check_apiserver.sh                      100%  471    10.8KB/s   00:00
keepalived.conf                         100%  558   275.5KB/s   00:00
[root@k8s-master01 k8s-ha-install]# scp -r config/$HOST3/keepalived/* $HOST3:/etc/keepalived/
check_apiserver.sh                      100%  471    12.7KB/s   00:00
keepalived.conf                         100%  558     1.1MB/s   00:00
[root@k8s-master01 k8s-ha-install]# scp -r config/$HOST1/nginx-lb $HOST1:/root/
docker-compose.yaml                     100%  213   478.6KB/s   00:00
nginx-lb.conf                           100% 1036     2.6MB/s   00:00
[root@k8s-master01 k8s-ha-install]# scp -r config/$HOST2/nginx-lb $HOST2:/root/
docker-compose.yaml                     100%  213    12.5KB/s   00:00
nginx-lb.conf                           100% 1036    35.5KB/s   00:00
[root@k8s-master01 k8s-ha-install]# scp -r config/$HOST3/nginx-lb $HOST3:/root/
docker-compose.yaml                     100%  213    20.5KB/s   00:00
nginx-lb.conf                           100% 1036    94.3KB/s   00:00
All master nodes start nginx
Start nginx-lb, and set proxy_connect_timeout to 60s in the nginx configuration file under nginx-lb:
docker-compose --file=/root/nginx-lb/docker-compose.yaml up -d
docker-compose --file=/root/nginx-lb/docker-compose.yaml ps
Restart keepalived
systemctl restart keepalived
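After keepalived restarts, a quick optional check (not part of the original steps) is to confirm that one of the masters now holds the virtual IP 192.168.20.10:
ip addr | grep 192.168.20.10
ping -c 1 192.168.20.10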
Download the image in advance
kubeadm config images pull --config /root/kubeadm-config.yaml
Cluster initialization
kubeadm init --config /root/kubeadm-config.yaml
....
kubeadm join k8s-master-lb:16443 --token cxwr3f.2knnb1gj83ztdg9l --discovery-token-ca-cert-hash sha256:41718412b5d2ccdc8b7326fd440360bf186a21dac4a0769f460ca4bdaf5d2825
..
[root@k8s-master01 ~]# cat <<EOF >> ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-master01 ~]# source ~/.bashrc
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   2m11s   v1.13.2
View pods status
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS              RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
coredns-89cc84847-2h7r6                 0/1     ContainerCreating   0          3m12s                   k8s-master01
coredns-89cc84847-fhwbr                 0/1     ContainerCreating   0          3m12s                   k8s-master01
etcd-k8s-master01                       1/1     Running             0          2m31s   192.168.20.20   k8s-master01
kube-apiserver-k8s-master01             1/1     Running             0          2m36s   192.168.20.20   k8s-master01
kube-controller-manager-k8s-master01    1/1     Running             0          2m39s   192.168.20.20   k8s-master01
kube-proxy-kb95s                        1/1     Running             0          3m12s   192.168.20.20   k8s-master01
kube-scheduler-k8s-master01             1/1     Running             0          2m46s   192.168.20.20   k8s-master01
In this case, the CoreDNS status is ContainerCreating, and the error is as follows:
Normal   Scheduled        2m51s                  default-scheduler      Successfully assigned kube-system/coredns-89cc84847-2h7r6 to k8s-master01
Warning  NetworkNotReady  2m3s (x25 over 2m51s)  kubelet, k8s-master01  network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
Because there is no network plug-in installed, don't worry about it for the time being.
Install calico
[root@k8s-master01 k8s-ha-install]# kubectl create -f calico/
configmap/calico-config created
service/calico-typha created
deployment.apps/calico-typha created
poddisruptionbudget.policy/calico-typha created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
Check again
[root@k8s-master01 k8s-ha-install]# kubectl get po -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-tp2dz                       2/2     Running   0          42s
coredns-89cc84847-2djpl                 1/1     Running   0          66s
coredns-89cc84847-vt6zq                 1/1     Running   0          66s
etcd-k8s-master01                       1/1     Running   0          27s
kube-apiserver-k8s-master01             1/1     Running   0          16s
kube-controller-manager-k8s-master01    1/1     Running   0          34s
kube-proxy-x497d                        1/1     Running   0          66s
kube-scheduler-k8s-master01             1/1     Running   0          17s
5. Highly available configuration
Copy certificate
USER=root
CONTROL_PLANE_IPS="k8s-master02 k8s-master03"
for host in $CONTROL_PLANE_IPS; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki/ca.crt
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki/ca.key
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki/sa.key
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki/sa.pub
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki/front-proxy-ca.crt
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki/front-proxy-ca.key
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/admin.conf
done
The following actions are performed on master02
Download the image in advance
kubeadm config images pull --config /root/kubeadm-config.yaml
When master02 joins the cluster, the only parameter that differs from a worker node join is --experimental-control-plane.
kubeadm join k8s-master-lb:16443 --token cxwr3f.2knnb1gj83ztdg9l --discovery-token-ca-cert-hash sha256:41718412b5d2ccdc8b7326fd440360bf186a21dac4a0769f460ca4bdaf5d2825 --experimental-control-plane
...
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Master label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
Master01 View status
[root@k8s-master01 k8s-ha-install]# kubectl get no
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   15m     v1.13.2
k8s-master02   Ready    master   9m55s   v1.13.2
Other master nodes are similar
View final master status
[root@k8s-master01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.20.20:6443           Masq    1      4          0
  -> 192.168.20.21:6443           Masq    1      0          0
  -> 192.168.20.22:6443           Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 172.168.0.10:53              Masq    1      0          0
  -> 172.168.0.11:53              Masq    1      0          0
TCP  10.102.221.48:5473 rr
UDP  10.96.0.10:53 rr
  -> 172.168.0.10:53              Masq    1      0          0
  -> 172.168.0.11:53              Masq    1      0          0
[root@k8s-master01 ~]# kubectl get po -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-49dwr                       2/2     Running   0          26m
calico-node-kz2d4                       2/2     Running   0          22m
calico-node-zwnmq                       2/2     Running   0          4m6s
coredns-89cc84847-dgxlw                 1/1     Running   0          27m
coredns-89cc84847-n77x6                 1/1     Running   0          27m
etcd-k8s-master01                       1/1     Running   0          27m
etcd-k8s-master02                       1/1     Running   0          22m
etcd-k8s-master03                       1/1     Running   0          4m5s
kube-apiserver-k8s-master01             1/1     Running   0          27m
kube-apiserver-k8s-master02             1/1     Running   0          22m
kube-apiserver-k8s-master03             1/1     Running   3          4m6s
kube-controller-manager-k8s-master01    1/1     Running   1          27m
kube-controller-manager-k8s-master02    1/1     Running   0          22m
kube-controller-manager-k8s-master03    1/1     Running   0          4m6s
kube-proxy-f9qc5                        1/1     Running   0          27m
kube-proxy-k55bg                        1/1     Running   0          22m
kube-proxy-kbg9c                        1/1     Running   0          4m6s
kube-scheduler-k8s-master01             1/1     Running   1          27m
kube-scheduler-k8s-master02             1/1     Running   0          22m
kube-scheduler-k8s-master03             1/1     Running   0          4m6s
[root@k8s-master01 ~]# kubectl get no
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   28m     v1.13.2
k8s-master02   Ready    master   22m     v1.13.2
k8s-master03   Ready    master   4m16s   v1.13.2
[root@k8s-master01 ~]# kubectl get csr
NAME                                                   AGE     REQUESTOR                  CONDITION
csr-6mqbv                                              28m     system:node:k8s-master01   Approved,Issued
node-csr-GPLcR1G4Nchf-zuB5DaTWncoluMuENUfKvWKs0j2GdQ   23m     system:bootstrap:9zp70m    Approved,Issued
node-csr-cxAxrkllyidkBuZ8fck6fwq-ht1_u6s0snbDErM8bIs   4m51s   system:bootstrap:9zp70m    Approved,Issued
Allow hpa to collect data through the interface on all master nodes
vi /etc/kubernetes/manifests/kube-controller-manager.yaml
--horizontal-pod-autoscaler-use-rest-clients=false
To allow automatic injection of istio, modify /etc/kubernetes/manifests/kube-apiserver.yaml on all master nodes
vi /etc/kubernetes/manifests/kube-apiserver.yaml
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
6. Worker nodes join the cluster
kubeadm join 192.168.20.10:16443 --token ll4usb.qmplnofiv7z1j0an --discovery-token-ca-cert-hash sha256:e88a29f62ab77a59bf88578abadbcd37e89455515f6ecf3ca371656dc65b1d6e
...
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Master node view
[root@k8s-master01 k8s-ha-install]# kubectl get po -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-49dwr                       2/2     Running   0          13h
calico-node-9nmhb                       2/2     Running   0          11m
calico-node-k5nmt                       2/2     Running   0          11m
calico-node-kz2d4                       2/2     Running   0          13h
calico-node-zwnmq                       2/2     Running   0          13h
coredns-89cc84847-dgxlw                 1/1     Running   0          13h
coredns-89cc84847-n77x6                 1/1     Running   0          13h
etcd-k8s-master01                       1/1     Running   0          13h
etcd-k8s-master02                       1/1     Running   0          13h
etcd-k8s-master03                       1/1     Running   0          13h
kube-apiserver-k8s-master01             1/1     Running   0          18m
kube-apiserver-k8s-master02             1/1     Running   0          17m
kube-apiserver-k8s-master03             1/1     Running   0          16m
kube-controller-manager-k8s-master01    1/1     Running   0          19m
kube-controller-manager-k8s-master02    1/1     Running   1          19m
kube-controller-manager-k8s-master03    1/1     Running   0          19m
kube-proxy-cl2zv                        1/1     Running   0          11m
kube-proxy-f9qc5                        1/1     Running   0          13h
kube-proxy-hkcq5                        1/1     Running   0          11m
kube-proxy-k55bg                        1/1     Running   0          13h
kube-proxy-kbg9c                        1/1     Running   0          13h
kube-scheduler-k8s-master01             1/1     Running   1          13h
kube-scheduler-k8s-master02             1/1     Running   0          13h
kube-scheduler-k8s-master03             1/1     Running   0          13h
You have new mail in /var/spool/mail/root
[root@k8s-master01 k8s-ha-install]# kubectl get no
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   13h   v1.13.2
k8s-master02   Ready    master   13h   v1.13.2
k8s-master03   Ready    master   13h   v1.13.2
k8s-node01     Ready    <none>   11m   v1.13.2
k8s-node02     Ready    <none>   11m   v1.13.2
7. Installation of other components
Deploy metrics server 0.3.1 (the 1.8+ manifests)
[root@k8s-master01 k8s-ha-install]# kubectl create -f metrics-server/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@k8s-master01 k8s-ha-install]# kubectl get po -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-49dwr                       2/2     Running   0          14h
calico-node-9nmhb                       2/2     Running   0          69m
calico-node-k5nmt                       2/2     Running   0          69m
calico-node-kz2d4                       2/2     Running   0          14h
calico-node-zwnmq                       2/2     Running   0          14h
coredns-89cc84847-dgxlw                 1/1     Running   0          14h
coredns-89cc84847-n77x6                 1/1     Running   0          14h
etcd-k8s-master01                       1/1     Running   0          14h
etcd-k8s-master02                       1/1     Running   0          14h
etcd-k8s-master03                       1/1     Running   0          14h
kube-apiserver-k8s-master01             1/1     Running   0          6m23s
kube-apiserver-k8s-master02             1/1     Running   1          4m41s
kube-apiserver-k8s-master03             1/1     Running   0          4m34s
kube-controller-manager-k8s-master01    1/1     Running   0          78m
kube-controller-manager-k8s-master02    1/1     Running   1          78m
kube-controller-manager-k8s-master03    1/1     Running   0          77m
kube-proxy-cl2zv                        1/1     Running   0          69m
kube-proxy-f9qc5                        1/1     Running   0          14h
kube-proxy-hkcq5                        1/1     Running   0          69m
kube-proxy-k55bg                        1/1     Running   0          14h
kube-proxy-kbg9c                        1/1     Running   0          14h
kube-scheduler-k8s-master01             1/1     Running   1          14h
kube-scheduler-k8s-master02             1/1     Running   0          14h
kube-scheduler-k8s-master03             1/1     Running   0          14h
metrics-server-7c5546c5c5-ms4nz         1/1     Running   0          25s
Check it in about 5 minutes.
[root@k8s-master01 k8s-ha-install]# kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   155m         3%     1716Mi          44%
k8s-master02   337m         8%     1385Mi          36%
k8s-master03   450m         11%    1180Mi          30%
k8s-node01     153m         3%     582Mi           7%
k8s-node02     142m         3%     601Mi           7%
[root@k8s-master01 k8s-ha-install]# kubectl top pod -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
calico-node-49dwr                       15m          71Mi
calico-node-9nmhb                       47m          60Mi
calico-node-k5nmt                       46m          61Mi
calico-node-kz2d4                       18m          47Mi
calico-node-zwnmq                       16m          46Mi
coredns-89cc84847-dgxlw                 2m           13Mi
coredns-89cc84847-n77x6                 2m           13Mi
etcd-k8s-master01                       27m          126Mi
etcd-k8s-master02                       23m          117Mi
etcd-k8s-master03                       19m          112Mi
kube-apiserver-k8s-master01             29m          410Mi
kube-apiserver-k8s-master02             19m          343Mi
kube-apiserver-k8s-master03             13m          343Mi
kube-controller-manager-k8s-master01    23m          97Mi
kube-controller-manager-k8s-master02    1m           16Mi
kube-controller-manager-k8s-master03    1m           16Mi
kube-proxy-cl2zv                        18m          18Mi
kube-proxy-f9qc5                        8m           20Mi
kube-proxy-hkcq5                        30m          19Mi
kube-proxy-k55bg                        8m           20Mi
kube-proxy-kbg9c                        6m           20Mi
kube-scheduler-k8s-master01             7m           20Mi
kube-scheduler-k8s-master02             9m           19Mi
kube-scheduler-k8s-master03             7m           19Mi
metrics-server-7c5546c5c5-ms4nz         3m           14Mi
Deploy dashboard v1.10.0
[root@k8s-master01 k8s-ha-install]# kubectl create -f dashboard/
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
View pod and svc
[root@k8s-master01 k8s-ha-install]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
calico-typha           ClusterIP   10.102.221.48    <none>        5473/TCP        15h
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   15h
kubernetes-dashboard   NodePort    10.105.18.61     <none>        443:30000/TCP   7s
metrics-server         ClusterIP   10.101.178.115   <none>        443/TCP         23m
[root@k8s-master01 k8s-ha-install]# kubectl get po -n kube-system -l k8s-app=kubernetes-dashboard
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-845b47dbfc-j4r48   1/1     Running   0          7m14s
Visit: https://192.168.20.10:30000/#!/login
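Logging in with a token requires a service account with sufficient permissions. The secret queried below belongs to an admin-user account; if your copy of the repository does not already create it, it can be created manually (a hedged sketch; this step is not shown in the article):
kubectl create serviceaccount admin-user -n kube-system
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user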
View token
[root@k8s-master01 k8s-ha-install]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-455bd
Namespace:    kube-system
Labels:
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: e6effde6-1a0a-11e9-ae1a-000c298bf023

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTQ1NWJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlNmVmZmRlNi0xYTBhLTExZTktYWUxYS0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Lw8hErqRoEC3e4VrEsAkFraytQI13NWj2osm-3lhaFDfgLtj4DIadq3ef8VgxpmyViPRzPh6fhq7EejuGH6V9cPsqEVlNBjWG0Wzfn0QuPP0xkxoW2V7Lne14Pu0-bTDE4P4UcW4MGPJAHSvckO9DTfYSzYghE2YeNKzDfhhA4DuWXaWGdNqzth_QjG_zbHsAB9kT3yVNM6bMVj945wZYSzXdJixSPBB46y92PAnfO0kAWsQc_zUtG8U1bTo7FdJ8BXgvNhytUvP7-nYanSIcpUoVXZRinQDGB-_aVRuoHHpiBOKmZlEqWOOaUrDf0DQJvDzt9TL-YHjimIstzv18A
ca.crt:     1025 bytes
Prometheus deployment: https://www.cnblogs.com/dukuan/p/10177757.html