2025-04-07 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report
This article walks through the steps of deploying a Kubernetes cluster with kubeadm. Most people don't yet know how to do this, so I have summarized the process below. Without further ado, let's get started.
I. Environmental preparation
Operating system | IP address      | Hostname       | Modules
CentOS 7.5       | 192.168.200.111 | docker-server1 | kubeadm, kubelet, kubectl, docker-ce
CentOS 7.5       | 192.168.200.112 | docker-server2 | kubeadm, kubelet, kubectl, docker-ce
CentOS 7.5       | 192.168.200.113 | docker-server3 | kubeadm, kubelet, kubectl, docker-ce
Note: at least 2 CPU cores and 2 GB of memory are recommended for every host.
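The 2-CPU / 2 GB minimums can be checked before starting. Below is a hedged pre-flight helper (not from the original article); the values are passed in as arguments so the logic can be dry-run with sample numbers:

```shell
# Hypothetical pre-flight check for the 2-CPU / 2 GB minimums.
check_host() {
  local cpus=$1 mem_mb=$2
  if [ "$cpus" -ge 2 ] && [ "$mem_mb" -ge 2048 ]; then
    echo "ok"
  else
    echo "insufficient"
  fi
}

# On a real host you would call:
#   check_host "$(nproc)" "$(free -m | awk '/^Mem:/{print $2}')"
check_host 2 2048    # prints "ok"
check_host 1 1024    # prints "insufficient"
```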
1.1. Host initialization configuration
Disable the firewall and SELinux on all hosts, and configure the hostnames.
[root@localhost ~]# iptables -F
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl stop firewalld
Set a different hostname on each host (docker-server2 and docker-server3 on the other two, respectively):
[root@localhost ~]# hostname docker-server1
[root@localhost ~]# bash
[root@docker-server1 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.111 docker-server1
192.168.200.112 docker-server2
192.168.200.113 docker-server3
[root@docker-server1 ~]# scp /etc/hosts 192.168.200.112:/etc/
[root@docker-server1 ~]# scp /etc/hosts 192.168.200.113:/etc/
Disable swap:
[root@docker-server1 ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0    # disable swap auto mount
[root@docker-server1 ~]# swapoff /dev/mapper/centos-swap
[root@docker-server1 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.9G        749M        101M         10M        1.1G        906M
Swap:            0B          0B          0B
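Commenting the fstab entry by hand is error-prone; the edit can also be scripted. A sketch (not from the original article) that comments out the swap entry on a temporary copy of /etc/fstab, so the result can be inspected before touching the real file (the sample content assumes a CentOS 7 layout):

```shell
# Work on a temporary copy of an fstab with a CentOS 7 layout.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF
# Prefix any uncommented line with a swap mount point with '#'
sed -ri 's|^([^#].*\sswap\s.*)$|#\1|' "$tmp"
grep swap "$tmp"    # the swap line is now commented out
```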
1.2. Deploy the Docker environment
Install docker-ce (on all hosts):
[root@docker-server1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@docker-server1 ~]# yum -y install yum-utils device-mapper-persistent-data lvm2
[root@docker-server1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@docker-server1 ~]# ls /etc/yum.repos.d/
backup  CentOS-Base.repo  CentOS-Media.repo  docker-ce.repo
[root@docker-server1 ~]# yum -y install docker-ce
[root@docker-server1 ~]# systemctl start docker && systemctl enable docker
Configure the Alibaba Cloud registry mirror accelerator (on all hosts):
[root@docker-server1 ~]# cat << END > /etc/docker/daemon.json
{
"registry-mirrors": ["https://nyakyfun.mirror.aliyuncs.com"]
}
END
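A syntax error in daemon.json will prevent dockerd from starting after the restart below, so it is worth validating the JSON first. A minimal sketch, run here against a temporary copy (assumes python3 is available):

```shell
# Validate a copy of the daemon.json content before installing it for real.
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "registry-mirrors": ["https://nyakyfun.mirror.aliyuncs.com"]
}
EOF
if python3 -m json.tool "$f" > /dev/null 2>&1; then
  result="valid"
else
  result="invalid"
fi
echo "daemon.json is $result"
```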
[root@docker-server1 ~]# systemctl daemon-reload
[root@docker-server1 ~]# systemctl restart docker
[root@docker-server1 ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:25:41 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:24:18 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
1.3. Versions of related components
Component            | Version     | Description
Kubernetes           | 1.17.3      | Main program
Docker               | 19.03.5     | Container runtime
Flannel              | 0.11.0      | Network plug-in
Etcd                 | 3.3.15      | Database
CoreDNS              | 1.6.2       | DNS component
Kubernetes-dashboard | 2.0.0-beta5 | Web interface
II. Deploying the Kubernetes cluster
2.1. Component introduction
All three nodes need the following three components installed:
- kubeadm: installation tool; makes all components run as containers
- kubectl: client tool for talking to the K8s API
- kubelet: runs on every node and starts the containers
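After the packages are installed, a quick way to confirm that all three binaries are on the PATH is a loop like the following (a hypothetical helper, not from the original article; the output depends on what is actually installed):

```shell
# Report whether a binary is reachable on PATH.
check_bin() {
  if command -v "$1" > /dev/null 2>&1; then
    echo "$1: installed"
  else
    echo "$1: missing"
  fi
}

for bin in kubeadm kubectl kubelet; do
  check_bin "$bin"
done
```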
2.2. Configure the Alibaba Cloud yum repository
Configure the yum repository on all hosts.
It is recommended to install from Alibaba Cloud's yum repository:
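A commonly used Alibaba Cloud Kubernetes repo definition looks like the following; the exact file used in this article is not shown, so treat the baseurl and the gpgcheck setting as assumptions. It is written to a temporary file here for illustration; on a real host the target would be /etc/yum.repos.d/kubernetes.repo:

```shell
# Sketch of an Aliyun Kubernetes repo file, written to a temp path for review.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
echo "wrote $(wc -l < "$repo") lines"
```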
Docker-server2 host:
[root@docker-server2 ~]# kubeadm config print join-defaults > kubeadm-config.yaml
[root@docker-server2 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.200.111:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
  timeout: 5m0s
  tlsBootstrapToken: abcdef.0123456789abcdef
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: 192.168.200.112
  taints: null
[root@docker-server2 ~]# kubeadm join --config kubeadm-config.yaml
W0212 22:13:36.627811 3819 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
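The bootstrap token used in the JoinConfiguration (abcdef.0123456789abcdef here) must match kubeadm's required format: six lowercase alphanumeric characters, a dot, then sixteen more. A malformed token makes the join fail at the discovery step. A quick format check (hypothetical helper; requires bash for the regex match):

```shell
# Validate a kubeadm bootstrap token against the [a-z0-9]{6}.[a-z0-9]{16} pattern.
check_token() {
  if [[ "$1" =~ ^[a-z0-9]{6}\.[a-z0-9]{16}$ ]]; then
    echo "token format ok"
  else
    echo "token format invalid"
  fi
}

check_token "abcdef.0123456789abcdef"    # prints "token format ok"
```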
Docker-server3 host:
[root@docker-server3 ~]# kubeadm config print join-defaults > kubeadm-config.yaml
[root@docker-server3 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.200.111:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
  timeout: 5m0s
  tlsBootstrapToken: abcdef.0123456789abcdef
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: 192.168.200.113
  taints: null
[root@docker-server3 ~]# kubeadm join --config kubeadm-config.yaml
W0212 22:13:38.565506 3838 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
View the node information on the master:
[root@docker-server1 ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE    VERSION
192.168.200.111   Ready    master   17m    v1.17.3
192.168.200.112   Ready    <none>   111s   v1.17.3
192.168.200.113   Ready    <none>   109s   v1.17.3
View the pod information on the master:
[root@docker-server1 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-7f9c544f75-6b8gq                  1/1     Running   0          17m
coredns-7f9c544f75-tjg2l                  1/1     Running   0          17m
etcd-192.168.200.111                      1/1     Running   0          17m
kube-apiserver-192.168.200.111            1/1     Running   0          17m
kube-controller-manager-192.168.200.111   1/1     Running   0          17m
kube-flannel-ds-amd64-bl49r               1/1     Running   3          2m24s
kube-flannel-ds-amd64-dfkgr               1/1     Running   0          9m14s
kube-flannel-ds-amd64-j74w7               1/1     Running   0          2m26s
kube-proxy-442vz                          1/1     Running   0          2m26s
kube-proxy-trrsg                          1/1     Running   0          17m
kube-proxy-xnn74                          1/1     Running   0          2m24s
kube-scheduler-192.168.200.111            1/1     Running   0          17m
2.10. Node management commands
The following commands do not need to be executed; they are shown for reference only.
Reset the master configuration:
[root@docker-server1 ~]# kubeadm reset
Delete a node's configuration:
[root@docker-server2 ~]# docker ps -aq | xargs docker rm -f
[root@docker-server2 ~]# systemctl stop kubelet
[root@docker-server2 ~]# rm -rf /etc/kubernetes/*
[root@docker-server2 ~]# rm -rf /var/lib/kubelet/*
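The cleanup steps above can be collected into a small helper with a dry-run mode (an illustration, not part of the original article); with DRY_RUN=1 it only prints what it would execute, which is how it is invoked here:

```shell
# Dry-run wrapper: echo the command instead of executing it when DRY_RUN=1.
DRY_RUN=1
run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run systemctl stop kubelet
run sh -c 'docker ps -aq | xargs -r docker rm -f'
run sh -c 'rm -rf /etc/kubernetes/* /var/lib/kubelet/*'
```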
III. Install the Dashboard UI
3.1. Deploy Dashboard
Dashboard's GitHub repository address: https://github.com/kubernetes/dashboard
The repository provides ready-made deployment files with installation examples, which we can fetch and apply directly.
[root@docker-server1 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
By default, this deployment file creates a separate namespace named kubernetes-dashboard and deploys kubernetes-dashboard into it. The dashboard image comes from the official Docker Hub, so it can be pulled directly without changing the image address.
3.2. Open port settings
By default, dashboard does not expose an externally accessible port. To simplify things here, expose its port directly via a NodePort by modifying the Service section of the definition:
[root@docker-server1 ~]# vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort      # add
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32443     # add
  selector:
    k8s-app: kubernetes-dashboard
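NodePort values must lie within the API server's service-node-port-range, which defaults to 30000-32767; 32443 above was chosen to fit that window. A small helper to verify (illustration only):

```shell
# Check whether a port falls in Kubernetes' default NodePort range.
in_default_nodeport_range() {
  if [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]; then
    echo "yes"
  else
    echo "no"
  fi
}

in_default_nodeport_range 32443    # prints "yes"
```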
3.3. Permission configuration
The default permissions are too limited, so change the binding to grant super-admin (cluster-admin) permissions:
[root@docker-server1 ~]# vim recommended.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
[root@docker-server1 ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Get the token:
[root@docker-server1 ~]# kubectl describe secret -n kubernetes-dashboard $(kubectl get secret -n kubernetes-dashboard | grep kubernetes-dashboard-token | awk '{print $1}') | grep token | awk '{print $2}'
kubernetes-dashboard-token-fk762
kubernetes.io/service-account-token
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik5aYmhQMDA4aktaeUVyQVpBd3Y5VUNsTXFQV1VBeTRhSml4ZWlmNUV2NzAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1mazc2MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZmYTVmZDM2LWIyOTItNDc3NS1hMWU0LThiOGE5MTY1NmI3ZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.J_XYUsmSB1wWApYQkSebgd3BvEHoZe5pBgayw8N0xG6TYBsPhMEBVyhE6pR-P-R2eZKPAK9xkajMIwxtwxnIi2NTPv--FiecLINj2_XV7pegkEmd7AREXEPQmjGqM3Fulc7VkVFaG1YIdRmgi069GImpqFuTF0t19wOaloetUHY6LMRJsyHyesjvc2V82a_qgrFNcVtw9l0b8HhxebRIH6crhCMXKRpsjeF8zUg-Aq4ZfJxxEcc6wM2bOzAh00vJECHKBc7sTH2va8xic7GL_hMyE5SZzSOVeaulODWCc5hQdSc2BxeY4TVFz6GJXDC6ZgVj8gnNgUXxw3NVSiDmyg
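The dashboard token is a JWT: three base64url segments joined by dots, with the service-account identity carried in the middle (payload) segment. This sketch builds a tiny sample token (not the real one above) and decodes its payload to show where that identity lives:

```shell
# Build a sample JWT-shaped token with a known payload.
payload='{"sub":"system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard"}'
b64=$(printf '%s' "$payload" | base64 | tr -d '=\n' | tr '/+' '_-')
token="header.$b64.signature"

# Extract the middle segment and restore base64 padding before decoding.
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
case $(( ${#seg} % 4 )) in
  2) seg="$seg==" ;;
  3) seg="$seg=" ;;
esac
decoded=$(printf '%s' "$seg" | base64 -d)
echo "$decoded"
```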
Log in to the dashboard using the token.
At this point, the K8s cluster installation is complete.
IV. Application deployment testing
Let's deploy a simple Nginx web service that listens on port 80, where accessing the /info path shows the container's hostname. The service consists of three container instances and is exposed to users through a NodePort.
[root@docker-server1 ~]# kubectl run nginxweb --image=nginx --port=80 --replicas=3
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginxweb created
Looking at the created objects, you can see that three pods are being started:
[root@docker-server1 ~]# kubectl get deployment
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
nginxweb   0/3     3            0           14s
[root@docker-server1 ~]# kubectl get po
NAME                        READY   STATUS              RESTARTS   AGE
nginxweb-6d7457b898-5qcbs   0/1     ContainerCreating   0          31s
nginxweb-6d7457b898-m5tvh   0/1     ContainerCreating   0          31s
nginxweb-6d7457b898-v58bj   0/1     ContainerCreating   0          31s
Create a Service to expose the deployment through a NodePort:
[root@docker-server1 ~]# kubectl expose deployment nginxweb --name=nginxwebsvc --port=80 --target-port=80 --type=NodePort
service/nginxwebsvc exposed
Looking at the Service, you can see that the randomly assigned NodePort is 30715:
[root@docker-server1 ~]# kubectl get svc
NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1     <none>        443/TCP        147m
nginxwebsvc   NodePort    10.96.63.33   <none>        80:30715/TCP   52s
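In the PORT(S) column, "80:30715/TCP" encodes servicePort:nodePort/protocol. The node port can be parsed out of a captured output line like this (sample text, not a live kubectl call):

```shell
# Parse the nodePort from a captured `kubectl get svc` line.
line='nginxwebsvc   NodePort    10.96.63.33   <none>        80:30715/TCP   52s'
nodeport=$(printf '%s\n' "$line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "node port: $nodeport"    # prints "node port: 30715"
```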
Users can now reach nginxwebsvc from their own machines via the master host's IP address, http://192.168.200.111:30715/; the service load-balances the port 80 requests across the actual nginxweb pods.
Those are the steps for deploying a Kubernetes cluster with kubeadm; to really grasp the details you will need to work through them yourself.