How to build k8s cluster in kubeadm

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article describes how to build a k8s cluster with kubeadm. The editor finds it quite practical and shares it here; I hope you get something out of it after reading. Let's take a look.

I. Environment preparation

Two CentOS 7 hosts, 10.3.4.166 (master) and 10.3.4.167 (node01), both with Docker installed. Perform the following steps on both machines.

Modify the contents of the /etc/hosts file

[zjin@master ~]$ cat /etc/hosts
10.3.4.166 master
10.3.4.167 node01
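A quick way to confirm both cluster hostnames resolve as intended is to parse the hosts file directly. A small sketch (run here against a scratch copy of the two entries above, so it works anywhere without touching /etc/hosts):

```shell
# Verify that both cluster hostnames map to the expected addresses.
# A scratch copy of the two /etc/hosts entries is used instead of the real file.
hosts="$(mktemp)"
printf '10.3.4.166 master\n10.3.4.167 node01\n' > "$hosts"

for h in master node01; do
    # Print "name -> ip" for each hostname found in the file
    awk -v h="$h" '$2 == h {print h " -> " $1}' "$hosts"
done
```

Running it prints `master -> 10.3.4.166` and `node01 -> 10.3.4.167`; a missing line means the corresponding entry is absent or malformed.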

Disable the firewall

[zjin@master ~]$ sudo systemctl stop firewalld
[zjin@master ~]$ sudo systemctl disable firewalld

Disable SELinux

[zjin@master ~]$ cat /etc/selinux/config
SELINUX=disabled

Create the /etc/sysctl.d/k8s.conf file

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
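After writing the file, the three required keys can be sanity-checked straight from it. A small sketch (using a scratch copy of k8s.conf so it runs without root):

```shell
# Write the bridge/forwarding settings to a scratch copy of k8s.conf
# (on a real host this content lives in /etc/sysctl.d/k8s.conf).
conf="$(mktemp)"
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Confirm every key kubeadm's preflight checks rely on is present and set to 1
for key in net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward; do
    grep -q "^$key = 1\$" "$conf" && echo "$key ok"
done
```

All three keys printing `ok` means the file matches what `sysctl -p` is about to apply.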

Then execute the following command:

[zjin@master ~]$ sudo modprobe br_netfilter
[zjin@master ~]$ sudo sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

II. Pull the images

On master:

docker pull akipa11/kube-apiserver-amd64:v1.10.0
docker pull akipa11/kube-scheduler-amd64:v1.10.0
docker pull akipa11/kube-controller-manager-amd64:v1.10.0
docker pull akipa11/kube-proxy-amd64:v1.10.0
docker pull akipa11/k8s-dns-kube-dns-amd64:1.14.8
docker pull akipa11/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull akipa11/k8s-dns-sidecar-amd64:1.14.8
docker pull akipa11/etcd-amd64:3.1.12
docker pull akipa11/flannel:v0.10.0-amd64
docker pull akipa11/pause-amd64:3.1
docker tag akipa11/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag akipa11/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag akipa11/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag akipa11/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag akipa11/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag akipa11/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag akipa11/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag akipa11/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag akipa11/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag akipa11/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
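The long pull-and-retag sequence is mechanical, so it can also be driven by a loop over mirror/upstream name pairs. A sketch with a truncated image list (the DRY_RUN guard makes it print the docker commands instead of executing them, so it runs even without Docker installed):

```shell
# Pull images from the akipa11 mirror and retag them with their upstream names.
# With DRY_RUN=1 the docker commands are only echoed, not executed.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

# "mirror-suffix upstream-name" pairs, truncated from the full list above
pairs='
kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
'
out="$(echo "$pairs" | while read -r src dst; do
    [ -n "$src" ] || continue          # skip blank lines
    run docker pull "akipa11/$src"
    run docker tag "akipa11/$src" "$dst"
done)"
printf '%s\n' "$out"
```

Setting `DRY_RUN=0` (with Docker present) would execute the same pull/tag pairs listed above instead of printing them.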

On node01:

docker pull akipa11/kube-proxy-amd64:v1.10.0
docker pull akipa11/flannel:v0.10.0-amd64
docker pull akipa11/pause-amd64:3.1
docker pull akipa11/kubernetes-dashboard-amd64:v1.8.3
docker pull akipa11/heapster-influxdb-amd64:v1.3.3
docker pull akipa11/heapster-grafana-amd64:v4.4.3
docker pull akipa11/heapster-amd64:v1.4.2
docker pull akipa11/k8s-dns-kube-dns-amd64:1.14.8
docker pull akipa11/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull akipa11/k8s-dns-sidecar-amd64:1.14.8
docker tag akipa11/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag akipa11/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag akipa11/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag akipa11/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag akipa11/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag akipa11/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag akipa11/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag akipa11/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag akipa11/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag akipa11/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2

III. Install kubeadm, kubelet, kubectl

1. Configure the yum source:
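A commonly used kubernetes.repo for CentOS 7 looks like the sketch below. The Aliyun mirror baseurl and the gpgcheck=0 setting are assumptions, not taken from this article, and the file is written to a scratch path here rather than the real /etc/yum.repos.d/kubernetes.repo (which needs root):

```shell
# Sketch of a kubernetes.repo for CentOS 7. The Aliyun mirror baseurl is an
# assumption; substitute your preferred mirror. On a real host this goes to
# /etc/yum.repos.d/kubernetes.repo, so a scratch file is used here.
repo="$(mktemp)"
cat > "$repo" <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
grep '^baseurl=' "$repo"
```

With the repo in place, the packages would be installed with something like `sudo yum install -y kubelet-1.10.0 kubeadm-1.10.0 kubectl-1.10.0` (version pins assumed here to match the v1.10.0 images pulled above).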

IV. Initialize the cluster

On master, run kubeadm init:

$ sudo kubeadm init \
> --kubernetes-version=v1.10.0 \
> --pod-network-cidr=10.244.0.0/16 \
> --apiserver-advertise-address=10.3.4.166 \
> --ignore-preflight-errors=Swap

The cluster is initialized with kubeadm init. --kubernetes-version is the cluster version to install; because we choose flannel as the network plug-in for Pods, we must specify --pod-network-cidr=10.244.0.0/16; --apiserver-advertise-address is the address the apiserver advertises, here the IP of our master node; and --ignore-preflight-errors=Swap ignores the preflight error about swap.

Finally, we can see the message that the cluster installation was successful:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.3.4.166:6443 --token b9ftqo.6a3igsfxq96b1dt6 --discovery-token-ca-cert-hash sha256:d4517be6c40e40e1bbc749b24b35c0a7f68c0f75c1380c32b24d1ccb42e0decc
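The sha256:… value in the join line is not a secret: it is the SHA-256 digest of the cluster CA's DER-encoded public key, so it can always be recomputed on the master if the message above is lost. A sketch of the pipeline (run here against a throwaway self-signed certificate, since /etc/kubernetes/pki/ca.crt only exists on a real master):

```shell
# Recompute a --discovery-token-ca-cert-hash style digest.
# On a real master the input file is /etc/kubernetes/pki/ca.crt; a throwaway
# self-signed cert is generated here so the pipeline runs anywhere.
dir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null

# Extract the public key, convert to DER, and hash it
hash="$(openssl x509 -pubkey -in "$dir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print $NF}')"
echo "sha256:$hash"
```

Pointing the same pipeline at the real ca.crt reproduces the exact hash shown in the init output, which is useful when joining nodes long after initialization.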

Enter the following command to configure access to the cluster using kubectl:

[zjin@master ~]$ sudo mkdir -p $HOME/.kube
[zjin@master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[zjin@master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Once kubectl is configured, we can use kubectl to view information about the cluster:

[zjin@master ~]$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[zjin@master ~]$ kubectl get csr
NAME        AGE       REQUESTOR            CONDITION
csr-nff2l   6m        system:node:master   Approved,Issued

If you encounter an error during the cluster installation, you can use the following command to reset:

$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/

V. Install pod network

What we install here is the flannel network plug-in.

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note that the image version number in this file should be changed to v0.10.0.
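That edit can be scripted with sed instead of done by hand. A sketch of the substitution, run here against a small sample fragment of the manifest rather than the full download:

```shell
# Pin the flannel image in kube-flannel.yml to v0.10.0-amd64.
# A sample fragment of the manifest is used so this runs without the download.
yml="$(mktemp)"
cat > "$yml" <<'EOF'
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
EOF

# Rewrite whatever flannel version the manifest ships with to v0.10.0-amd64
sed -i 's#quay.io/coreos/flannel:v[0-9.]*-amd64#quay.io/coreos/flannel:v0.10.0-amd64#' "$yml"
grep 'image:' "$yml"
```

Running the same sed line against the downloaded kube-flannel.yml pins every flannel image reference (there is one per architecture DaemonSet) in one pass.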

[zjin@master ~]$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy "psp.flannel.unprivileged" created
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.apps "kube-flannel-ds-amd64" created
daemonset.apps "kube-flannel-ds-arm64" created
daemonset.apps "kube-flannel-ds-arm" created
daemonset.apps "kube-flannel-ds-ppc64le" created
daemonset.apps "kube-flannel-ds-s390x" created

After the installation is complete, we can use the kubectl get pods command to view the running status of the components in the cluster:

[zjin@master ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   0          40s
kube-system   kube-apiserver-master            1/1       Running   0          40s
kube-system   kube-controller-manager-master   1/1       Running   0          40s
kube-system   kube-dns-86f4d74b45-4vbx5        3/3       Running   0          12m
kube-system   kube-flannel-ds-amd64-wskq5      1/1       Running   0          52s
kube-system   kube-proxy-7dk2l                 1/1       Running   0          12m
kube-system   kube-scheduler-master            1/1       Running   0          40s

As you can see, all of them are in the Running state.
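Rather than eyeballing the STATUS column, the check can be scripted. A sketch that parses a captured sample of the output (the pod names in the sample are hypothetical; no live cluster is needed to run it):

```shell
# Flag any pod whose STATUS column is not "Running" in
# `kubectl get pods --all-namespaces` output. A captured sample with
# hypothetical pod names stands in for a live cluster here.
sample='NAMESPACE     NAME          READY     STATUS    RESTARTS   AGE
kube-system   etcd-master   1/1       Running   0          40s
kube-system   kube-dns-x    3/3       Pending   0          12m'

# Column 4 is STATUS; skip the header row
not_running="$(echo "$sample" | awk 'NR>1 && $4 != "Running" {print $2}')"
[ -z "$not_running" ] && echo "all Running" || echo "not running: $not_running"
```

On a real cluster the same awk filter would be fed from `kubectl get pods --all-namespaces` directly, giving a quick pass/fail instead of a wall of rows.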

VI. Add nodes

Install docker, kubeadm, kubelet, kubectl with the same version number on node01, and execute the following command:

[zjin@node01 ~]$ sudo kubeadm join 10.3.4.166:6443 --token ebimj5.91xj7atpxbke4xyz --discovery-token-ca-cert-hash sha256:1eda2afcd5711343714ec2d2b6c6ea73ec06737ee350b229d5b2eebfd82fb58a --ignore-preflight-errors=Swap

If an error is reported:

[preflight] Some fatal errors occurred:
        [ERROR CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: fork/exec /bin/crictl -r /var/run/dockershim.sock info: no such file or directory

This is an error caused by the cri-tools version, which can be solved by uninstalling cri-tools.

yum remove cri-tools

Then execute the command to join the node:

[zjin@node01 ~]$ sudo kubeadm join 10.3.4.166:6443 --token ebimj5.91xj7atpxbke4xyz --discovery-token-ca-cert-hash sha256:1eda2afcd5711343714ec2d2b6c6ea73ec06737ee350b229d5b2eebfd82fb58a --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.3.4.166:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.3.4.166:6443"
[discovery] Requesting info from "https://10.3.4.166:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.3.4.166:6443"
[discovery] Successfully established connection with API Server "10.3.4.166:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Then copy the master node's ~/.kube/config file to the same location on the new node in order to use the kubectl command-line tool there.

[zjin@master ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    47m       v1.10.0
node01    Ready     <none>    3m        v1.10.0

The above is how to build a k8s cluster with kubeadm. The editor believes some of these points may come up in your daily work; I hope you can learn more from this article. For more details, please follow the industry information channel.
