
How to Build a Kubernetes Cluster


This article explains how to build a Kubernetes cluster. The method introduced here is simple, fast, and practical; interested readers may wish to follow along.

0. Summary

Use kubeadm to build a single-node Kubernetes instance, for learning purposes only. The operating environment and software are summarized as follows:

Item        Version                 Remarks
OS          Ubuntu 18.04            192.168.132.152 my.servermaster.local / 192.168.132.154 my.worker01.local
Docker      18.06.1~ce~3-0~ubuntu   the highest version supported by the latest Kubernetes (1.12.3)
Kubernetes  1.12.3                  the target software; its version must be pinned

The systems and software above were essentially current as of 2018.11. Note that the Docker version installed must be one supported by Kubernetes.

1. Installation steps

Turn off the system swap partition:

swapoff -a
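swapoff only lasts until the next reboot. A common follow-up, assuming the usual /etc/fstab layout, is to comment out the swap entry so swap stays off permanently (back up the file first):

sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comment out any line mounting swap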

Docker is the runtime used by default, so install Docker:

apt-get install docker-ce=18.06.1~ce~3-0~ubuntu
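If apt cannot find that exact version, the installable pins can be listed first; afterwards the daemon can be verified and enabled. These are standard apt/docker/systemd commands, not specific to this article:

apt-cache madison docker-ce      # list the docker-ce versions the repo offers
docker version                   # confirm client and daemon report 18.06.1
sudo systemctl enable docker     # start docker automatically on boot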

The command to install kubeadm is the same as that on the official website, but the package source is changed to Aliyun.

apt-get update && apt-get install -y apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update && apt-get install -y kubelet kubeadm kubectl

2. Build the cluster

2.1 prepare images

k8s.gcr.io is not directly reachable, so pull each image from the Aliyun mirror, re-tag it, and delete the old tag.

# a. create the script
vim ./load_images.sh

# b. script content
#!/bin/bash

### config the image map
declare -A images=()
images["k8s.gcr.io/kube-apiserver:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3"
images["k8s.gcr.io/kube-controller-manager:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3"
images["k8s.gcr.io/kube-scheduler:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3"
images["k8s.gcr.io/kube-proxy:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3"
images["k8s.gcr.io/pause:3.1"]="registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1"
images["k8s.gcr.io/etcd:3.2.24"]="registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24"
images["k8s.gcr.io/coredns:1.2.2"]="registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2"

### re-tag foreach
for key in ${!images[@]}
do
  docker pull ${images[$key]}
  docker tag ${images[$key]} $key
  docker rmi ${images[$key]}
done

### check
docker images

# c. execute the script to prepare the images
sudo chmod +x load_images.sh
./load_images.sh

2.2 initialize cluster (master)

Initialization requires you to specify at least two parameters:

--kubernetes-version: if omitted, kubeadm determines the version by accessing the public network, which may be unreachable

--pod-network-cidr: required for the flannel network plug-in configuration

# execute the initialization command
sudo kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16

### the final output is as follows
...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9

2.3 configure a non-administrator account to use kubectl, based on the success message

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
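With the kubeconfig in place, a quick sanity check can be run before going further (standard kubectl commands, assuming only the steps above):

kubectl cluster-info
kubectl get pods -n kube-system   # coredns pods stay Pending until the network plug-in in 2.4 is applied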

Use a non-root account to view the node:

kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
servermaster   NotReady   master   28m   v1.12.3

The master node exists, but its status is NotReady. A decision needs to be made here:

If you want a stand-alone (single-node) cluster, execute the following so pods can be scheduled on the master:

kubectl taint nodes --all node-role.kubernetes.io/master-
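The trailing "-" removes the taint that keeps regular pods off the master. A quick check that it is gone, using this article's node name:

kubectl describe node servermaster | grep -i taints   # should no longer list node-role.kubernetes.io/master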

If you want to continue building a multi-node cluster, proceed with the next steps; the NotReady status of the master node can be ignored for now.

2.4 apply network plug-ins

View the contents of the kube-flannel.yml file and copy them into a local file, because the terminal cannot fetch the file remotely.
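As a sketch of how to fetch it on a machine that can reach GitHub (the path below matches the flannel repository layout at the time of writing and may have moved since):

curl -fsSLo kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml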

kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

2.5 create a new worker node

To create a new worker node, repeat [1. Installation steps] on another server. The worker node does not need steps 2.1/2.3 or anything after them; only the basic installation is required. After installation, log in to the new worker node and run the join command obtained in the previous step:

kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

2.6 check cluster (1 master, 1 worker)

kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
servermaster   Ready    master   94m   v1.12.3
worker01       Ready    <none>   54m   v1.12.3
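If more workers are added later, note that the token printed by kubeadm init expires after 24 hours by default; a fresh join command can be generated on the master with standard kubeadm:

kubeadm token create --print-join-command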

2.7 create dashboard

Copy the kubernetes-dashboard.yaml content into a local file, since the remote file cannot be fetched from the command line. Edit the Dashboard Service configuration at the end of the file to add type and nodePort; the result is as follows:

# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

Execute the command to create the dashboard service on the master node:

kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
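Before switching to the browser, the exposed port can be confirmed with a standard kubectl query (service name and port are the ones configured above):

kubectl -n kube-system get svc kubernetes-dashboard   # expect TYPE NodePort and PORT(S) 443:30000/TCP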

In a browser, enter the worker node's IP and port over https: https://my.worker01.local:30000/#!/login. If the login page appears, the dashboard is installed successfully.

2.8 login to Dashboard

Get the secret with kubectl, extract the full token from it, then copy the token into the login page from the previous step, select Token, and log in.

### view the keys: list all secrets under the kube-system namespace
kubectl -n kube-system get secret
NAME                                             TYPE                                  DATA   AGE
clusterrole-aggregation-controller-token-vxzmt   kubernetes.io/service-account-token   3      10h

### view the token: pick the clusterrole-aggregation-controller-token-* secret
kubectl -n kube-system describe secret clusterrole-aggregation-controller-token-vxzmt
Name:         clusterrole-aggregation-controller-token-vxzmt
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: clusterrole-aggregation-controller
              kubernetes.io/service-account.uid: dfb9d9c3-f646-11e8-9861-000c29b7e604

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVycm9sZS1hZ2dyZWdhdGlvbi1jb250cm9sbGVyLXRva2VuLXZ4em10Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXJyb2xlLWFnZ3JlZ2F0aW9uLWNvbnRyb2xsZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkZmI5ZDljMy1mNjQ2LTExZTgtOTg2MS0wMDBjMjliN2U2MDQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3RlcnJvbGUtYWdncmVnYXRpb24tY29udHJvbGxlciJ9.MfjiMrmyKl1GUci1ivD5RNrzo_s6wxXwFzgM_3NIAmTCfRQreYdhat3yyd6agaCLpUResnNC0ZGRi4CBs_Jgjqkovhb80V05_YVIvCrlf7xHxBKEtGfkJ-qLDvtAwR5zrXNNd0Ge8hTRxw67gZ3lGMkPpw5nfWmc0rzk90xTTQD1vAtrHMvxjr3dVXph5rT8GNuCSXA_J6o2AwYUbaKCc2ugdx8t8zX6oFJfVcw0ZNYYYIyxoXzzfhdppORtKR9t9v60KsI_-q0TxY-TU-JBtzUJU-hL6lB5MOgoBWpbQiV-aG8Ov74nDC54-DH7EhYEzzsLci6uUQCPlHNvLo_J2A
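As a shortcut, the token field alone can be pulled out with jsonpath and decoded; this is standard kubectl, using the secret name from the listing above:

kubectl -n kube-system get secret clusterrole-aggregation-controller-token-vxzmt -o jsonpath='{.data.token}' | base64 --decode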

3. Problems encountered

The master was set up and the worker joined, but kubectl get nodes still shows NotReady status.

Reason: hard to explain concisely; it is a k8s issue. Reading the issue threads, it can basically be pinned down as a cni (Container Network Interface) problem, which the flannel overlay network fixes.

Solution: install the flannel plug-in (kubectl apply -f kube-flannel.yml)

Configuration error; the cluster needs to be rebuilt from scratch

Solution: kubeadm reset
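kubeadm reset leaves some state behind; the follow-ups below are the ones kubeadm itself suggests. Note that they flush ALL iptables rules and delete the old kubeconfig:

sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
rm -rf $HOME/.kube   # remove the stale kubeconfig before re-initializing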

Cannot access dashboard

Reason: Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0"

Solution:

In the kubernetes-dashboard.yaml file, change k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0 to registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0

Or download the image in advance and re-tag it; note that the download must happen on the worker node, where the pod is scheduled. More detail can be viewed with: kubectl describe pod kubernetes-dashboard-85477d54d7-wzt7 -n kube-system
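A minimal sketch of pre-pulling on the worker, mirroring the re-tag approach from section 2.1 (registry names as used throughout this article):

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0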

How to increase the token expiration time

Reason: the token expires after 15 minutes by default

Solution:

If the dashboard has already been created: run kubectl -n kube-system edit deployment kubernetes-dashboard and add the line --token-ttl=86400 to the args section of containers, in the same way as the pre-creation edit below.

If the dashboard has not been created yet: modify the Dashboard Deployment section of the kubernetes-dashboard.yaml file and add the --token-ttl=86400 line to the args section of containers. The number is in seconds and can be customized; the result looks as follows:

...
  - name: kubernetes-dashboard
    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
    ports:
      - containerPort: 8443
        protocol: TCP
    args:
      - --auto-generate-certificates
      - --token-ttl=86400
      # Uncomment the following line to manually specify Kubernetes API server Host
      # If not specified, Dashboard will attempt to auto discover the API server
      # and connect to it. Uncomment only if the default does not work.
      # - --apiserver-host=http://my-address:port
    volumeMounts:
...
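Either way, the live args can be checked afterwards with a standard jsonpath query (deployment name as above):

kubectl -n kube-system get deployment kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[0].args}'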

4. Efficiency & skills

4.1 kubeadm automatic completion

Use kubeadm completion --help to view the usage details. The bash autocompletion commands are posted here directly.

kubeadm completion bash > ~/.kube/kubeadm_completion.bash.inc
printf "\n# Kubeadm shell completion\nsource '$HOME/.kube/kubeadm_completion.bash.inc'\n" >> $HOME/.bash_profile
source $HOME/.bash_profile
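These snippets assume the bash-completion package is installed; on Ubuntu it can be added with the standard apt package:

sudo apt-get install -y bash-completion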

4.2 kubectl automatic completion

Use kubectl completion --help to view the usage details. The bash autocompletion commands are posted here directly. Note: do not paste the commands all at once; paste the printf line separately from the rest.

kubectl completion bash > ~/.kube/completion.bash.inc
printf "\n# Kubectl shell completion\nsource '$HOME/.kube/completion.bash.inc'\n" >> $HOME/.bash_profile
source $HOME/.bash_profile

4.3 using a private docker registry

Create a secret, then add an imagePullSecrets entry to the spec that uses the image. Create and inspect the secret as follows:

kubectl create secret docker-registry regcred --docker-server=registry.domain.cn:5001 --docker-username=xxxxx --docker-password=xxxxx --docker-email=jimmy.w@aliyun.com
kubectl get secret regcred --output=yaml
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode

Configure imagePullSecrets as follows:

...
  containers:
    - name: mirage
      image: registry.domain.cn:5001/mirage:latest
      ports:
        - containerPort: 3008
          protocol: TCP
      volumeMounts:
      ...
  imagePullSecrets:
    - name: regcred
...

4.4 use HostAliases to add entries to the Pod's /etc/hosts file

If there are special entries, or ones previously kept in /etc/hosts, you can configure them with hostAliases, which works like the local hosts file: the hostAliases entries are written into the container's /etc/hosts, as shown below:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - "foo.local"
        - "bar.local"
    - ip: "10.1.2.3"
      hostnames:
        - "foo.remote"
        - "bar.remote"
  containers:
    - name: cat-hosts
      image: busybox
      command:
        - cat
      args:
        - "/etc/hosts"

I believe you now have a deeper understanding of how to build a Kubernetes cluster. Why not try it out in practice? Follow along and keep learning!
