This article shows how to initialize a k8s cluster in docker. The editor thinks it is very practical, so it is shared here as a reference; follow along and have a look.
There are many ways to deploy k8s; here we use the kubeadm tool.
Official address of kubeadm: https://github.com/kubernetes/kubeadm
I. Environment
Master, etcd: 172.16.1.100
Node1: 172.16.1.101
Node2: 172.16.1.102
K8s version: 1.11
II. Prerequisites
1. Hostname-based communication: add the hosts to /etc/hosts on every machine.
172.16.1.100 master
172.16.1.101 node01
172.16.1.102 node02
2. Time synchronization
3. Stop and disable firewalld and iptables.service. They must be disabled because k8s manages its own iptables rules for network policy and service routing.
systemctl stop iptables.service
systemctl disable iptables.service
systemctl stop firewalld.service
systemctl disable firewalld.service
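Step 4 below requires the bridge netfilter parameters to be 1. If they are not on your machines, they can be set persistently first (a minimal sketch; the file name k8s.conf is an arbitrary choice):
modprobe br_netfilter                      # make sure the bridge netfilter module is loaded
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system                            # reload sysctl settings from all configuration files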
4. The bridge netfilter parameters must all be set to 1.
[root@k8s-master yum.repos.d]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@k8s-master yum.repos.d]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
III. Installation and configuration
1. Download the k8s installation package
Download the kubernetes package: https://github.com/kubernetes/kubernetes/releases
For convenience we do not install k8s from the release package above; it is listed only for reference. We install with kubeadm instead.
2. Prepare the yum repositories (required on both master and nodes)
A) docker source
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
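Note that the command above saves docker-ce.repo into the current directory, so it has to be run from /etc/yum.repos.d; alternatively the target path can be given explicitly (a small sketch):
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo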
B) k8s source
[root@k8s-master yum.repos.d]# cat k8s.repo
[k8s]
name=k8s repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
[root@k8s-master yum.repos.d]# yum repolist
[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
[root@k8s-master yum.repos.d]# rpm --import yum-key.gpg
[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@k8s-master yum.repos.d]# rpm --import rpm-package-key.gpg
3. Install kubelet, kubeadm and docker first (execute on master)
[root@k8s-master yum.repos.d]# yum -y install docker-ce kubelet kubeadm kubectl    (executed on master)
[root@k8s-master yum.repos.d]# rpm -ql kubelet
/etc/kubernetes/manifests              # manifest directory
/etc/sysconfig/kubelet                 # configuration file
/etc/systemd/system/kubelet.service
/usr/bin/kubelet                       # main program
4. Configure a docker proxy (needed to bypass the network block)
Because of well-known network restrictions in China, the registry docker pulls images from by default is unreachable, so a proxy has to be configured.
[root@k8s-master yum.repos.d]# vim /usr/lib/systemd/system/docker.service
Add the following under the [Service] section. It means that HTTPS requests go through the proxy below, which is only needed to reach the foreign docker registry; once the images have been pulled, comment these lines out again so that domestic mirrors are used.
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,172.16.0.0/16"
[root@k8s-master yum.repos.d]# systemctl daemon-reload
[root@k8s-master yum.repos.d]# systemctl start docker
[root@k8s-master yum.repos.d]# docker info    # the following two lines should now appear in the output
HTTPS Proxy: http://www.ik8s.io:10080
No Proxy: 127.0.0.0/8,172.16.0.0/16
[chenzx@sa ~]$ telnet www.ik8s.io 10080    # make sure this port is reachable
5. Run kubeadm init to initialize the cluster (on master)
This process pre-checks the prerequisites, generates certificates and private keys, generates configuration files, writes the static pod manifests, and finishes the deployment by installing the add-ons.
[root@k8s-master yum.repos.d]# systemctl enable kubelet    # only enable it at boot for now; do not start the service manually (it would not come up correctly yet), wait until initialization is complete
[root@k8s-master chenzx]# systemctl enable docker
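Since the control-plane images are pulled through the proxy and this can be slow, they can optionally be fetched before running init, as the init output below also points out (a sketch using the kubeadm 1.11 subcommand):
kubeadm config images pull    # pre-pull the images that kubeadm init would otherwise pull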
[root@k8s-master chenzx]# kubeadm init --help
--apiserver-advertise-address: the address the API server advertises to the outside. The default is 0.0.0.0.
--apiserver-bind-port: the port the API server listens on. The default is 6443.
--cert-dir: the directory where certificates are stored. The default is /etc/kubernetes/pki.
--config: path to a kubeadm configuration file.
--ignore-preflight-errors: pre-check errors that may be ignored, e.g. IsPrivilegedUser, Swap, etc.
--kubernetes-version: the k8s version to initialize.
--pod-network-cidr: the network segment to use for pods; flannel expects 10.244.0.0/16.
--service-cidr: the network segment used by the service component. The default is 10.96.0.0/12.
[root@k8s-master chenzx]# cat /etc/sysconfig/kubelet    # extra initialization arguments; the line below tells the kubelet not to fail when the operating system has swap enabled
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@k8s-master chenzx]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[preflight/images] Pulling images required for setting up a Kubernetes cluster    ## starts pulling the images
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'    ## if the network is slow, you can run kubeadm config images pull beforehand to fetch the images locally
[certificates] Generated apiserver-kubelet-client certificate and key.    ## a whole series of certificates is generated
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"    ## these yaml files control, among other things, how much cpu and memory is allocated to the pod
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
## markmaster marks this node as the master node
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
## bootstraptoken is the bootstrap token that other nodes use to join the cluster
[bootstraptoken] using token: as5gwu.ktojf6cueg0doexi
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
## starting with k8s 1.11 the cluster DNS is officially CoreDNS, which supports many new features such as dynamic configuration of resources
[addons] Applied essential addon: CoreDNS
## kube-proxy runs hosted on k8s and is responsible for generating the iptables and ipvs rules for services; ipvs is supported starting with k8s 1.11
[addons] Applied essential addon: kube-proxy
## seeing "Your Kubernetes master has initialized successfully!" means initialization succeeded
Your Kubernetes master has initialized successfully!
## the following commands still have to be run manually
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
## after the packages are installed on the other machines, the command below is executed there to add the nodes to the cluster; remember to save it, otherwise you will not be able to add nodes later. The point of this design is that not just anyone can join the cluster.
You can now join any number of machines by running the following on each node as root:
  kubeadm join 172.16.1.100:6443 --token as5gwu.ktojf6cueg0doexi --discovery-token-ca-cert-hash sha256:399a7de763b95e52084d7bd4cad71dc8fa1bf6dd453b02743d445eee59252cc5
[root@k8s-master chenzx]# docker images
REPOSITORY                                 TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy-amd64                v1.11.1   d5c25579d0ff   7 weeks ago    97.8MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.1   816332bd9d11   7 weeks ago    187MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.1   52096ee87d0e   7 weeks ago    155MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.1   272b3a60cd68   7 weeks ago    56.8MB
k8s.gcr.io/coredns                         1.1.3     b3b94275d97c   3 months ago   45.6MB
k8s.gcr.io/etcd-amd64                      3.2.18    b8df3b177be2   4 months ago   219MB
k8s.gcr.io/pause                           3.1       da86e6ba6ca1   8 months ago   742kB
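As an alternative to --fail-swap-on=false and --ignore-preflight-errors=Swap used above, swap can simply be turned off on every machine so that no preflight error occurs (a sketch; the sed pattern assumes the swap entry in /etc/fstab is a line containing the word swap):
swapoff -a                               # turn swap off immediately
sed -i '/ swap / s/^/#/' /etc/fstab      # comment out the swap line so it stays off after a reboot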
Note: pause is an infrastructure container that does not need to be started explicitly; it exists so that the other containers in a pod can share its basic network and storage namespaces.
If the installation goes wrong, you can run kubeadm reset to reset the node and then re-run the kubeadm init... command.
Note: be sure to paste the kubeadm join 172.16.1.100:6443 --token ... command from the kubeadm init output above into a notepad and save it, because you will need it later to add nodes to the cluster and it is not printed again, so remember!
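If the join command does get lost, a fresh one can usually be generated on the master afterwards (a hedged sketch; it relies on the kubeadm token subcommand available in this kubeadm generation):
kubeadm token create --print-join-command    # prints a new kubeadm join command with a fresh token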
[root@k8s-master chenzx]# mkdir -p $HOME/.kube
[root@k8s-master chenzx]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
6. Install the k8s packages on the node machines (executed on all nodes)
yum -y install docker-ce kubelet kubeadm    (executed on the nodes; kubectl does not need to be installed on the nodes)
7. View status information (on master)
View component information:
[root@k8s-master chenzx]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
View node information:
[root@k8s-master chenzx]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   NotReady   master    51m       v1.11.2
Note: the status is NotReady because the flannel component has not been installed yet; without it the pod network cannot be set up.
8. Install flannel network components (executed on master)
Download address: https://github.com/coreos/flannel
Install flannel:
[root@k8s-master chenzx]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
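The manifest creates a DaemonSet, so a flannel pod should come up on the master shortly. Its progress can be watched like this (a sketch; app=flannel is the pod label used by the upstream kube-flannel.yml at the time of writing):
kubectl get pods -n kube-system -l app=flannel -w    # -w keeps watching; press Ctrl+C to stop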
View the status of all the pods running in the kube-system namespace on the current master node:
[root@k8s-master chenzx]# kubectl get pods -n kube-system
NAME                                  READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-6j6nt              1/1       Running   0          2h
coredns-78fcdf6894-pnmjj              1/1       Running   0          2h
etcd-k8s-master                       1/1       Running   0          1h
kube-apiserver-k8s-master             1/1       Running   0          1h
kube-controller-manager-k8s-master    1/1       Running   0          1h
kube-flannel-ds-amd64-txxw2           1/1       Running   0          1h
kube-proxy-frkp9                      1/1       Running   0          2h
kube-scheduler-k8s-master             1/1       Running   0          1h
All of the pods above must be in the Running state. If one of them is not, you can find out why with a command similar to the following:
[root@k8s-master chenzx]# kubectl describe pods coredns-78fcdf6894-6j6nt -n kube-system
View the flannel image:
[root@k8s-master chenzx]# docker images quay.io/coreos/flannel
REPOSITORY               TAG             IMAGE ID       CREATED        SIZE
quay.io/coreos/flannel   v0.10.0-amd64   f0fad859c909   7 months ago   44.6MB
View the node information again; this time the status has become Ready:
[root@k8s-master chenzx]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1h        v1.11.2
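Besides kubectl describe, the container logs of a pod that is not Running can also be inspected (a sketch, reusing one of the coredns pod names from the listing above):
kubectl logs coredns-78fcdf6894-6j6nt -n kube-system    # print the logs of the pod's container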
View the current node namespace:
[root@k8s-master chenzx]# kubectl get ns
NAME          STATUS    AGE
default       Active    3h
kube-public   Active    3h
kube-system   Active    3h
9. Execute kubeadm join (on node1 and node2, to join them to the cluster)
This process first checks whether the prerequisites are met, then authenticates to the master using the shared token, and finally installs the local pod resources, including kube-proxy and DNS, which are deployed as add-ons.
1) modify the configuration file and start the service on node1 and node2:
[root@k8s-node1 chenzx]# vim /usr/lib/systemd/system/docker.service
[Service]
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,172.16.0.0/16"
[root@k8s-node1 chenzx]# vim /etc/sysconfig/kubelet
# specify extra initialization arguments
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@k8s-node1 chenzx]# systemctl daemon-reload
[root@k8s-node1 chenzx]# systemctl start docker
[root@k8s-node1 chenzx]# systemctl enable docker
[root@k8s-node1 chenzx]# systemctl enable kubelet
[root@k8s-node1 chenzx]# docker info
HTTPS Proxy: http://www.ik8s.io:10080
No Proxy: 127.0.0.0/8,172.16.0.0/16
[root@k8s-node1 chenzx]# kubeadm join 172.16.1.100:6443 --token as5gwu.ktojf6cueg0doexi --discovery-token-ca-cert-hash sha256:399a7de763b95e52084d7bd4cad71dc8fa1bf6dd453b02743d445eee59252cc5 --ignore-preflight-errors=Swap    (Note: this is the command printed during kubeadm init above)
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@k8s-node1 chenzx]# docker images
REPOSITORY                    TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy-amd64   v1.11.1         d5c25579d0ff   7 weeks ago    97.8MB
quay.io/coreos/flannel        v0.10.0-amd64   f0fad859c909   7 months ago   44.6MB
k8s.gcr.io/pause              3.1             da86e6ba6ca1   8 months ago   742kB
[root@k8s-master chenzx]# kubectl get nodes    (run on the master)
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    4h        v1.11.2
k8s-node1    Ready     <none>    55m       v1.11.2
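The <none> in the ROLES column for the worker is normal; kubectl fills that column from node-role.kubernetes.io/* labels, so a purely cosmetic label can be added if you want a role name shown (a sketch; the role name node is an arbitrary choice):
kubectl label node k8s-node1 node-role.kubernetes.io/node=    # ROLES will then show "node" for k8s-node1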
[root@k8s-master chenzx]# kubectl get pods -n kube-system -o wide    (run on the master)
NAME                                  READY     STATUS    RESTARTS   AGE       IP             NODE
coredns-78fcdf6894-6j6nt              1/1       Running   0          4h                       k8s-master
coredns-78fcdf6894-pnmjj              1/1       Running   0          4h                       k8s-master
etcd-k8s-master                       1/1       Running   0          3h        172.16.1.100   k8s-master
kube-apiserver-k8s-master             1/1       Running   0          3h        172.16.1.100   k8s-master
kube-controller-manager-k8s-master    1/1       Running   0          3h        172.16.1.100   k8s-master
kube-flannel-ds-amd64-87tqv           1/1       Running   0          57m       172.16.1.101   k8s-node1
kube-flannel-ds-amd64-txxw2           1/1       Running   0          3h        172.16.1.100   k8s-master
kube-proxy-2rf4m                      1/1       Running   0          57m       172.16.1.101   k8s-node1
kube-proxy-frkp9                      1/1       Running   0          4h        172.16.1.100   k8s-master
kube-scheduler-k8s-master             1/1       Running   0          3h        172.16.1.100   k8s-master
The same commands are also executed on node2.
At this point, the installation of k8s has been completed.
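As a quick smoke test of the finished cluster you can start a throwaway deployment and expose it (a sketch; the nginx image and the NodePort service type are arbitrary choices, and the nodes need to be able to pull the image):
kubectl create deployment nginx --image=nginx                # create a small test deployment
kubectl expose deployment nginx --port=80 --type=NodePort    # expose it through a NodePort service
kubectl get pods -o wide                                     # the pod should be scheduled onto one of the worker nodes
kubectl get svc nginx                                        # note the NodePort, then curl <any-node-ip>:<NodePort>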
Thank you for reading! This is the end of the article on "how to initialize a k8s cluster in docker". I hope the content above has been of some help and that you have learned something new. If you think the article is good, please share it so more people can see it!