This article walks through setting up a Kubernetes (K8s) cluster with kubeadm and performing basic cluster operations. The steps are practical and are shared here as a working reference.
K8s setup:
(1) add the Alibaba Docker repository
shell > wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
(2) install docker
shell > yum -y install docker-ce
shell > docker -v
shell > systemctl enable docker
shell > systemctl start docker
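Before initializing anything, it is worth checking which cgroup driver Docker is using, since kubeadm's preflight checks warn about it later in this walkthrough (a quick check, not part of the original steps):
shell > docker info | grep -i 'cgroup driver'
A stock docker-ce install typically reports cgroupfs; the recommended driver is systemd, and switching it is covered in the error-handling section below.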
(3) add the Kubernetes repository and install kubelet, kubeadm and kubectl
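The repository definition itself is not shown above; a commonly used Aliyun Kubernetes repo file (an assumption here, adjust the baseurl for your OS version and architecture) looks like this:
shell > cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF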
shell > yum install -y kubelet kubeadm kubectl
shell > systemctl enable kubelet && systemctl start kubelet
(4) initialize the k8s master
shell > kubeadm init \
  --apiserver-advertise-address=10.10.202.140 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address specifies which interface the Master uses to communicate with the other nodes of the cluster.
If the Master has more than one interface, it is recommended to specify it explicitly; otherwise kubeadm automatically selects the interface that has the default gateway.
--pod-network-cidr specifies the range of the Pod network. Kubernetes supports a variety of network add-ons, and each has its own requirements for pod-network-cidr. It is set to 10.244.0.0/16 here because we will use the flannel network add-on, which expects this CIDR.
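To see which interface kubeadm would pick on its own, check where the default route points (a quick sanity check, not part of the original steps):
shell > ip route | grep default
The "dev" field in the output is the interface whose address kubeadm would advertise.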
[root@node140 /]# kubeadm init \
  --apiserver-advertise-address=10.10.202.140 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16
W1211 22:26:52.608250   70792 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1211 22:26:52.608464   70792 version.go:102] falling back to the local client version: v1.17.0
W1211 22:26:52.608775   70792 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1211 22:26:52.608797   70792 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node140 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.202.140]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node140 localhost] and IPs [10.10.202.140 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node140 localhost] and IPs [10.10.202.140 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1211 22:27:45.746769   70792 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1211 22:27:45.748837   70792 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.003938 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node140 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node140 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: y6wdsf.dkce7wf8lij4rbgf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply-f [podnetwork] .yaml" with one of the options listed at:
Https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.10.202.140:6443 --token y6wdsf.dkce7wf8lij4rbgf \
    --discovery-token-ca-cert-hash sha256:2c307c40531df0dec0908647a9913c09174a0962531694c383fbc14315c1ae07
① kubeadm performs pre-initialization checks.
② generates the token and certificates.
③ generates the KubeConfig file that the kubelet needs to communicate with the Master.
④ installs the Master components, pulling their Docker images from the registry (Google's by default; here the Aliyun mirror), which may take some time depending on network quality.
⑤ installs the add-ons kube-proxy and kube-dns.
⑥ the Kubernetes Master initialized successfully.
⑦ prompts how to configure kubectl, which is put into practice below.
⑧ prompts to install a Pod network, which is put into practice below.
⑨ prompts how to register other nodes with the cluster, which is put into practice below (see the note on tokens after this list).
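One practical note on ⑨ (standard kubeadm behavior, not stated in the original): the bootstrap token above expires after 24 hours by default, so if you join nodes later, generate a fresh join command on the master:
shell > kubeadm token create --print-join-command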
(5) configure kubectl (and, optionally, tab completion; see the sketch below)
shell > mkdir -p $HOME/.kube
shell > sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
shell > sudo chown $(id -u):$(id -g) $HOME/.kube/config
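For the tab-completion part, kubectl can generate a bash completion script (a sketch, assuming the bash-completion package is installed):
shell > echo 'source <(kubectl completion bash)' >> ~/.bashrc
shell > source ~/.bashrc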
(6) install pod network
shell > kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
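To verify the network add-on is up, watch the flannel and CoreDNS pods in kube-system reach Running (a standard check, not shown in the original):
shell > kubectl get pods -n kube-system -o wide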
(7) join nodes to the cluster
shell > kubeadm join 10.10.202.140:6443 --token y6wdsf.dkce7wf8lij4rbgf \
    --discovery-token-ca-cert-hash sha256:2c307c40531df0dec0908647a9913c09174a0962531694c383fbc14315c1ae07
This failed with three errors:
[preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
Error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
The first error: Docker is not using the systemd cgroup driver; fix it by editing docker.service:
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
The second: the kubelet service is not enabled: systemctl enable kubelet.service
The third: the kernel parameter is not set: echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
Solve each problem according to the error report; the fixes are collected in the sketch below.
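A minimal sketch collecting the three fixes (the sysctl.d file is an addition here so the kernel setting survives reboots; the rest follows directly from the errors above):
shell > systemctl daemon-reload && systemctl restart docker    # pick up the docker.service edit
shell > systemctl enable kubelet.service
shell > echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf
shell > sysctl --system    # applies the setting now and on every boot
After that, rerun the kubeadm join command and it should pass preflight.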
(8) view cluster status
shell > kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
(9) add nodes to the cluster
Step 1: environment preparation (one possible set of commands follows this list)
1. disable the firewall and SELinux on each node
2. disable swap
3. make hostnames resolvable
4. enable the bridge-nf-call-iptables kernel setting
5. enable and start kubelet
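One possible realization of the checklist above, assuming CentOS 7 defaults (the /etc/hosts addresses for node141-143 are assumptions following the master's numbering):
shell > systemctl stop firewalld && systemctl disable firewalld
shell > setenforce 0    # also set SELINUX=disabled in /etc/selinux/config to persist
shell > swapoff -a      # also comment out the swap line in /etc/fstab
shell > cat <<EOF >> /etc/hosts
10.10.202.140 node140
10.10.202.141 node141
10.10.202.142 node142
10.10.202.143 node143
EOF
shell > echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
shell > systemctl enable kubelet && systemctl start kubelet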
Step 2: join node141, node142 and node143 (run on each node, pointing at the master's API server)
shell > kubeadm join 10.10.202.140:6443 --token y6wdsf.dkce7wf8lij4rbgf \
    --discovery-token-ca-cert-hash sha256:2c307c40531df0dec0908647a9913c09174a0962531694c383fbc14315c1ae07
(10) view the cluster
shell > kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
node140   Ready    master   90d   v1.17.0
node141   Ready    <none>   90d   v1.17.0
node142   Ready    <none>   90d   v1.17.0
node143   Ready    <none>   90d   v1.17.0
Newly joined nodes show NotReady at first and become Ready after a while.
(11) removing a node:
(1) drain it (enter maintenance mode)
shell > kubectl drain node141 --delete-local-data --force --ignore-daemonsets
(2) delete the node
shell > kubectl delete node node141
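If a removed node is to rejoin the cluster later, standard practice (not covered in the original) is to wipe its kubeadm state first, on that node:
shell > kubeadm reset
It can then join again with the kubeadm join command from step (9).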
The above is a concrete walkthrough of setting up a k8s cluster and using it. The coverage is fairly complete, and many of these tools come up in daily work; hopefully this article is a useful reference.