2025-01-17 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report --
A node was deleted by mistake from an existing K8s cloud platform; this guide re-adds the deleted node to the cluster. If the replacement is a brand-new server, Docker and the Kubernetes base components must also be installed on it first.
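For the new-server case, a minimal provisioning sketch for a CentOS 7 host is below. The package names and the Docker/Kubernetes yum repos are assumptions (they must be configured beforehand); the versions are pinned to match the v1.15.3 cluster shown in this article.

```shell
# Assumed: CentOS 7 host with the docker-ce and kubernetes yum repos already configured.
# Pin kubelet/kubeadm/kubectl to the cluster's version (v1.15.3 here).
yum install -y docker-ce
yum install -y kubelet-1.15.3 kubeadm-1.15.3 kubectl-1.15.3
systemctl enable --now docker
systemctl enable --now kubelet
```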
1. View the cluster nodes and delete the target node (run on the master node)
[root@k8s01 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    master   40d   v1.15.3
k8s02   Ready    <none>   40d   v1.15.3
k8s03   Ready    <none>   40d   v1.15.3
[root@k8s01 ~]# kubectl delete nodes k8s03
node "k8s03" deleted
[root@k8s01 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    master   40d   v1.15.3
k8s02   Ready    <none>   40d   v1.15.3
[root@k8s01 ~]#
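When a node is removed deliberately rather than by accident, it is safer to drain it first so its pods are evicted gracefully before the node object is deleted. A hedged sketch, using the kubectl v1.15 flag names:

```shell
# Evict workloads from k8s03 before removing it from the cluster.
# --ignore-daemonsets: DaemonSet-managed pods cannot be evicted and are skipped.
# --delete-local-data: also evict pods that use emptyDir volumes (data is lost).
kubectl drain k8s03 --ignore-daemonsets --delete-local-data
kubectl delete node k8s03
```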
2. Clear the cluster information on the deleted node
[root@k8s03 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1017 15:43:41.491522    3010 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@k8s03 ~]#
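As the reset output warns, kubeadm reset leaves iptables rules, IPVS tables, and the old kubeconfig on the node. If a fully clean slate is wanted before rejoining, they can be cleared by hand, e.g.:

```shell
# Run on the reset node. Flush iptables rules left behind by kube-proxy.
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Only needed if kube-proxy ran in IPVS mode.
ipvsadm --clear
# Remove the stale kubeconfig so old credentials are not reused.
rm -f $HOME/.kube/config
```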
3. Generate a new join token on the master node
[root@k8s01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.54.128:6443 --token mg4o13.4ilr1oi605tj850w     --discovery-token-ca-cert-hash sha256:363b5b8525ddb86f4dc157f059e40c864223add26ef53d0cfc9becc3cbae8ad3
[root@k8s01 ~]#
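If the printed join command is lost, the --discovery-token-ca-cert-hash value can also be recomputed directly: it is the SHA-256 digest of the cluster CA's public key in DER (SPKI) form. A small sketch (the helper name is ours; on a real master, point it at /etc/kubernetes/pki/ca.crt and it should reproduce the sha256:... value above):

```shell
# Compute the value for --discovery-token-ca-cert-hash from a CA certificate.
# $1: path to a PEM-encoded CA cert, e.g. /etc/kubernetes/pki/ca.crt.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* /sha256:/'
}
```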
4. Add the node back to the K8s cluster
[root@k8s03 ~]# kubeadm join 192.168.54.128:6443 --token mg4o13.4ilr1oi605tj850w --discovery-token-ca-cert-hash sha256:363b5b8525ddb86f4dc157f059e40c864223add26ef53d0cfc9becc3cbae8ad3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s03 ~]#
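The IsDockerSystemdCheck warning during the join can be silenced by switching Docker to the systemd cgroup driver so it matches the kubelet's recommendation. A sketch of the config change (run on the node; restarting Docker interrupts its running containers):

```shell
# Point Docker at the systemd cgroup driver, then restart the daemon.
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
```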
5. View the status of the entire cluster
[root@k8s01 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    master   40d   v1.15.3
k8s02   Ready    <none>   40d   v1.15.3
k8s03   Ready    <none>   41s   v1.15.3
[root@k8s01 ~]#
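Worker nodes joined with kubeadm carry no node-role label, which is why the ROLES column is empty for them. The column is derived from node-role.kubernetes.io/* labels, so a worker role can be added by hand if wanted for readability:

```shell
# Give the rejoined node a visible "worker" role in kubectl get nodes output.
kubectl label node k8s03 node-role.kubernetes.io/worker=
```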