2025-03-30 Update From: SLTechnology News&Howtos
Kubernetes v1.14.3 HA cluster installation
Cluster planning
Hostname    IP                   Role          Components
master1-3   192.168.14.138-140   master+etcd   etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel
worker1     192.168.14.141       node          kubectl, kubeadm, kubelet, kube-proxy, flannel
vip         192.168.14.142       -             virtual IP for apiserver high availability
Component version
Component              Version
centos                 7.3.1611
kernel                 3.10.0-957.el7.x86_64
kubeadm                v1.14.3
kubelet                v1.14.3
kubectl                v1.14.3
kube-proxy             v1.14.3
flannel                v0.11.0
etcd                   3.3.10
docker                 18.09.5
kubernetes-dashboard   v1.10.1
keepalived             1.3.5
haproxy                1.5.18
Highly available architecture description
Kubernetes Architecture concept
Kube-apiserver: the core of the cluster and its API entry point; the hub through which all cluster components communicate, and the point where cluster security (authentication and authorization) is enforced.
Etcd: the data center of the cluster, storing cluster configuration and state; state is replicated between members via the Raft consensus protocol.
Kube-scheduler: the Pod scheduling center of the cluster. In a default kubeadm installation, the --leader-elect parameter is set to true, ensuring that only one kube-scheduler in the master cluster is active at a time.
Kube-controller-manager: the cluster state manager. When the actual cluster state differs from the desired state, kcm tries to reconcile them; for example, when a pod dies, kcm creates a new pod to restore the desired replica count of the corresponding ReplicaSet. In a default kubeadm installation, --leader-elect is likewise set to true, so only one kube-controller-manager in the master cluster is active at a time.
Kubelet: the agent on each kubernetes node, responsible for talking to the docker engine on that node.
Kube-proxy: runs on every node and forwards traffic addressed to a service VIP to the endpoint pods; this is implemented by programming iptables rules.
Load balancing description
Haproxy: mainly used for apiserver load balancing.
Keepalived: mainly used for apiserver high availability.
Together, haproxy+keepalived give the apiserver a single, highly available, load-balanced entry point. Keepalived first creates a virtual address (VIP); when the host currently holding it goes down, the VIP floats to another machine. On whichever machine owns it, the VIP behaves like a local address: it shares the network card, the local services, and the local sockets. Haproxy then listens on VIP:PORT and load-balances incoming requests across the real apiserver endpoints RIP:6443 on the backend.
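As a concrete illustration, the haproxy and keepalived configuration for this topology might look roughly like the following. This is a minimal sketch, not taken from the original article: the file paths, interface name eth0, virtual_router_id, and priority values are all assumptions to be adjusted for the real environment.

```
# /etc/haproxy/haproxy.cfg (fragment) -- assumed sketch
frontend k8s-apiserver
    bind *:8443                 # reached via the VIP as 192.168.14.142:8443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server master1 192.168.14.138:6443 check   # real apiserver endpoints (RIP:6443)
    server master2 192.168.14.139:6443 check
    server master3 192.168.14.140:6443 check

# /etc/keepalived/keepalived.conf (fragment) -- assumed sketch
vrrp_instance VI_1 {
    state MASTER                # BACKUP on the other two masters
    interface eth0              # adjust to the real NIC name
    virtual_router_id 51
    priority 100                # lower on the backup nodes
    virtual_ipaddress {
        192.168.14.142          # the VIP from the cluster plan
    }
}
```

With this in place, clients always talk to 192.168.14.142:8443; keepalived decides which machine owns the VIP, and haproxy on that machine spreads the connections across the three apiservers.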
Preparatory work
1. Install docker
Reference
2. Modify kernel parameters
cat ~/.bashrc
source ~/.bashrc
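The original commands for this step did not survive intact. The kernel parameters Kubernetes and flannel typically require are widely documented; a sketch of the usual settings (an assumption, since the article's exact parameters were lost) is:

```
# /etc/sysctl.d/k8s.conf -- assumed sketch; the article's original parameters were not recoverable
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
```

After writing the file, apply it with `sysctl --system` (the br_netfilter module must be loaded for the bridge settings to exist).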
2. Initialize other master hosts in the control plane
Command for the other master nodes to join the control plane:
kubeadm join 192.168.14.142:8443 --token g9up4e.wr2zvn1y0ld2u9c8 \
    --discovery-token-ca-cert-hash sha256:d065d5c2bfce0f5e0f784ed18cb0989dd19542721969c12888f04496b03f121c \
    --experimental-control-plane --certificate-key <certificate-key>
3. The cluster join credentials (the uploaded certificate key and the token/hash pair) expire after 2 hours. Regenerate them as follows:
Regenerate the certificate decryption key on the control plane:
sudo kubeadm init phase upload-certs --experimental-upload-certs
Regenerate the cluster token:
kubeadm token create
Regenerate the cluster CA cert hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Rejoin the cluster as a control-plane node:
kubeadm join <vip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> --experimental-control-plane --certificate-key <cert-key>
Rejoin the cluster as a worker node:
kubeadm join <vip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
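To illustrate the hash pipeline above, here is a self-contained sketch that generates a throwaway self-signed certificate as a stand-in for /etc/kubernetes/pki/ca.crt (which only exists on a real master) and derives a discovery-token-ca-cert-hash from it:

```shell
# Generate a throwaway self-signed cert as a stand-in for the cluster CA.
# (demo-ca paths and CN are illustrative only.)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Same pipeline as in the text: extract the public key, DER-encode it,
# hash it with SHA-256, and keep only the hex digest.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${hash}"
```

On a real master, point `-in` at /etc/kubernetes/pki/ca.crt and the printed value is exactly what `--discovery-token-ca-cert-hash` expects.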
4. Check whether the three masters were initialized successfully.
kubectl get nodes
6. Add worker nodes to the cluster
kubeadm join 192.168.14.131:6443 --token xl9aa5.2lxz30aupuf9lbhh \
    --discovery-token-ca-cert-hash sha256:0fa135166d86ad9e654f7b92074b34a42a5a25152c05e9253df62af8541c7bad
7. Install the Flannel network plug-in (run this on a master node only)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
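Note that flannel's manifest assumes the pod network CIDR 10.244.0.0/16, which must already have been set when the first master was initialized (that step is not shown in this article). A minimal kubeadm configuration consistent with this cluster plan might look like the following sketch; the file name and field values other than the VIP are assumptions:

```
# kubeadm-config.yaml -- assumed sketch for the initial 'kubeadm init --config kubeadm-config.yaml'
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.3
controlPlaneEndpoint: "192.168.14.142:8443"   # the keepalived VIP fronted by haproxy
networking:
  podSubnet: "10.244.0.0/16"                  # flannel's default pod CIDR
```

Setting controlPlaneEndpoint to the VIP (rather than master1's own address) is what lets the other masters and workers join through the load balancer.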
8. Check the cluster status
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   4h51m   v1.14.3
master2   Ready    master   4h46m   v1.14.3
master3   Ready    master   4h60m   v1.14.3
worker1   Ready    <none>   4h49m   v1.14.3
Reference link:
Kubeadm high-availability installation of kubernetes v1.14.3
Creating a single-master cluster with kubeadm
Creating highly available clusters with kubeadm
Configuring each kubelet in the cluster with kubeadm
Further content to be added later.