This article mainly introduces how to deploy a high-availability (HA) architecture for a K8s cluster. Many people run into questions when deploying an HA K8s cluster in day-to-day operations, so the editor has consulted a variety of materials and put together a simple, practical set of steps. I hope it helps answer your doubts about K8s cluster HA deployment. Now, follow the editor and let's get started!
Environment
System      Role             IP
centos7.4   master-1         10.10.25.149
centos7.4   master-2         10.10.25.112
centos7.4   node-1           10.10.25.150
centos7.4   node-2           10.10.25.151
centos7.4   lb-1 (backup)    10.10.25.111
centos7.4   lb-2 (master)    10.10.25.110
            VIP              10.10.25.113
Deploy the master02 node
Copy the /opt/kubernetes/ directory from master01:

scp -r /opt/kubernetes/ root@10.10.25.112:/opt

Copy the related service units from master01:

scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service root@10.10.25.112:/usr/lib/systemd/system

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=10.10.25.112 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://10.10.25.149:2379,https://10.10.25.150:2379,https://10.10.25.151:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the apiserver:

systemctl start kube-apiserver
ps aux | grep kube
systemctl start kube-scheduler kube-controller-manager

Add the binaries to the system PATH:

vim /root/.bash_profile
# add: PATH=$PATH:$HOME/bin:/opt/kubernetes/bin
source .bash_profile
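One caveat not spelled out above: because new unit files were copied onto master02, systemd usually needs to reload them before the services are started or enabled. A small hedged addition (not part of the original steps):

systemctl daemon-reload                                                   # pick up the copied unit files
systemctl enable kube-apiserver kube-scheduler kube-controller-manager   # optional: start at boot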
View component status
# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

This shows that master02 can already connect to the etcd cluster.
View node status
# /opt/kubernetes/bin/kubectl get node
NAME           STATUS     ROLES   AGE   VERSION
10.10.25.150   NotReady           14d   v1.10.3
10.10.25.151   NotReady           14d   v1.10.3

This indicates that master02 still cannot communicate with the node nodes.
Configure single-node LB load balancer
Note: time must be synchronized across all nodes of a high-availability cluster.
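A minimal sketch of time synchronization on CentOS 7 using chrony, assuming internet access to the default NTP pool (these exact commands are not part of the original article):

yum install -y chrony            # install the time-sync daemon
systemctl enable --now chronyd   # start it now and enable it at boot
chronyc sources                  # confirm the node is tracking an NTP source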
Lb02 node configuration
Configure the nginx yum repository; nginx will act as a layer-4 (TCP) proxy in front of the apiservers.
vim /etc/yum.repos.d/nginx.repo

[nginx]
name=nginx repo
baseurl=https://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

yum install -y nginx
Modify Nginx configuration file
vim /etc/nginx/nginx.conf

stream {
    log_format main "$remote_addr $upstream_addr $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 10.10.25.149:6443;
        server 10.10.25.112:6443;
    }
    server {
        listen 10.10.25.110:6443;
        proxy_pass k8s-apiserver;
    }
}
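Before moving on, it may help to validate the configuration and bring nginx up; a small hedged check that is not spelled out in the original article:

nginx -t                       # validate the configuration syntax
systemctl enable --now nginx   # start nginx and enable it at boot
ss -lntp | grep 6443           # confirm the layer-4 proxy is listening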
Modify the configuration on the node nodes
cd /opt/kubernetes/cfg/

vim bootstrap.kubeconfig
# change server: https://10.10.25.149:6443 to server: https://10.10.25.110:6443

vim kubelet.kubeconfig
# change server: https://10.10.25.149:6443 to server: https://10.10.25.110:6443

vim kube-proxy.kubeconfig
# change server: https://10.10.25.149:6443 to server: https://10.10.25.110:6443

systemctl restart kubelet
systemctl restart kube-proxy
After restarting at this point, you will find that neither master01 nor master02 can communicate with the node nodes. Checking the node logs shows a certificate error: roughly, the certificate presented through the LB address was issued for the master01 node only and is not valid for the LB node. So the next step is to regenerate the api-server (kubernetes) certificate so that it also covers the LB and VIP addresses.
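One quick way to confirm this (an assumed diagnostic step, not part of the original article) is to inspect the Subject Alternative Names of the current apiserver certificate; the LB and VIP addresses will be missing:

openssl x509 -in /opt/kubernetes/ssl/kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"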
Master01 regenerates the api-server certificate
Edit the certificate JSON file:

[root@master ssl]# cat kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.10.25.149",
    "10.10.25.112",
    "10.10.25.110",
    "10.10.25.111",
    "10.10.25.113",
    "10.1.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}

Note: the IPs in the JSON file include the master01 and master02 node IPs, all LB node IPs, and the VIP, because the final goal is an Nginx + Keepalived load-balancing architecture.

Generate the certificate:

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

Copy the certificates to the corresponding nodes:

cp kubernetes*.pem /opt/kubernetes/ssl/
scp kubernetes*.pem 10.10.25.112:/opt/kubernetes/ssl/
scp kubernetes*.pem 10.10.25.150:/opt/kubernetes/ssl/
scp kubernetes*.pem 10.10.25.151:/opt/kubernetes/ssl/
Restart the service of the master node
systemctl restart kube-scheduler kube-controller-manager kube-apiserver
Restart the node node service
systemctl restart kube-proxy kubelet
Verification
# kubectl get node
NAME           STATUS    ROLES   AGE   VERSION
10.10.25.150   Ready             15d   v1.10.3
10.10.25.151   Ready             15d   v1.10.3

This shows that single-node load balancing is now working. One thing to note: even when the configuration above is complete and error-free, the nodes may still show NotReady. The likely cause (visible in the node logs) is that the nodes could not register automatically; in that case, approve the pending CSRs manually by running the following on master01:

kubectl get csr | grep 'Pending' | awk 'NR>0 {print $1}' | xargs kubectl certificate approve
Lb01 node configuration
The nginx installation is the same as on lb02 and is not repeated here. The nginx configuration is also identical, except for the IP address that nginx binds to.
vim /etc/nginx/nginx.conf

stream {
    log_format main "$remote_addr $upstream_addr $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 10.10.25.149:6443;
        server 10.10.25.112:6443;
    }
    server {
        listen 10.10.25.111:6443;
        proxy_pass k8s-apiserver;
    }
}
Using Keepalived to achieve high availability of the LB nodes
Keepalived must be installed on both LB nodes.
yum install keepalived -y
Set lb02 as the keepalived MASTER node
Modify the lb02 keepalived configuration file
vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   # vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"   # script that checks nginx status
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.25.113/24
    }
    track_script {
        check_nginx
    }
}
Write a script for nginx state detection
cat /etc/keepalived/check_nginx.sh
#!/bin/sh
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")   # get the number of nginx processes
if [ "$count" -eq 0 ]; then
    systemctl stop keepalived
fi

Grant the script execute permission:

chmod +x /etc/keepalived/check_nginx.sh
Start keepalived
systemctl start keepalived
Check whether the VIP is valid.
ip addr
2: ens192: mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:0c:29:2e:86:82 brd ff:ff:ff:ff:ff:ff
    inet 10.10.25.110/24 brd 10.10.25.255 scope global dynamic ens192
       valid_lft 71256sec preferred_lft 71256sec
    inet 10.10.25.113/32 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::58b8:49be:54a7:4c43/64 scope link
       valid_lft forever preferred_lft forever

The VIP 10.10.25.113 is bound on ens192, so keepalived is working.
Configure keepalived on lb01
The lb01 keepalived configuration file is the same, except that the state is set to BACKUP (with a lower priority):
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   # vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.25.113
    }
    track_script {
        check_nginx
    }
}
Write a script for nginx state detection
cat /etc/keepalived/check_nginx.sh
#!/bin/sh
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")   # get the number of nginx processes
if [ "$count" -eq 0 ]; then
    systemctl stop keepalived
fi

Grant the script execute permission:

chmod +x /etc/keepalived/check_nginx.sh
Start keepalived on lb01
systemctl start keepalived
Keepalived failover
To test keepalived failover:
1. Open a window and keep pinging the VIP.
2. Kill nginx on the master (lb02) node.
3. Watch whether the VIP migrates to the backup node and whether the ping to the VIP drops packets.
4. Start nginx and keepalived on the master node again.
5. Watch the VIP drift back to the master node.
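The test can be run roughly as follows; this is an illustrative sketch (the interface name and hosts are taken from the configurations above, and the commands themselves are not part of the original article):

# In one terminal, watch the VIP continuously
ping 10.10.25.113

# On lb02 (current MASTER): simulate an nginx failure
systemctl stop nginx
# check_nginx.sh stops keepalived, so the VIP should leave lb02
ip addr show ens192 | grep 10.10.25.113

# Recover lb02 and watch the VIP drift back
systemctl start nginx
systemctl start keepalived
ip addr show ens192 | grep 10.10.25.113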
Access to K8s cluster
Point the node nodes at the VIP
cd /opt/kubernetes/cfg/

vim bootstrap.kubeconfig
# change server: https://10.10.25.110:6443 to server: https://10.10.25.113:6443

vim kubelet.kubeconfig
# change server: https://10.10.25.110:6443 to server: https://10.10.25.113:6443

vim kube-proxy.kubeconfig
# change server: https://10.10.25.110:6443 to server: https://10.10.25.113:6443

Then restart the services as shown below.
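Instead of editing each file by hand, the same change can be made with sed; a convenience sketch that is not part of the original steps (back up the files first; the backup location is just an example):

cd /opt/kubernetes/cfg/
cp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig /tmp/   # simple backup
sed -i 's#https://10.10.25.110:6443#https://10.10.25.113:6443#g' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
grep 'server:' *.kubeconfig   # verify the new apiserver address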
Restart the service
systemctl restart kubelet
systemctl restart kube-proxy
Modify the nginx configuration file (required for both nodes)
cat /etc/nginx/nginx.conf
user nginx;
worker_processes 2;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    log_format main "$remote_addr $upstream_addr $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 10.10.25.149:6443;
        server 10.10.25.112:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    # tcp_nopush on;
    keepalive_timeout 65;
    # gzip on;
    include /etc/nginx/conf.d/*.conf;
}

The key change is that the stream server block now listens on port 6443 on all addresses, so traffic arriving on the VIP is also accepted.
Restart Nginx
systemctl restart nginx
Verify VIP access
kubectl get node
NAME           STATUS    ROLES   AGE   VERSION
10.10.25.150   Ready             15d   v1.10.3
10.10.25.151   Ready             15d   v1.10.3

The nodes are now connecting to the apiservers through the VIP. At this point, the study of "K8s cluster deployment of high availability architecture" is over. I hope it has resolved your doubts; pairing theory with practice is the best way to learn, so go and try it out! If you want to keep learning more related knowledge, please continue to follow the website, and the editor will keep bringing you more practical articles.