
An Introduction to Multi-Node Deployment and How to Build Load Balancing


This article introduces multi-node deployment and how to build load balancing, in the hope that it helps you in practical applications. Load balancing touches many areas, the theory is not complicated, and plenty of material exists online, so today we will draw on accumulated industry experience to walk through it.

Introduction to multi-node deployment

In a production environment, when building a Kubernetes platform we also have to consider the high availability of the platform itself. A Kubernetes platform is managed from the master: every node server is provisioned and managed by the master server. In the previous article we built a single-node deployment (one master server), so when that master server goes down the whole platform becomes unusable. To avoid this we move to a multi-node (multi-master) deployment, which gives the platform services high availability.
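With two masters up, it is worth confirming that each kube-apiserver answers on its own before any load balancer is involved. A minimal sketch from any machine that can reach them; /healthz is a standard apiserver endpoint, and -k skips certificate verification for brevity (depending on your RBAC settings the reply may be "ok" or a Forbidden message, but either way a reply proves the apiserver is up):

curl -sk https://192.168.80.12:6443/healthz    // master01
curl -sk https://192.168.80.11:6443/healthz    // master02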

Introduction to load balancing

When we run a multi-node deployment, several masters are running at the same time, yet the same master always ends up handling the work. When that master server faces many request tasks at once, its processing slows down; at the same time it is a waste of resources for the other master servers to handle no requests at all. This is the point at which we consider a load balancing service.

This time the load balancer is built with the nginx service acting as a layer-4 (TCP) load balancer, and keepalived providing a floating virtual IP (the "drift address").
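nginx's stream module balances TCP connections across its upstream servers in round-robin fashion by default. As a hedged aside, the upstream block we configure later could also carry nginx's standard passive health-check parameters, so a failed master is skipped automatically; max_fails and fail_timeout are real stream-upstream options, but the values here are illustrative and not part of the original build:

upstream k8s-apiserver {
    # mark a master as unavailable for 30s after 3 consecutive connection failures
    server 192.168.80.12:6443 max_fails=3 fail_timeout=30s;
    server 192.168.80.11:6443 max_fails=3 fail_timeout=30s;
}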

Experimental deployment

Experimental environment:

lb01: 192.168.80.19 (load balancing server)
lb02: 192.168.80.20 (load balancing server)
Master01: 192.168.80.12
Master02: 192.168.80.11
Node01: 192.168.80.13
Node02: 192.168.80.14

Multi-master deployment

master01 server operations:

[root@master01 kubeconfig]# scp -r /opt/kubernetes/ root@192.168.80.11:/opt    // copy the kubernetes directory straight to master02
The authenticity of host '192.168.80.11 (192.168.80.11)' can't be established.
ECDSA key fingerprint is SHA256:Ih0NpZxfLb+MOEFW8B+ZsQ5R8Il2Sx8dlNov632cFlo.
ECDSA key fingerprint is MD5:a9:ee:e5:cc:40:c7:9e:24:5b:c1:cd:c1:7b:31:42:0f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.80.11' (ECDSA) to the list of known hosts.
root@192.168.80.11's password:
token.csv                 100%   84    61.4KB/s   00:00
kube-apiserver            100%  929     1.6MB/s   00:00
kube-scheduler            100%   94   183.2KB/s   00:00
kube-controller-manager   100%  483   969.2KB/s   00:00
kube-apiserver            100% 184MB  106.1MB/s   00:01
kubectl                   100%  55MB   85.9MB/s   00:00
kube-controller-manager   100% 155MB  111.9MB/s   00:01
kube-scheduler            100%  55MB  115.8MB/s   00:00
ca-key.pem                100% 1675     2.7MB/s   00:00
ca.pem                    100% 1359     2.6MB/s   00:00
server-key.pem            100% 1679     2.5MB/s   00:00
server.pem                100% 1643     2.7MB/s   00:00

[root@master01 kubeconfig]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.80.11:/usr/lib/systemd/system    // copy the startup scripts of the three master components
root@192.168.80.11's password:
kube-apiserver.service           100%  282  274.4KB/s   00:00
kube-controller-manager.service  100%  317  403.5KB/s   00:00
kube-scheduler.service           100%  281  379.4KB/s   00:00

[root@master01 kubeconfig]# scp -r /opt/etcd/ root@192.168.80.11:/opt/
// Special note: master02 must have the etcd certificates, otherwise its apiserver service cannot start.
// Here we copy the existing etcd certificates from master01 for master02 to use.
root@192.168.80.11's password:
etcd             100%  509  275.7KB/s   00:00
etcd             100%  18MB  95.3MB/s   00:00
etcdctl          100%  15MB  75.1MB/s   00:00
ca-key.pem       100% 1679  941.1KB/s   00:00
ca.pem           100% 1265    1.6MB/s   00:00
server-key.pem   100% 1675    2.0MB/s   00:00
server.pem       100% 1338    1.5MB/s   00:00

master02 server operations:

[root@master02 ~]# systemctl stop firewalld.service    // turn off the firewall
[root@master02 ~]# setenforce 0    // disable selinux
[root@master02 ~]# vim /opt/kubernetes/cfg/kube-apiserver    // edit the configuration file
...
--etcd-servers=https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379 \
--bind-address=192.168.80.11 \        // change the IP address
--secure-port=6443 \
--advertise-address=192.168.80.11 \   // change the IP address
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
...
:wq
[root@master02 ~]# systemctl start kube-apiserver.service    // start the apiserver service
[root@master02 ~]# systemctl enable kube-apiserver.service    // enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master02 ~]# systemctl start kube-controller-manager.service    // start controller-manager
[root@master02 ~]# systemctl enable kube-controller-manager.service    // enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master02 ~]# systemctl start kube-scheduler.service    // start the scheduler
[root@master02 ~]# systemctl enable kube-scheduler.service    // enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master02 ~]# vim /etc/profile    // add an environment variable
...
export PATH=$PATH:/opt/kubernetes/bin/
:wq
[root@master02 ~]# source /etc/profile    // re-read the profile
[root@master02 ~]# kubectl get node    // view node information
NAME            STATUS   ROLES    AGE    VERSION
192.168.80.13   Ready    <none>   146m   v1.12.3
192.168.80.14   Ready    <none>   144m   v1.12.3
// the multi-master deployment is successful
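Since master02's apiserver depends on those copied etcd certificates, it can be worth confirming they actually work against the etcd cluster. A sketch using etcdctl's v2-style flags, assuming the binaries and certificates landed under /opt/etcd/bin/ and /opt/etcd/ssl/ as in the single-node article; adjust the paths if your layout differs:

[root@master02 ~]# /opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379" \
  cluster-health    // every member should report "is healthy"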

Deploy load balancing

lb01 and lb02 operate synchronously. (A prepared keepalived configuration file can be downloaded; extraction code: fkoh.)

[root@lb01 ~]# systemctl stop firewalld.service
[root@lb01 ~]# setenforce 0
[root@lb01 ~]# vim /etc/yum.repos.d/nginx.repo    // configure a yum source for the nginx service
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
:wq
[root@lb01 yum.repos.d]# yum list    // reload yum
Loaded plugins: fastestmirror
base   | 3.6 kB 00:00:00
extras | 2.9 kB 00:00:00
...
[root@lb01 yum.repos.d]# yum install nginx -y    // install the nginx service
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.163.com
...
[root@lb01 yum.repos.d]# vim /etc/nginx/nginx.conf    // edit the nginx configuration file
...
events {
    worker_connections 1024;
}
stream {    // add the layer-4 forwarding module
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.80.12:6443;    // note the IP addresses of the two masters
        server 192.168.80.11:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
...
:wq
[root@lb01 yum.repos.d]# systemctl start nginx    // start the nginx service, then access-test it
[root@lb01 yum.repos.d]# yum install keepalived -y    // install the keepalived service
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.163.com
...
[root@lb01 yum.repos.d]# mount.cifs //192.168.80.2/shares/K8S/k8s02 /mnt/    // mount the host directory
Password for root@//192.168.80.2/shares/K8S/k8s02:
[root@lb01 yum.repos.d]# cp /mnt/keepalived.conf /etc/keepalived/keepalived.conf    // overwrite the stock configuration with the prepared keepalived configuration file
cp: overwrite "/etc/keepalived/keepalived.conf"? yes
[root@lb01 yum.repos.d]# vim /etc/keepalived/keepalived.conf    // edit the configuration file
...
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    // note the script location
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33               // note the NIC name
    virtual_router_id 51          // VRRP route ID; each instance must be unique
    priority 100                  // priority; the backup server is set to 90
    advert_int 1                  // VRRP heartbeat advertisement interval, 1 second by default
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.100/24         // the floating (drift) IP address
    }
    track_script {
        check_nginx
    }
}
// delete everything below this block
:wq
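Before moving on to keepalived it is cheap to verify the layer-4 proxy itself. A small sketch using standard tooling; the grep target is simply our own listen port:

[root@lb01 yum.repos.d]# nginx -t                // syntax-check the edited nginx.conf
[root@lb01 yum.repos.d]# ss -lntp | grep 6443    // nginx should now be listening on port 6443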

Modify the keepalived configuration file on the lb02 server

[root@lb02 ~]# vim /etc/keepalived/keepalived.conf
...
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    // note the script location
}
vrrp_instance VI_1 {
    state BACKUP                  // change the role to backup
    interface ens33               // NIC name
    virtual_router_id 51          // VRRP route ID; each instance must be unique
    priority 90                   // priority; the backup server is set to 90
    advert_int 1                  // VRRP heartbeat advertisement interval, 1 second by default
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.100/24         // the floating (drift) IP address
    }
    track_script {
        check_nginx
    }
}
// delete everything below this block
:wq
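Once both keepalived instances are running, the MASTER multicasts a VRRP advertisement roughly once per second (our advert_int). A quick way to watch this from the backup, sketched with standard tcpdump syntax; VRRP is IP protocol 112:

[root@lb02 ~]# tcpdump -i ens33 ip proto 112    // you should see advertisements from lb01 for vrid 51, priority 100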

Synchronous operation of lb01 and lb02

[root@lb01 yum.repos.d]# vim /etc/nginx/check_nginx.sh    // write the nginx status-check script
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
:wq
[root@lb01 yum.repos.d]# chmod +x /etc/nginx/check_nginx.sh    // make the script executable
[root@lb01 yum.repos.d]# systemctl start keepalived    // start the service

lb01 server operation:

[root@lb01 ~]# ip a    // view the address information
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33    // the virtual address is configured successfully
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever

lb02 server operation:

[root@lb02 ~]# ip a    // view the address information
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7d:c7:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.20/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::cd8b:b80c:8deb:251f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
// no virtual IP address here: lb02 is the standby server

Stop the nginx service on the lb01 server, then check the IP addresses on both servers again to see whether the virtual IP has drifted successfully:

[root@lb01 ~]# systemctl stop nginx.service
[root@lb01 nginx]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
// the virtual address is gone from lb01
[root@lb02 ~]# ip a    // view on the lb02 server
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7d:c7:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.20/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33    // the drift address has moved to lb02
       valid_lft forever preferred_lft forever
    inet6 fe80::cd8b:b80c:8deb:251f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
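The failover above works because check_nginx.sh stops keepalived as soon as no nginx process is left, which releases the VIP to the backup. As a hedged variant, a sketch of our own (not from the original build) that first tries to revive nginx before surrendering the VIP:

count=$(pgrep -c -x nginx || true)    # count processes whose exact name is nginx
if [ "$count" -eq 0 ]; then
    systemctl restart nginx           # try to recover in place first
    sleep 2
    pgrep -x nginx >/dev/null || systemctl stop keepalived    # still dead: release the VIP
fi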
Restart the nginx and keepalived services on the lb01 server:

[root@lb01 nginx]# systemctl start nginx
[root@lb01 nginx]# systemctl start keepalived.service
[root@lb01 nginx]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33    // the drift address is preempted back because lb01 has the higher configured priority
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever

Modify the configuration files on all node nodes:

[root@node01 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
...
server: https://192.168.80.100:6443
...
:wq
[root@node01 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
...
server: https://192.168.80.100:6443
...
:wq
[root@node01 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
...
server: https://192.168.80.100:6443
...
:wq
[root@node01 ~]# systemctl restart kubelet.service    // restart the services
[root@node01 ~]# systemctl restart kube-proxy.service

View the log information on the lb01 server:

[root@lb01 nginx]# tail /var/log/nginx/k8s-access.log
192.168.80.13 192.168.80.12:6443 - [11/Feb/2020:15:23:52 +0800] 200 1118
192.168.80.13 192.168.80.11:6443 - [11/Feb/2020:15:23:52 +0800] 200 1119
192.168.80.14 192.168.80.12:6443 - [11/Feb/2020:15:26:01 +0800] 200 1119
192.168.80.14 192.168.80.12:6443 - [11/Feb/2020:15:26:01 +0800] 200 1120
// node requests are being distributed across both masters

Test the platform functions on master01:

[root@master01 ~]# kubectl run nginx --image=nginx    // create a pod
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@master01 ~]# kubectl get pods    // view the pod information
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-sdcpl   1/1     Running   0          33m    // created successfully
[root@master01 ~]# kubectl logs nginx-dbddb74b8-sdcpl    // view the log information
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) (pods/log nginx-dbddb74b8-sdcpl)    // permission error
[root@master01 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous    // fix the log permission error
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master01 ~]# kubectl logs nginx-dbddb74b8-sdcpl    // check the logs again
[root@master01 ~]#    // nothing has accessed the pod yet, so no log lines are shown
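Binding cluster-admin to system:anonymous is convenient in a lab but grants unauthenticated users full control of the cluster. A narrower sketch of our own (the role and binding names here are illustrative, not from the original article) that only permits reading pod logs:

[root@master01 ~]# kubectl create clusterrole pod-log-reader --verb=get --resource=pods/log
[root@master01 ~]# kubectl create clusterrolebinding anonymous-log-reader --clusterrole=pod-log-reader --user=system:anonymous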

Access the nginx web page from the node

[root@master01 ~]# kubectl get pods -o wide    // first view the pod network information on the master01 node
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
nginx-dbddb74b8-sdcpl   1/1     Running   0          38m   172.17.33.2   192.168.80.14   <none>
[root@node01 ~]# curl 172.17.33.2    // the pod can be accessed directly from the node

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org. Commercial support is available at nginx.com.

Thank you for using nginx.

Go back to the master01 server and view the log information:

[root@master01 ~]# kubectl logs nginx-dbddb74b8-sdcpl
172.17.12.0 - - [12/Feb/2020:06:45:54 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
// once this access record appears, the multi-node build and the load balancing configuration are complete.
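As a closing check, you can confirm that nginx really forwards at layer 4, i.e. the TLS session terminates at the apiserver rather than at the load balancer. A sketch using standard openssl tooling; the subject printed should come from the apiserver certificate generated earlier, proving the TCP stream is passed through untouched:

[root@lb01 ~]# openssl s_client -connect 192.168.80.100:6443 </dev/null 2>/dev/null | openssl x509 -noout -subject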

After reading this introduction to multi-node deployment and how to build load balancing, if there is anything else you would like to know, you can look it up in the industry information or ask our professional technical engineers, who have more than ten years of experience in the industry.
