K8s multi-node deployment -> load balancing with Nginx -> UI presentation
Special note: a single-master k8s cluster must already be deployed before this lab begins.
See my previous blog: https://blog.csdn.net/JarryZho/article/details/104193913
Environment deployment: related software packages and documentation:
Link: https://pan.baidu.com/s/1l4vVCkZ03la-VpIFXSz1dA
Extraction code: rg99
Use Nginx for load balancing:
Lb1: 192.168.18.147/24 mini-2
Lb2: 192.168.18.133/24 mini-3
Master node:
Master1:192.168.18.128/24 CentOS 7-3
Master2:192.168.18.132/24 mini-1
Node nodes:
Node1:192.168.18.148/24 CentOS 7-4
Node2:192.168.18.145/24 CentOS 7-5
VRRP drift address: 192.168.18.100
Multi-master cluster architecture diagram:
-master2 deployment-

Step 1: turn off master2's firewall:

```
[root@master2 ~]# systemctl stop firewalld.service
[root@master2 ~]# setenforce 0
```

Step 2: on master1, copy the kubernetes directory to master2:

```
[root@master1 k8s]# scp -r /opt/kubernetes/ root@192.168.18.132:/opt
The authenticity of host '192.168.18.132 (192.168.18.132)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.132' (ECDSA) to the list of known hosts.
root@192.168.18.132's password:
token.csv                 100%   84    90.2KB/s   00:00
kube-apiserver            100%  934   960.7KB/s   00:00
kube-scheduler            100%   94   109.4KB/s   00:00
kube-controller-manager   100%  483   648.6KB/s   00:00
kube-apiserver            100%  184MB  82.9MB/s   00:02
kubectl                   100%   55MB  81.5MB/s   00:00
kube-controller-manager   100%  155MB  70.6MB/s   00:02
kube-scheduler            100%   55MB  77.4MB/s   00:00
ca-key.pem                100% 1675     1.2MB/s   00:00
ca.pem                    100% 1359     1.5MB/s   00:00
server-key.pem            100% 1675     1.2MB/s   00:00
server.pem                100% 1643     1.7MB/s   00:00
```

Step 3: copy the startup scripts for the three components (kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service) from master1 to master2:

```
[root@master1 k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.18.132:/usr/lib/systemd/system/
root@192.168.18.132's password:
kube-apiserver.service            100%  282   286.6KB/s   00:00
kube-controller-manager.service   100%  317   223.9KB/s   00:00
kube-scheduler.service            100%  281   362.4KB/s   00:00
```

Step 4: on master2, modify the IP addresses:

```
[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master2 cfg]# vim kube-apiserver
5 --bind-address=192.168.18.132 \
7 --advertise-address=192.168.18.132 \
# change the IP addresses on lines 5 and 7 of kube-apiserver to master2's address
# after modification, press Esc to exit insert mode, then type :wq to save and exit
```

Step 5: copy the existing etcd certificates from master1 for master2 to use.
Special note: master2 must have the etcd certificates, otherwise its apiserver service cannot start.
```
[root@master1 k8s]# scp -r /opt/etcd/ root@192.168.18.132:/opt/
root@192.168.18.132's password:
etcd             100%  516   535.5KB/s   00:00
etcd             100%   18MB  90.6MB/s   00:00
etcdctl          100%   15MB  80.5MB/s   00:00
ca-key.pem       100% 1675     1.4MB/s   00:00
ca.pem           100% 1265   411.6KB/s   00:00
server-key.pem   100% 1679     2.0MB/s   00:00
server.pem       100% 1338   429.6KB/s   00:00
```

Step 6: start the three component services on master2:

```
[root@master2 cfg]# systemctl start kube-apiserver.service
[root@master2 cfg]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master2 cfg]# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-02-07 09:16:57 CST; 56min ago
[root@master2 cfg]# systemctl start kube-controller-manager.service
[root@master2 cfg]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master2 cfg]# systemctl status kube-controller-manager.service
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-02-07 09:17:02 CST; 57min ago
[root@master2 cfg]# systemctl start kube-scheduler.service
[root@master2 cfg]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master2 cfg]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-02-07 09:17:07 CST; 58min ago
```

Step 7: add the environment variable:

```
[root@master2 cfg]# vim /etc/profile
# append at the end of the file: export PATH=$PATH:/opt/kubernetes/bin/
[root@master2 cfg]# source /etc/profile
[root@master2 cfg]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.18.145   Ready    <none>   21h   v1.12.3
192.168.18.148   Ready    <none>   22h   v1.12.3
```

Node1 and node2 are now visible from master2.
At this point, master2 is deployed.
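For repeatability, the manual copy-and-edit steps above can be collapsed into one script run from master1. This is a minimal sketch under this lab's assumptions (master2 at 192.168.18.132, files under /opt/kubernetes and /opt/etcd); the script itself is my addition, not part of the original procedure:

```bash
#!/usr/bin/env bash
# Sketch: replicate master1's control-plane files to a new master and fix its IPs.
set -euo pipefail

NEW_MASTER=192.168.18.132    # master2's address in this lab

# Copy binaries, configs, certificates, etcd certs, and systemd unit files
scp -r /opt/kubernetes/ root@"$NEW_MASTER":/opt
scp -r /opt/etcd/ root@"$NEW_MASTER":/opt/
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service \
    root@"$NEW_MASTER":/usr/lib/systemd/system/

# Rewrite the bind/advertise addresses and start the three services
ssh root@"$NEW_MASTER" "
  sed -i 's/--bind-address=[0-9.]*/--bind-address=$NEW_MASTER/' /opt/kubernetes/cfg/kube-apiserver
  sed -i 's/--advertise-address=[0-9.]*/--advertise-address=$NEW_MASTER/' /opt/kubernetes/cfg/kube-apiserver
  systemctl enable --now kube-apiserver kube-controller-manager kube-scheduler
"
```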
-Nginx load balancing deployment-
Note: the nginx service is used here for load balancing. Since version 1.9, Nginx supports layer-4 forwarding (load balancing) through the newly added stream module.
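Once nginx is installed (step 2 below), you can confirm that the packaged binary was actually built with the stream module; this quick check is my addition, and the nginx.org packages used in this lab normally include it:

```bash
# Print nginx's compile-time options and look for the stream module.
nginx -V 2>&1 | grep -o 'with-stream' && echo "stream module available"
```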
Multi-node principle:
Unlike a single-node setup, the core of a multi-node cluster is that everything points at one shared address. When we built the single node, we wrote the VIP definition (192.168.18.100) into the k8s-cert.sh certificate script. The VIP fronts the apiservers, and in the multi-master setup both masters open their ports to accept apiserver requests from the node side. When a new node joins, it does not contact a master directly; it sends its apiserver request to the VIP, and the load balancer schedules and forwards the request to one of the masters. The master that receives the request then issues a certificate to the new node.
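Concretely, this means the node-side kubeconfigs name the VIP as the cluster endpoint instead of any single master. A minimal sketch, assuming the same CA path and file naming conventions as the single-master lab (the exact paths here are illustrative):

```bash
# Sketch: point a bootstrap kubeconfig at the VIP rather than one master.
export KUBE_APISERVER="https://192.168.18.100:6443"   # the VRRP drift address

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server="$KUBE_APISERVER" \
  --kubeconfig=bootstrap.kubeconfig
```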
Step 1: upload the keepalived.conf and nginx.sh files to the root directory of lb1 and lb2.

`lb1`

```
[root@lb1 ~]# ls
anaconda-ks.cfg       keepalived.conf  Public     Videos    Documents  Music
initial-setup-ks.cfg  nginx.sh         Templates  Pictures  Downloads  Desktop
```

`lb2`

```
[root@lb2 ~]# ls
anaconda-ks.cfg       keepalived.conf  Public     Videos    Documents  Music
initial-setup-ks.cfg  nginx.sh         Templates  Pictures  Downloads  Desktop
```

Step 2: operate on lb1 (192.168.18.147):

```
[root@lb1 ~]# systemctl stop firewalld.service
[root@lb1 ~]# setenforce 0
[root@lb1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
# after modification, press Esc to exit insert mode, then type :wq to save and exit
```

Reload the yum repository and install the nginx service:

```
[root@lb1 ~]# yum list
[root@lb1 ~]# yum install nginx -y
```

Insert the following stream block below line 12 of the main configuration:

```
[root@lb1 ~]# vim /etc/nginx/nginx.conf
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.18.128:6443;    # IP address of master1
        server 192.168.18.132:6443;    # IP address of master2
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
# after modification, press Esc to exit insert mode, then type :wq to save and exit
```

Check the syntax:

```
[root@lb1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```

Mark the home page so the two balancers can be told apart:

```
[root@lb1 ~]# cd /usr/share/nginx/html/
[root@lb1 html]# ls
50x.html  index.html
[root@lb1 html]# vim index.html
14 Welcome to master nginx!
# add "master" on line 14 to distinguish this host
# after modification, press Esc to exit insert mode, then type :wq to save and exit
```

Start the service:

```
[root@lb1 html]# systemctl start nginx
```

Verify in a browser: enter 192.168.18.147 and you should see the master nginx home page.
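At this point the balancer should also be listening on port 6443 and relaying TCP to the two apiservers. A quick sanity check, my addition rather than an original step (curl's -k flag skips certificate verification, and even a 401/403 response proves the layer-4 path works):

```bash
# Confirm the stream proxy is listening on 6443 on the balancer.
ss -lnt | grep 6443

# Drive one TLS connection through the proxy to an apiserver;
# every handshake is recorded by the stream access log configured above.
curl -k https://192.168.18.147:6443/version || true
tail -n 3 /var/log/nginx/k8s-access.log
```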
Deploy the keepalived service:

```
[root@lb1 html]# yum install keepalived -y
```

Modify the configuration file (overwrite the default with the keepalived.conf we uploaded earlier):

```
[root@lb1 html]# cd ~
[root@lb1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite '/etc/keepalived/keepalived.conf'? yes
[root@lb1 ~]# vim /etc/keepalived/keepalived.conf
18     script "/etc/nginx/check_nginx.sh"   # line 18: point the script path to /etc/nginx/check_nginx.sh (written below)
23     interface ens33                      # line 23: change eth0 to ens33; query your NIC name with ifconfig
24     virtual_router_id 51                 # line 24: VRRP router ID; each instance must be unique
25     priority 100                         # line 25: priority; the backup server is set to 90
31     virtual_ipaddress {
32         192.168.18.100/24                # line 32: the VIP set earlier
# delete everything below line 38
# after modification, press Esc to exit insert mode, then type :wq to save and exit
```

Write the health-check script (a quick manual test of it appears after the lb2 nginx setup below):

```
[root@lb1 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")   # count nginx processes
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
# if the count is 0, stop the keepalived service
[root@lb1 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb1 ~]# ls /etc/nginx/check_nginx.sh
/etc/nginx/check_nginx.sh   # the script is now executable (shown in green)
[root@lb1 ~]# systemctl start keepalived
[root@lb1 ~]# ip a
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1370sec preferred_lft 1370sec
    inet 192.168.18.100/24 scope global secondary ens33   # the drift address is on lb1 at this time
       valid_lft forever preferred_lft forever
    inet6 fe80::1cb1:b734:7f72:576f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
```

Step 3: operate on lb2 (192.168.18.133):

```
[root@lb2 ~]# systemctl stop firewalld.service
[root@lb2 ~]# setenforce 0
[root@lb2 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
# after modification, press Esc to exit insert mode, then type :wq to save and exit
```

Reload the yum repository and install the nginx service:

```
[root@lb2 ~]# yum list
[root@lb2 ~]# yum install nginx -y
```

Insert the same stream block below line 12:

```
[root@lb2 ~]# vim /etc/nginx/nginx.conf
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.18.128:6443;    # IP address of master1
        server 192.168.18.132:6443;    # IP address of master2
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
# after modification, press Esc to exit insert mode, then type :wq to save and exit
```

Check the syntax:

```
[root@lb2 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```

Mark the home page:

```
[root@lb2 ~]# vim /usr/share/nginx/html/index.html
14 Welcome to backup nginx!
# add "backup" on line 14 to distinguish this host
# after modification, press Esc to exit insert mode, then type :wq to save and exit
```

Start the service:

```
[root@lb2 ~]# systemctl start nginx
```

Verify in a browser: enter 192.168.18.133 and you should see the backup nginx home page.
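Before wiring keepalived to the script on lb2 as well, it is worth testing check_nginx.sh by hand on lb1; this test is my addition:

```bash
# With nginx running, the pipeline prints a non-zero count;
# after `pkill nginx` it prints 0.
ps -ef | grep nginx | egrep -cv "grep|$$"

# Run the check once manually: if the count is 0 it stops keepalived,
# which releases the VIP so the backup balancer can claim it.
bash /etc/nginx/check_nginx.sh
```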
Deploy the keepalived service:

```
[root@lb2 ~]# yum install keepalived -y
```

Modify the configuration file (again overwriting it with the uploaded keepalived.conf):

```
[root@lb2 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite '/etc/keepalived/keepalived.conf'? yes
[root@lb2 ~]# vim /etc/keepalived/keepalived.conf
18     script "/etc/nginx/check_nginx.sh"   # line 18: point the script path to /etc/nginx/check_nginx.sh (written below)
22     state BACKUP                         # line 22: change the role from MASTER to BACKUP
23     interface ens33                      # line 23: change eth0 to ens33
24     virtual_router_id 51                 # line 24: VRRP router ID; each instance must be unique
25     priority 90                          # line 25: priority; the backup server uses 90
31     virtual_ipaddress {
32         192.168.18.100/24                # line 32: the VIP set earlier
# delete everything below line 38
# after modification, press Esc to exit insert mode, then type :wq to save and exit
```

Write the same health-check script:

```
[root@lb2 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")   # count nginx processes
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
# if the count is 0, stop the keepalived service
[root@lb2 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb2 ~]# ls /etc/nginx/check_nginx.sh
/etc/nginx/check_nginx.sh   # the script is now executable (shown in green)
[root@lb2 ~]# systemctl start keepalived
[root@lb2 ~]# ip a
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:b7:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.133/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 958sec preferred_lft 958sec
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
# lb2 does not hold 192.168.18.100 at this time, because the address is on lb1 (the MASTER)
```

Step 4: verify address drift.

Stop the nginx service on lb1:

```
[root@lb1 ~]# pkill nginx
[root@lb1 ~]# systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2020-02-07 12:16:39 CST; 1min 40s ago
# the nginx service is now down
```

Check whether the keepalived service was stopped at the same time:

```
[root@lb1 ~]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
# keepalived is stopped, which means the check_nginx.sh script executed successfully
[root@lb1 ~]# ps -ef | grep nginx | egrep -cv "grep|$$"
0
# the count is 0, so the script's condition was met
```
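While the failover happens, you can also watch the VRRP advertisements themselves move from one balancer to the other; this diagnostic is my addition:

```bash
# On either balancer: watch VRRP advertisements on ens33.
# IP protocol 112 is VRRP. Before the failover they come from
# lb1 (192.168.18.147); once keepalived stops on lb1 they
# should come from lb2 (192.168.18.133).
tcpdump -i ens33 -nn ip proto 112
```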
Since keepalived has been stopped, check whether the drift address still exists on lb1:

```
[root@lb1 ~]# ip a
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1771sec preferred_lft 1771sec
    inet6 fe80::1cb1:b734:7f72:576f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
# the 192.168.18.100 drift address is gone from lb1; if the hot standby works, it should have drifted to lb2
```

Then check lb2 to see whether the drift address arrived:

```
[root@lb2 ~]# ip a
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:b7:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.133/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1656sec preferred_lft 1656sec
    inet 192.168.18.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
# the drift address 192.168.18.100 has moved to lb2
```

Step 5: recovery. Start the nginx and keepalived services on lb1:

```
[root@lb1 ~]# systemctl start nginx
[root@lb1 ~]# systemctl start keepalived
```

The drift address returns to lb1 (and disappears from lb2):

```
[root@lb1 ~]# ip a
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1051sec preferred_lft 1051sec
    inet 192.168.18.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1cb1:b734:7f72:576f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
```

Step 6: from the host, use the cmd prompt to test whether the drift address answers:

```
C:\Users\zhn> ping 192.168.18.100
Pinging 192.168.18.100 with 32 bytes of data:
Reply from 192.168.18.100: bytes=32 time...
```

The replies confirm that the VIP is reachable.
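To see how quickly the VIP moves during such a drill, a simple watch loop can be run from any Linux machine that can reach it; this sketch is my addition:

```bash
# Ping the VIP once a second and log reachability; expect at most a
# packet or two lost while keepalived moves 192.168.18.100 between lb1 and lb2.
while true; do
  if ping -c 1 -W 1 192.168.18.100 > /dev/null 2>&1; then
    echo "$(date +%T) VIP reachable"
  else
    echo "$(date +%T) VIP DOWN"
  fi
  sleep 1
done
```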