
Multi-node deployment of Kubernetes


This article is about the multi-node deployment of Kubernetes: adding a second master, load balancing the apiservers with Nginx, and providing failover with Keepalived. The editor found it very practical, so it is shared here for you to learn from.

Multi-master cluster architecture diagram (image not reproduced here)

Master2 deployment

1. First, stop master2's firewall service and disable SELinux enforcement

[root@master2 ~]# systemctl stop firewalld.service
[root@master2 ~]# setenforce 0
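These two commands only hold until the next reboot; a minimal sketch of making them permanent on a stock CentOS 7 host (assuming the default firewalld and SELinux setup):

systemctl disable firewalld.service                                     # keep firewalld off across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     # persist the SELinux change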

2. On master1, copy the kubernetes directory and the three server component unit files to master2

[root@master1 k8s]# scp -r /opt/kubernetes/ root@192.168.18.140:/opt
[root@master1 k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.18.140:/usr/lib/systemd/system/

3. Modify the configuration file on master2

[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# vim kube-apiserver
5 --bind-address=192.168.18.140 \
7 --advertise-address=192.168.18.140 \
# the IP addresses on lines 5 and 7 need to be changed to master2's address
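If you would rather script the edit, the same substitution can be done with sed; a sketch that assumes the copied file still carries master1's address (192.168.18.128):

sed -i 's/192.168.18.128/192.168.18.140/g' /opt/kubernetes/cfg/kube-apiserver
grep 192.168.18.140 /opt/kubernetes/cfg/kube-apiserver    # confirm both lines now show master2's address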

4. Copy the existing etcd certificates from master1 for master2 to use.

(Note: master2 must have the etcd certificates, otherwise the apiserver service cannot start.)

[root@master1 k8s]# scp -r /opt/etcd/ root@192.168.18.140:/opt/
root@192.168.18.140's password:
etcd             100%  516   535.5KB/s   00:00
etcd             100%   18MB  90.6MB/s   00:00
etcdctl          100%   15MB  80.5MB/s   00:00
ca-key.pem       100% 1675     1.4MB/s   00:00
ca.pem           100% 1265   411.6KB/s   00:00
server-key.pem   100% 1679     2.0MB/s   00:00
server.pem       100% 1338   429.6KB/s   00:00
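Before starting apiserver, it is worth confirming on master2 that the certificates actually arrived; a quick check, assuming the /opt/etcd layout with an ssl subdirectory used earlier in this series:

ls /opt/etcd/ssl/
# expected: ca-key.pem  ca.pem  server-key.pem  server.pem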

5. Start the three component services on master2

[root@master2 cfg]# systemctl start kube-apiserver.service     ## start the service
[root@master2 cfg]# systemctl enable kube-apiserver.service    ## enable it at boot
[root@master2 cfg]# systemctl start kube-controller-manager.service
[root@master2 cfg]# systemctl enable kube-controller-manager.service
[root@master2 cfg]# systemctl start kube-scheduler.service
[root@master2 cfg]# systemctl enable kube-scheduler.service
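To confirm all three components actually came up, a small sketch that loops over the unit names started above:

for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    echo -n "$svc: "; systemctl is-active $svc    # each line should print "active"
done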

6. Modify environment variables

[root@master2 cfg]# vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/     ## add the environment variable
[root@master2 cfg]# source /etc/profile    ## reload the profile
[root@master2 cfg]# kubectl get node       ## view cluster node information
NAME             STATUS   ROLES    AGE   VERSION
192.168.18.129   Ready    <none>   21h   v1.12.3
192.168.18.130   Ready    <none>   22h   v1.12.3

At this point you can see that node1 and node2 have joined the cluster, and the master2 deployment is complete.
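Besides the node list, the health of the control-plane components can be checked from master2 as well; on a v1.12-era cluster the componentstatuses API is still available:

kubectl get cs    # the scheduler, controller-manager, and etcd members should all report Healthy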

Nginx load balancer deployment (lb01 and lb02 are operated on in the same way)

Install the nginx service and copy the nginx.sh and keepalived.conf scripts to the home directory

[root@lb1 ~]# ls
anaconda-ks.cfg  initial-setup-ks.cfg  keepalived.conf  nginx.sh  Desktop  Documents  Downloads  Music  Pictures  Public  Templates  Videos
[root@lb1 ~]# systemctl stop firewalld.service
[root@lb1 ~]# setenforce 0
## add the nginx yum repository
[root@lb1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
## reload the yum repository
[root@lb1 ~]# yum list
## install the nginx service
[root@lb1 ~]# yum install nginx -y
## insert the stream module below line 12
[root@lb1 ~]# vim /etc/nginx/nginx.conf
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.18.128:6443;    # master1's IP address
        server 192.168.18.140:6443;    # master2's IP address
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
## check the syntax
[root@lb1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
## modify the home pages so the two servers can be told apart
[root@lb1 ~]# cd /usr/share/nginx/html/
[root@lb1 html]# ls
50x.html  index.html
[root@lb1 html]# vim index.html
14 <h1>Welcome to master nginx!</h1>    # add "master" on line 14
[root@lb2 ~]# cd /usr/share/nginx/html/
[root@lb2 html]# ls
50x.html  index.html
[root@lb2 html]# vim index.html
14 <h1>Welcome to backup nginx!</h1>    # add "backup" on line 14 to distinguish it
## start the service
[root@lb1 ~]# systemctl start nginx
[root@lb2 ~]# systemctl start nginx

Verify in a browser: entering 192.168.18.150 brings up master's nginx home page

Verify in a browser: entering 192.168.18.151 brings up backup's nginx home page
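If the load balancers are headless, the same verification can be done with curl from any host on the segment; a sketch whose grep patterns assume the line-14 edits made above:

curl -s http://192.168.18.150 | grep master    # expect the "Welcome to master nginx!" line
curl -s http://192.168.18.151 | grep backup    # expect the "Welcome to backup nginx!" line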

Keepalived installation and deployment

lb01 and lb02 are operated on in the same way

1. Install keepalived

[root@lb1 html]# yum install keepalived -y

2. Modify the configuration file

[root@lb1 ~]# ls
anaconda-ks.cfg  initial-setup-ks.cfg  keepalived.conf  nginx.sh  Desktop  Documents  Downloads  Music  Pictures  Public  Templates  Videos
[root@lb1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite "/etc/keepalived/keepalived.conf"? yes
[root@lb1 ~]# vim /etc/keepalived/keepalived.conf

# lb01 (MASTER) is configured as follows:
! Configuration File for keepalived

global_defs {
   # addresses that receive notification mail
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address for notification mail
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51    # VRRP router ID; every instance must be unique
    priority 100            # priority; the backup server is set to 90
    advert_int 1            # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.18.100/24
    }
    track_script {
        check_nginx
    }
}

# lb02 (BACKUP) is configured identically, except for these two lines:
    state BACKUP
    priority 90             # lower than the MASTER's 100

3. Create the nginx check script

[root@lb1 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ]; then
    # no nginx processes remain, so stop keepalived and let the VIP drift to the backup
    systemctl stop keepalived
fi

4. Grant execute permission and start the service

[root@lb1 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb1 ~]# systemctl start keepalived
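As a quick sanity check (a sketch that assumes nginx is currently running on lb01), run the script by hand and confirm keepalived survives and holds the VIP:

bash /etc/nginx/check_nginx.sh && systemctl is-active keepalived    # expect "active" while nginx is up
ip addr show ens33 | grep 192.168.18.100                            # the VIP should be listed on the MASTER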

5. View address information

lb01 address information

[root@lb1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ba:e6:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.150/24 brd 192.168.18.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.18.100/24 scope global secondary ens33     ## the drift address is on lb01
       valid_lft forever preferred_lft forever
    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed
    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed
    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

lb02 address information

[root@lb2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:1d:ec:b0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.151/24 brd 192.168.18.255 scope global ens33     ## no drift address here yet
       valid_lft forever preferred_lft forever
    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed
    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed
    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

6. Test failover

Stop nginx on lb01 to simulate a failure and verify that the drift address moves

[root@lb1 ~]# pkill nginx
[root@lb1 ~]# systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sat 2020-02-08 16:54:45 CST; 11s ago
     Docs: http://nginx.org/en/docs/
  Process: 13156 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=1/FAILURE)
 Main PID: 6930 (code=exited, status=0/SUCCESS)
[root@lb1 ~]# systemctl status keepalived.service
## the keepalived service has also stopped, showing that check_nginx.sh took effect
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

View the lb01 address:

[root@lb1 ~]# ip a
...
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ba:e6:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.150/24 brd 192.168.18.255 scope global ens33
       valid_lft forever preferred_lft forever
    ## the 192.168.18.100 drift address is no longer present on lb01
...

View the lb02 address:

[root@lb2 ~]# ip a
...
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:1d:ec:b0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.151/24 brd 192.168.18.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.18.100/24 scope global secondary ens33     ## the drift address has moved to lb02
       valid_lft forever preferred_lft forever
...

Recovery: on lb01, start the nginx service and then the keepalived service

[root@lb1 ~]# systemctl start nginx
[root@lb1 ~]# systemctl start keepalived.service
[root@lb1 ~]# ip a
...
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ba:e6:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.150/24 brd 192.168.18.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.18.100/24 scope global secondary ens33     ## the drift address has moved back to lb01
       valid_lft forever preferred_lft forever
...

Because the drift address is on lb01 again, accessing it should return the nginx home page containing "master"
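This can be confirmed without a browser; a one-line sketch whose grep pattern assumes the line-14 edit made on lb01:

curl -s http://192.168.18.100 | grep master    # expect the "Welcome to master nginx!" line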

Bind the node nodes to the VIP address

1. Modify the node configuration files to use the unified VIP

[root@node1 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@node1 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@node1 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
# in all three files, change the server entry to the VIP address:
server: https://192.168.18.100:6443
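The same edit can be scripted across all three files; a sketch that assumes they currently point at master1's address (192.168.18.128):

cd /opt/kubernetes/cfg/
for f in bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig; do
    sed -i 's#https://192.168.18.128:6443#https://192.168.18.100:6443#' $f    # swap in the VIP
done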

2. After the replacement, self-check with grep and restart the services

[root@node1 ~]# cd /opt/kubernetes/cfg/
[root@node1 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.18.100:6443
kubelet.kubeconfig:      server: https://192.168.18.100:6443
kube-proxy.kubeconfig:   server: https://192.168.18.100:6443
[root@node1 cfg]# systemctl restart kubelet.service
[root@node1 cfg]# systemctl restart kube-proxy.service

3. View nginx's k8s log on lb01

[root@lb1 ~]# tail /var/log/nginx/k8s-access.log
192.168.18.130 192.168.18.128:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.130 192.168.18.140:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.129 192.168.18.128:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120
192.168.18.129 192.168.18.140:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120

4. Operate on master1

## test creating a pod
[root@master1 ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
## check the status
[root@master1 ~]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-7hdfj   0/1     ContainerCreating   0          32s
## the ContainerCreating status means the pod is still being created
[root@master1 ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-7hdfj   1/1     Running   0          73s
## now Running: creation is complete and the pod is up
## note the log problem:
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) (pods/log nginx-dbddb74b8-7hdfj)
## the log is not visible yet; permission needs to be granted
## bind the anonymous user in the cluster to the administrator role
[root@master1 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj    ## no longer reports an error
## view the pod network
[root@master1 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE
nginx-dbddb74b8-7hdfj   1/1     Running   0          20m   172.17.32.2   192.168.18.129   <none>

5. The pod can be accessed directly from node1, which is on the corresponding network segment.

[root@node1 ~]# curl 172.17.32.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

## what you see here is the nginx default page served from inside the container

The access generates a log entry, which we can go back to master1 to view.

[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj
172.17.32.1 - - [07/Feb/2020:06:52:53 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
## here you can see node1's access recorded via the gateway address (172.17.32.1)

What is described above are the details of Kubernetes multi-node deployment; the best way to grasp the specifics is to work through the experiment hands-on. If you want to know more, welcome to follow the industry information channel!
