
Deploy a complete K8S cluster (part two)


Deploy UI

[root@k8s-master1 YAML]# kubectl apply -f dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

[root@k8s-master1 YAML]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-566cddb686-v5s8t   1/1     Running   0          22m
kubernetes-dashboard-7b5bf5d559-sqpd7        1/1     Running   0          22m

[root@k8s-master1 YAML]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.0.0.180   <none>        8000/TCP        23m
kubernetes-dashboard        NodePort    10.0.0.163   <none>        443:30001/TCP   23m
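The kubernetes-dashboard Service is exposed as NodePort 30001 over HTTPS, so it should be reachable on any node's IP. A quick reachability sketch (replace <NodeIP> with a real node address; -k skips verification of the dashboard's self-signed certificate):

curl -k -o /dev/null -w '%{http_code}\n' https://<NodeIP>:30001/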

[root@k8s-master1 YAML]# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
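The contents of dashboard-adminuser.yaml are not shown here; the sketch below matches the two objects the apply reports, modeled on the upstream dashboard sample-user manifest (binding to cluster-admin is the usual choice for a lab setup):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard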

Look up the token that can access the dashboard:

[root@k8s-master1 src]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Name:         admin-user-token-2k5k9
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 14110df7-4a24-4a06-a99e-18c3a60c5b13

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkV5VUtIek9UeUs1WnRnbzJzVzgyaEJKblM3UDFiMXdHTEdPeFhkZmxwaDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTJrNWs5Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxNDExMGRmNy00YTI0LTRhMDYtYTk5ZS0xOGMzYTYwYzViMTMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.eURKAOmq-DOPyf7B_ZH2nIg4QxcMhmy6VL4miZuoXx7g70V69rhQjEdR156TujxHkXIFz4X6biifycm_gLxShn2sAwoiBohzKOogJZLo1hXWl6pAGHbAGLuEZsvN5GrSmyUhC955ztheNve0xx5QTwFLtXFSOuTwnzzKEHYMyfivYTVmf8iovx0S2SS1IQxqFOZxMNH5DKUCK7tleEZxnXcHzUG2zTSn6D7nL8EtAzOAD_kVx6dKsQt4fbMqiOcyG_qFfFopU9ZJwsILTDma4k3iecRAb4KmNlRaasFdXLptF6SDs0IceHqE9hm3yoOB7pZXWsptNafmcrFCSOEjaQ

To log in at the dashboard address above (https://<NodeIP>:30001), there are two verification methods: kubeconfig or token. Here we choose token verification: select "Token" on the login page and paste the token value printed above.

(Screenshot omitted: logging in to the dashboard succeeds.)
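Rather than copying the token out of the describe output by hand, it can be printed directly; a sketch that assumes the default auto-created service-account token secret of this Kubernetes version:

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d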

Deploy coredns:

[root@k8s-master1 YAML]# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
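Before testing resolution, it is worth confirming the coredns pods are up. The namespace and label below assume the stock coredns manifest, which deploys into kube-system with the k8s-app=kube-dns label:

kubectl get pods -n kube-system -l k8s-app=kube-dns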

Use the bs.yaml file to test whether DNS can resolve service names (a sketch of its likely contents follows).
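The contents of bs.yaml are not shown in the original; a minimal sketch that would produce the busybox pod created below (the busybox:1.28 image tag is an assumption, commonly chosen because nslookup is broken in some later busybox tags):

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]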

[root@k8s-master1 src]# kubectl apply -f bs.yaml
pod/busybox created

[root@k8s-master1 YAML]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
busybox               1/1     Running   0          6m47s
web-d86c95cc9-8tmkl   1/1     Running   0          65m

Enter the busybox container and ping the service names to see whether they resolve.

[root@k8s-master1 YAML]# kubectl exec -it busybox sh

/ # ping web
PING web (10.0.0.203): 56 data bytes
64 bytes from 10.0.0.203: seq=0 ttl=64 time=0.394 ms
64 bytes from 10.0.0.203: seq=1 ttl=64 time=0.323 ms
^C
--- web ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.323/0.358/0.394 ms

/ # ping kubernetes
PING kubernetes (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: seq=0 ttl=64 time=0.344 ms
64 bytes from 10.0.0.1: seq=1 ttl=64 time=0.239 ms
^C
--- kubernetes ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.239/0.291/0.344 ms
/ #

As shown above, both names resolve, which means coredns is installed and working.
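Ping only shows the names resolve to reachable IPs; nslookup inside the same pod shows the DNS answers themselves (assuming the default cluster.local domain):

kubectl exec -it busybox -- nslookup web
kubectl exec -it busybox -- nslookup kubernetes.default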

Deploy keepalived + nginx (both load-balancer machines need this):

[root@lvs1 ~]# rpm -ivh http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm
Retrieving http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm
warning: /var/tmp/rpm-tmp.oiFMgm: Header V4 RSA/SHA1 Signature, key ID 7bd9bf62: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:nginx-1:1.16.0-1.el7.ngx         ################################# [100%]
----------------------------------------------------------------
Thanks for using nginx!

Please find the official documentation for nginx here:
* http://nginx.org/en/docs/

Please subscribe to nginx-announce mailing list to get
the most important news about nginx:
* http://nginx.org/en/support.html

Commercial subscriptions for nginx are available on:
* http://nginx.com/products/

[root@lvs1 ~]# systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

[root@lvs1 ~]# systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: http://nginx.org/en/docs/
[root@lvs1 ~]# systemctl start nginx
[root@lvs1 ~]# systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-02-01 14:41:09 CST; 11s ago
     Docs: http://nginx.org/en/docs/
  Process: 1681 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)
 Main PID: 1682 (nginx)
   CGroup: /system.slice/nginx.service
           ├─1682 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
           └─1683 nginx: worker process

Feb 01 14:41:09 lvs1 systemd[1]: Starting nginx - high performance web server...
Feb 01 14:41:09 lvs1 systemd[1]: Started nginx - high performance web server.

[root@lvs1 ~]# yum install keepalived -y
Loaded plugins: fastestmirror
Determining fastest mirrors
* base: mirrors.aliyun.com
* extras: mirrors.cn99.com
* updates: mirrors.aliyun.com
base                                            | 3.6 kB  00:00:00
extras                                          | 2.9 kB  00:00:00
updates                                         | 2.9 kB  00:00:00
(1/2): extras/7/x86_64/primary_db               | 159 kB  00:00:00
(2/2): updates/7/x86_64/primary_db              | 5.9 MB  00:00:01
Resolving Dependencies
--> Running transaction check
---> Package keepalived.x86_64 0:1.3.5-16.el7 will be installed
--> Processing Dependency: libnetsnmpmibs.so.31()(64bit) for package: keepalived-1.3.5-16.el7.x86_64
--> Processing Dependency: libnetsnmpagent.so.31()(64bit) for package: keepalived-1.3.5-16.el7.x86_64
--> Processing Dependency: libnetsnmp.so.31()(64bit) for package: keepalived-1.3.5-16.el7.x86_64
--> Running transaction check
---> Package net-snmp-agent-libs.x86_64 1:5.7.2-43.el7 will be installed
--> Processing Dependency: libsensors.so.4()(64bit) for package: 1:net-snmp-agent-libs-5.7.2-43.el7.x86_64
---> Package net-snmp-libs.x86_64 1:5.7.2-43.el7 will be installed
--> Running transaction check
---> Package lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
Package               Arch     Version                           Repository Size
================================================================================
Installing:
keepalived            x86_64   1.3.5-16.el7                      base      331 k
Installing for dependencies:
lm_sensors-libs       x86_64   3.4.0-8.20160601gitf9185e5.el7    base       42 k
net-snmp-agent-libs   x86_64   1:5.7.2-43.el7                    base      706 k
net-snmp-libs         x86_64   1:5.7.2-43.el7                    base      750 k

Transaction Summary
================================================================================
Install 1 Package (+3 Dependent packages)

Total download size: 1.8 M
Installed size: 6.0 M
Downloading packages:
(1/4): lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64.rpm |  42 kB 00:00:00
(2/4): net-snmp-agent-libs-5.7.2-43.el7.x86_64.rpm               | 706 kB 00:00:00
(3/4): net-snmp-libs-5.7.2-43.el7.x86_64.rpm                     | 750 kB 00:00:00
(4/4): keepalived-1.3.5-16.el7.x86_64.rpm                        | 331 kB 00:00:01
--------------------------------------------------------------------------------
Total                                               1.0 MB/s | 1.8 MB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
Installing: 1:net-snmp-libs-5.7.2-43.el7.x86_64                            1/4
Installing: lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64          2/4
Installing: 1:net-snmp-agent-libs-5.7.2-43.el7.x86_64                      3/4
Installing: keepalived-1.3.5-16.el7.x86_64                                 4/4
Verifying: keepalived-1.3.5-16.el7.x86_64                                  1/4
Verifying: 1:net-snmp-agent-libs-5.7.2-43.el7.x86_64                       2/4
Verifying: lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64           3/4
Verifying: 1:net-snmp-libs-5.7.2-43.el7.x86_64                             4/4

Installed:
keepalived.x86_64 0:1.3.5-16.el7

Dependency Installed:
lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7  net-snmp-agent-libs.x86_64 1:5.7.2-43.el7  net-snmp-libs.x86_64 1:5.7.2-43.el7

Complete!

Master node keepalived configuration file:

[root@lvs1 nginx]# cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51    # VRRP route ID; must be unique per instance
    priority 100            # priority; the standby server is set to 90
    advert_int 1            # interval between VRRP heartbeat advertisements; default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.120
    }
    track_script {
        check_nginx
    }
}

Master node nginx configuration file:

[root@lvs1 nginx]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes 4;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.1.124:6443;
        server 192.168.1.125:6443;
        server 192.168.1.126:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
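The stream block above requires nginx's stream module, which is compiled into the official nginx.org packages installed earlier. Validate the file before restarting:

nginx -t    # should report that the configuration file syntax is ok and the test is successful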

Backup node keepalived configuration file (only router_id, state, and priority differ from the master's):

[root@lvs2 keepalived]# cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51    # VRRP route ID; must be unique per instance
    priority 90             # priority; set to 90 on this standby server
    advert_int 1            # interval between VRRP heartbeat advertisements; default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.120
    }
    track_script {
        check_nginx
    }
}

Backup node nginx configuration file (identical to the master's):

[root@lvs2 keepalived]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes 4;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.1.124:6443;
        server 192.168.1.125:6443;
        server 192.168.1.126:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

Nginx health-check script. keepalived runs it through the track_script block above; a non-zero exit puts the VRRP instance into FAULT state and releases the VIP. Give the script executable permission on both machines:

[root@lvs1 nginx]# chmod +x /etc/keepalived/check_nginx.sh
[root@lvs2 nginx]# chmod +x /etc/keepalived/check_nginx.sh

[root@lvs2 keepalived]# cat check_nginx.sh
#!/bin/bash
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ]; then
    exit 1
else
    exit 0
fi
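A quick sanity check of the script before relying on it: it should exit 0 while nginx is running and 1 once nginx is stopped.

bash /etc/keepalived/check_nginx.sh; echo $?    # prints 0 while nginx is up, 1 when it is down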

[root@lvs1 nginx]# systemctl restart keepalived && systemctl restart nginx
[root@lvs2 nginx]# systemctl restart keepalived && systemctl restart nginx
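To confirm failover actually works, check that the VIP sits on the master, then stop nginx there and watch the VIP move to the backup; a sketch using the interface and VIP from the configs above:

ip addr show eth0 | grep 192.168.1.120    # on lvs1: the VIP should be listed
systemctl stop nginx                      # on lvs1: simulate a failure
ip addr show eth0 | grep 192.168.1.120    # on lvs2: the VIP should appear within seconds

Start nginx on lvs1 again afterwards so the master can reclaim the VIP.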

On nodes node1, node2, and node3, change the apiserver address in the kubeconfig files to the load balancer's VIP, then restart kubelet and kube-proxy:

[root@k8s-node1 cfg]# grep "192.168" *
bootstrap.kubeconfig: server: https://192.168.1.124:6443
kubelet.kubeconfig: server: https://192.168.1.124:6443
kube-proxy.kubeconfig: server: https://192.168.1.124:6443
[root@k8s-node1 cfg]# sed -i "s#192.168.1.124#192.168.1.120#g" *
[root@k8s-node1 cfg]# grep "192.168" *
bootstrap.kubeconfig: server: https://192.168.1.120:6443
kubelet.kubeconfig: server: https://192.168.1.120:6443
kube-proxy.kubeconfig: server: https://192.168.1.120:6443
[root@k8s-node1 cfg]# systemctl restart kubelet && systemctl restart kube-proxy

[root@k8s-node2 cfg]# sed -i "s#192.168.1.124#192.168.1.120#g" *
[root@k8s-node2 cfg]# grep "192.168" *
bootstrap.kubeconfig: server: https://192.168.1.120:6443
kubelet.kubeconfig: server: https://192.168.1.120:6443
kube-proxy.kubeconfig: server: https://192.168.1.120:6443
[root@k8s-node2 cfg]# systemctl restart kubelet && systemctl restart kube-proxy

[root@k8s-node3 cfg]# sed -i "s#192.168.1.124#192.168.1.120#g" *
[root@k8s-node3 cfg]# grep "192.168" *
bootstrap.kubeconfig: server: https://192.168.1.120:6443
kubelet.kubeconfig: server: https://192.168.1.120:6443
kube-proxy.kubeconfig: server: https://192.168.1.120:6443
[root@k8s-node3 cfg]# systemctl restart kubelet && systemctl restart kube-proxy

Check the cluster status of K8s: the nodes are still Ready, so the cluster is healthy. You can also inspect the nginx logs to make sure nothing is abnormal.

[root@k8s-master1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    <none>   4h28m   v1.16.0
k8s-node2   Ready    <none>   4h28m   v1.16.0
k8s-node3   Ready    <none>   4h28m   v1.16.0
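You can also query the apiserver through the VIP to prove the whole load-balanced path works (a sketch; if the apiserver has anonymous access disabled this returns 401 instead of the version JSON, which still demonstrates that TLS and proxying are fine):

curl -k https://192.168.1.120:6443/version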

[root@lvs1 nginx]# tailf /var/log/nginx/k8s-access.log
192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:34:19 +0800] 200 1160
192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:34:19 +0800] 200 1159
192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:34:19 +0800] 200 1159
192.168.1.129 192.168.1.126:6443 - [01/Feb/2020:15:34:19 +0800] 200 1160
192.168.1.129 192.168.1.126:6443 - [01/Feb/2020:15:34:19 +0800] 200 1159
192.168.1.129 192.168.1.126:6443 - [01/Feb/2020:15:34:19 +0800] 200 1160
192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:34:19 +0800] 200 1160
192.168.1.129 192.168.1.125:6443 - [01/Feb/2020:15:34:39 +0800] 200 1611
192.168.1.128 192.168.1.126:6443 - [01/Feb/2020:15:34:39 +0800] 200 1611
192.168.1.127 192.168.1.126:6443 - [01/Feb/2020:15:34:39 +0800] 200 1611

[root@lvs2 keepalived]# tailf /var/log/nginx/k8s-access.log
192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:33:44 +0800] 200 1161
192.168.1.127 192.168.1.125:6443 - [01/Feb/2020:15:33:44 +0800] 200 1159
192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:33:44 +0800] 200 1160
192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:33:44 +0800] 200 1159
192.168.1.129 192.168.1.125:6443 - [01/Feb/2020:15:33:44 +0800] 200 1161
192.168.1.129 192.168.1.126:6443 - [01/Feb/2020:15:33:44 +0800] 200 1161
192.168.1.129 192.168.1.125:6443 - [01/Feb/2020:15:33:44 +0800] 200 1159
192.168.1.128 192.168.1.126:6443 - [01/Feb/2020:15:33:44 +0800] 200 1161
192.168.1.128 192.168.1.125:6443 - [01/Feb/2020:15:49:06 +0800] 200 2269
192.168.1.129 192.168.1.125:6443 - [01/Feb/2020:15:51:11 +0800] 200 2270
192.168.1.127 192.168.1.125:6443 - [01/Feb/2020:15:51:47 +0800] 200 2270
192.168.1.128 192.168.1.124:6443 - [01/Feb/2020:15:51:56 +0800] 200 4352
192.168.1.127 192.168.1.124:6443 - [01/Feb/2020:15:52:04 +0800] 200 5390
192.168.1.129 192.168.1.125:6443 - [01/Feb/2020:15:52:07 +0800] 200 4409

This shows that failover between the two load balancers works correctly, and the K8S cluster build is complete.
