
A Scheme for Building Highly Available Kubernetes Master Nodes in a Production-Like High-Availability Cluster


This article shows how to build highly available Master nodes for a Kubernetes high-availability cluster that simulates a production environment. The content is concise and easy to follow, and I hope you get something out of the detailed walkthrough.

Note: this high-availability scheme applies not only to the K8S master nodes in this article, but to any business scenario that requires high availability. Haproxy can also be replaced with Nginx or another load balancer.

When deploying services in a production environment, we must stick to one rule: no single point of failure. In a test environment, a K8s deployment is usually a single Master node plus several worker (Node) nodes. To avoid an outage when the master node goes down, the master itself must be deployed with high availability before the K8S cluster can be used in production.

A highly available scheme for the master nodes in production: deploy multiple master nodes (three or more), then place multiple (usually two or more) load balancers (typically Nginx or Haproxy) in front of their api-server services to avoid a single point of failure. The rest of this article explains in detail how to make the master api-server highly available, focusing on the deployment and configuration of the load balancers; building the cluster itself will be described in a later article.

Master (active) node: 192.168.100.107

Backup node: 192.168.100.108

Virtual IP (VIP): 192.168.100.110
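For reference, when the cluster itself is later built (covered in a later article), the VIP is typically what the control plane endpoint points at, so that all api-server traffic goes through Keepalived/Haproxy. A minimal sketch, assuming kubeadm is used (an assumption, not one of this article's steps):

# Hypothetical example: initialize the first master, pointing the control plane at the VIP
kubeadm init --control-plane-endpoint "192.168.100.110:6443" --upload-certs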

I. Environment description

System environment: CentOS 7.7

Keepalived version: 2.0.19

Haproxy version: 2.0.8

II. Install and configure the Keepalived service

1. Download the Keepalived source code package

Official website address: https://www.keepalived.org/

Download address: https://www.keepalived.org/software/keepalived-2.0.19.tar.gz
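For example, the package can be fetched directly on the server (assuming wget is installed):

wget https://www.keepalived.org/software/keepalived-2.0.19.tar.gz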

2. Upload and decompress the Keepalived source code package

tar -zxvf keepalived-2.0.19.tar.gz

3. Compile Keepalived preparation

Enter the decompression directory: cd keepalived-2.0.19

Run configure: ./configure --prefix=/work/keepalived

Note: the gcc and openssl development dependencies must be installed before compiling.
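On CentOS 7 these build dependencies can be installed with yum, for example (package names assumed to be the standard CentOS ones):

yum install -y gcc openssl-devel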

4. Compile and install Keepalived

make && make install

5. Set up the Keepalived configuration files

When keepalived starts, it looks for the keepalived.conf configuration file under /etc/keepalived/, so copy keepalived.conf from the installation directory (/work/keepalived/etc/keepalived/keepalived.conf) to /etc/keepalived/.

mkdir /etc/keepalived/

cp /work/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf

cp /work/keepalived/etc/sysconfig/keepalived /etc/sysconfig/keepalived

6. Enable Keepalived at boot

systemctl enable keepalived

Then you can use systemctl start/stop/status keepalived to manage keepalived

7. Configure the Keepalived service

Configuration on the 192.168.100.107 machine (MASTER):

vrrp_script check_haproxy {
    interval 3
    script "/work/script/check_haproxy.sh"
}

vrrp_instance kube_master {
    state MASTER
    interface ens33
    virtual_router_id 110
    priority 100
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass kube_master_password
    }
    virtual_ipaddress {
        192.168.100.110
    }
    track_script {
        check_haproxy
    }
}

Configuration on the 192.168.100.108 machine (BACKUP):

vrrp_script check_haproxy {
    interval 3
    script "/work/script/check_haproxy.sh"
}

vrrp_instance kube_master {
    state BACKUP
    interface ens33
    virtual_router_id 110
    priority 90
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass kube_master_password
    }
    virtual_ipaddress {
        192.168.100.110
    }
    track_script {
        check_haproxy
    }
}

8. Write haproxy service detection scripts

vi /work/script/check_haproxy.sh

#!/bin/bash
active_status=`netstat -lntp | grep haproxy | wc -l`
if [ $active_status -gt 0 ]; then
    exit 0
else
    exit 1
fi

Then make the script executable: chmod +x /work/script/check_haproxy.sh
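To sanity-check the script, it can be run manually and its exit code inspected; it should return 0 when haproxy is listening and 1 otherwise:

/work/script/check_haproxy.sh
echo $?    # 0 = haproxy is listening, 1 = not running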

III. Haproxy installation and deployment

1. Download the Haproxy source code package

Official website address: https://www.haproxy.org/

Download address: https://www.haproxy.org/download/2.0/src/haproxy-2.0.8.tar.gz

2. Upload and decompress the Haproxy source code package

tar -zxvf haproxy-2.0.8.tar.gz

3. Compile Haproxy

Required dependency libraries: openssl, openssl-devel, systemd-devel, pcre, zlib
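On CentOS 7 these can be installed with yum, for example (the corresponding -devel header packages are assumed to be needed for the build as well):

yum install -y openssl openssl-devel systemd-devel pcre pcre-devel zlib zlib-devel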

make TARGET=linux-glibc USE_OPENSSL=1 USE_SYSTEMD=1 USE_PCRE=1 USE_ZLIB=1 USE_CRYPT_H=1 USE_LIBCRYPT=1

Specify the build target for the kernel/glibc version: TARGET=linux-glibc

Enable HTTPS (OpenSSL) support: USE_OPENSSL=1

Specify systemd mode: USE_SYSTEMD=1

Support for pcre libraries: USE_PCRE=1

Support for zlib libraries: USE_ZLIB=1

Support for the crypt.h header: USE_CRYPT_H=1

Support for libcrypt libraries: USE_LIBCRYPT=1

4. Install haproxy

make install PREFIX=/work/haproxy

Specify the installation directory: PREFIX=/work/haproxy
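After installation, the binary can be checked from the install directory:

/work/haproxy/sbin/haproxy -v    # should report HAProxy version 2.0.8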

5. Register with the system service

vi /usr/lib/systemd/system/haproxy.service

[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target

[Service]
ExecStartPre=/work/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/work/haproxy/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID

[Install]
WantedBy=multi-user.target
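After creating the unit file, reload systemd so it picks up the new service, and enable it at boot if desired:

systemctl daemon-reload
systemctl enable haproxy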

6. Write Haproxy configuration file

vi /etc/haproxy/haproxy.cfg

global
    log 127.0.0.1 local0
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user root
    group root
    stats socket /var/lib/haproxy/stats
    daemon

listen admin_stats
    stats enable
    bind *:8080
    mode http
    option httplog
    log global
    maxconn 10
    stats refresh 30s
    stats uri /admin
    stats realm haproxy
    stats auth admin:admin
    stats hide-version
    stats admin if TRUE

listen kube_cluster_api_server
    log global
    bind 192.168.100.110:6443
    mode tcp
    option tcplog
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000
    balance roundrobin
    server kube_cluster_master01 192.168.100.111:6443 check inter 5000 rise 2 fall 3
    server kube_cluster_master02 192.168.100.112:6443 check inter 5000 rise 2 fall 3
    server kube_cluster_master03 192.168.100.113:6443 check inter 5000 rise 2 fall 3
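Before starting the service, the configuration can be validated with the same -c check that the unit file's ExecStartPre runs:

/work/haproxy/sbin/haproxy -c -f /etc/haproxy/haproxy.cfg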

7. Create the required directory

Create the /var/lib/haproxy/stats file:

mkdir -p /var/lib/haproxy

touch /var/lib/haproxy/stats

8. Modify kernel parameters

vi /etc/sysctl.conf

Add the following:

net.ipv4.ip_nonlocal_bind = 1    # allow haproxy to bind to the VIP even when the VIP is not currently on this machine
net.ipv4.ip_forward = 1          # allow IP forwarding

Run sysctl -p to apply the changes.

If these kernel parameters are not configured, haproxy will fail to start with a "cannot bind socket" error.
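To confirm the parameters have taken effect, they can be read back with sysctl:

sysctl net.ipv4.ip_nonlocal_bind    # expected: net.ipv4.ip_nonlocal_bind = 1
sysctl net.ipv4.ip_forward          # expected: net.ipv4.ip_forward = 1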

9. Open the monitoring page port

iptables -I INPUT -p tcp --dport 8080 -j ACCEPT

IV. Installation verification

After the installation and configuration above have been completed on both machines:

1. Start the Keepalived service on each machine

systemctl start keepalived

2. Start the Haproxy service on each machine

systemctl start haproxy

Log in to both machines and check the haproxy monitoring page:
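Based on the haproxy.cfg above, the monitoring page listens on port 8080 at /admin with the admin:admin credentials, so a quick check from the command line could look like this (the host IP here is just an example):

curl -u admin:admin http://192.168.100.107:8080/admin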

Check that the keepalived service on both machines is running normally.

Stop the keepalived service on each machine in turn and check how the VIP is reassigned:
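One way to observe the failover (a sketch, assuming the ens33 interface used in the configuration above) is to check which machine currently holds the VIP before and after stopping keepalived:

# On each machine, see whether the VIP is bound to ens33
ip addr show ens33 | grep 192.168.100.110
# Stop keepalived on the machine that holds the VIP; the VIP should move to the other machine
systemctl stop keepalived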

V. Frequently asked questions

1. configure: error: no acceptable C compiler found in $PATH. See `config.log' for more details.

Solution: install gcc.

2. OpenSSL is not properly installed on your system. Can not include OpenSSL headers files.

Solution: install openssl and openssl-devel.

3. WARNING - this build will not support IPVS with IPv6. Please install libnl/libnl-3 dev libraries to support IPv6 with IPVS.

Solution: install libnl and libnl-devel.

The above is how to build highly available Master nodes for a Kubernetes high-availability cluster that simulates a production environment. Did you pick up some new knowledge or skills? If you want to learn more or broaden your knowledge, you are welcome to follow the industry information channel.
