1. Introduction
Kubernetes has been running in our production environment for nearly a year and is now stable. From building the system to migrating our projects onto it, we ran into plenty of problems. The production environment uses multiple master nodes to make kubernetes highly available, with haproxy+keepalived load balancing the masters. Here I take the time to summarize the build process, to help you stand up your own k8s cluster quickly.
(Screenshot of the production environment in operation omitted.)
Kubernetes versions iterate very quickly. When I built our production environment, the latest official release was v1.11; it has since been updated to v1.15. This article walks through the latest version.
2. Introduction to kubernetes
Kubernetes is Google's open-source container orchestration and scheduling engine, based on its internal Borg system. It is an open platform for automatically deploying, scaling, and operating container clusters. Kubernetes offers comprehensive cluster management capabilities, including multi-level security and access control, multi-tenant application support, transparent service registration and service discovery, a built-in load balancer, fault detection and self-healing, rolling service upgrades and online scaling, an extensible automatic resource scheduler, and fine-grained resource quota management. Kubernetes also provides management tools covering development, deployment, testing, and operations monitoring. As one of the most important projects of the CNCF (Cloud Native Computing Foundation), Kubernetes aims to be more than an orchestration system: it provides a specification with which you describe your cluster's architecture and define the desired final state of your services, and it automatically drives the system to that state and keeps it there.
3. Kubernetes architecture
In this system architecture diagram, services divide into those running on worker nodes and those making up the cluster-level control plane. Each kubernetes node runs the services necessary to host application containers, and these are managed by the master. Docker runs on every node and takes care of all image downloads and container execution.
Kubernetes mainly consists of the following core components:
etcd saves the state of the entire cluster.
apiserver is the sole entry point for resource operations and provides mechanisms such as authentication, authorization, access control, and API registration and discovery.
controller-manager is responsible for maintaining cluster state, e.g. fault detection, automatic scaling, and rolling updates.
scheduler is responsible for resource scheduling, placing Pods on the appropriate machines according to the configured scheduling policies.
kubelet is responsible for maintaining container lifecycles, as well as managing volumes (CVI) and networking (CNI).
container runtime is responsible for image management and the actual running of Pods and containers (CRI).
kube-proxy is responsible for providing in-cluster service discovery and load balancing for Services.
In addition to the core components, there are some recommended components:
kube-dns provides DNS services for the entire cluster.
Ingress Controller provides external (public network) access to services.
Heapster provides resource monitoring.
Dashboard provides a GUI.
Federation provides clusters spanning availability zones.
Fluentd-elasticsearch provides cluster log collection, storage, and query.
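On a kubeadm-built cluster such as the one below, most of the core components run as static pods in the kube-system namespace, so once the cluster is up you can inspect them with kubectl. A quick sketch (pod names vary per cluster):

root@master1:~# kubectl get pods -n kube-system -o wide
# expect etcd-*, kube-apiserver-*, kube-controller-manager-* and
# kube-scheduler-* pods (one per master), plus a kube-proxy pod per node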
4. Building process
Now for today's practical content: the process of building the cluster.
4.1 Environment preparation
Machine name   Configuration   System        IP address     Role
haproxy1       8C16G           ubuntu16.04   192.168.10.1   haproxy+keepalived (VIP: 192.168.10.10)
haproxy2       8C16G           ubuntu16.04   192.168.10.2   haproxy+keepalived (VIP: 192.168.10.10)
master1        8C16G           ubuntu16.04   192.168.10.3   Master node 1
master2        8C16G           ubuntu16.04   192.168.10.4   Master node 2
master3        8C16G           ubuntu16.04   192.168.10.5   Master node 3
node1          8C16G           ubuntu16.04   192.168.10.6   Work node 1
node2          8C16G           ubuntu16.04   192.168.10.7   Work node 2
node3          8C16G           ubuntu16.04   192.168.10.8   Work node 3
4.2 Description of the environment
This article uses three master nodes and three work nodes to build the kubernetes cluster, plus two machines running haproxy+keepalived to load balance the masters. This keeps the masters, and therefore the whole kubernetes cluster, highly available. The official minimum requirements are >= 2C2G per machine and Ubuntu >= 16.04.
4.3 Construction process
4.3.1 Basic settings
Modify the hosts file on all 8 machines:
root@haproxy1:~# cat /etc/hosts
192.168.10.1 haproxy1
192.168.10.2 haproxy2
192.168.10.3 master1
192.168.10.4 master2
192.168.10.5 master3
192.168.10.6 node1
192.168.10.7 node2
192.168.10.8 node3
192.168.10.10 kubernetes.haproxy.com
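Note that kubernetes.haproxy.com resolves to the keepalived VIP (192.168.10.10), not to any single haproxy machine. A quick sanity check on each machine (sketch):

root@master1:~# getent hosts kubernetes.haproxy.com
192.168.10.10   kubernetes.haproxy.com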
4.3.2 haproxy+keepalived build
Install haproxy
root@haproxy1:/data# wget https://github.com/haproxy/haproxy/archive/v2.0.0.tar.gz
root@haproxy1:/data# tar -xf v2.0.0.tar.gz
root@haproxy1:/data# cd haproxy-2.0.0/
root@haproxy1:/data/haproxy-2.0.0# make TARGET=linux-glibc
root@haproxy1:/data/haproxy-2.0.0# make install PREFIX=/data/haproxy
root@haproxy1:/data/haproxy# mkdir conf
root@haproxy1:/data/haproxy# vim conf/haproxy.cfg
global
    log 127.0.0.1 local0 err
    maxconn 50000
    user haproxy
    group haproxy
    daemon
    nbproc 1
    pidfile haproxy.pid

defaults
    mode tcp
    log 127.0.0.1 local0 err
    maxconn 50000
    retries 3
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    timeout check 2s

listen admin_stats
    mode http
    bind 0.0.0.0:1080
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /haproxy-status
    stats realm Haproxy\ Statistics
    stats auth will:will
    stats hide-version
    stats admin if TRUE

frontend k8s
    bind 0.0.0.0:8443
    mode tcp
    default_backend k8s

backend k8s
    mode tcp
    balance roundrobin
    server master1 192.168.10.3:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server master2 192.168.10.4:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server master3 192.168.10.5:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
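Before wiring haproxy into systemd, it is worth validating the configuration with haproxy's built-in check mode (-c). A sketch:

root@haproxy1:/data/haproxy# ./sbin/haproxy -c -f conf/haproxy.cfg
Configuration file is valid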
root@haproxy1:/data/haproxy# id -u haproxy &>/dev/null || useradd -s /usr/sbin/nologin -r haproxy
root@haproxy1:/data/haproxy# mkdir /usr/share/doc/haproxy
root@haproxy1:/data/haproxy# wget -qO- https://github.com/haproxy/haproxy/blob/master/doc/configuration.txt | gzip -c > /usr/share/doc/haproxy/configuration.txt.gz
root@haproxy1:/data/haproxy# vim /etc/default/haproxy
# Defaults file for HAProxy
#
# This is sourced by both, the initscript and the systemd unit file, so do not
# treat it as a shell script fragment.

# Change the config file location if needed
#CONFIG="/etc/haproxy/haproxy.cfg"

# Add extra flags here, see haproxy(1) for a few options
#EXTRAOPTS="-de -m 16"
root@haproxy1:/data# vim /lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
Documentation=man:haproxy(1)
Documentation=file:/usr/share/doc/haproxy/configuration.txt.gz
After=network.target syslog.service
Wants=syslog.service

[Service]
Environment=CONFIG=/data/haproxy/conf/haproxy.cfg
EnvironmentFile=-/etc/default/haproxy
ExecStartPre=/data/haproxy/sbin/haproxy -f ${CONFIG} -c -q
ExecStart=/data/haproxy/sbin/haproxy -W -f ${CONFIG} -p /data/haproxy/conf/haproxy.pid $EXTRAOPTS
ExecReload=/data/haproxy/sbin/haproxy -c -f ${CONFIG}
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always
Type=forking

[Install]
WantedBy=multi-user.target
root@haproxy1:/data/haproxy# systemctl daemon-reload
root@haproxy1:/data/haproxy# systemctl start haproxy
root@haproxy1:/data/haproxy# systemctl status haproxy
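If the service started cleanly, haproxy should be listening on 8443 (the apiserver frontend) and 1080 (the stats page). One way to confirm (sketch):

root@haproxy1:/data/haproxy# ss -tlnp | grep -E ':(8443|1080) '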
Install keepalived
root@haproxy1:/data# wget https://www.keepalived.org/software/keepalived-2.0.16.tar.gz
root@haproxy1:/data# tar -xf keepalived-2.0.16.tar.gz
root@haproxy1:/data# cd keepalived-2.0.16/
root@haproxy1:/data/keepalived-2.0.16# ./configure --prefix=/data/keepalived
root@haproxy1:/data/keepalived-2.0.16# make && make install
root@haproxy1:/data/keepalived# mkdir conf
root@haproxy1:/data/keepalived# vim conf/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id haproxy1
}

vrrp_script chk_haproxy {    # haproxy service monitoring script
    script "/data/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.10.10/24
    }
}
root@haproxy1:/data/keepalived# vim /etc/default/keepalived
# Options to pass to keepalived
# DAEMON_ARGS are appended to the keepalived command line
DAEMON_ARGS=""
root@haproxy1:/data/keepalived# vim /lib/systemd/system/keepalived.service
[Unit]
Description=Keepalive Daemon (LVS and VRRP)
After=network-online.target
Wants=network-online.target
# Only start if there is a configuration file
ConditionFileNotEmpty=/data/keepalived/conf/keepalived.conf

[Service]
Type=forking
KillMode=process
Environment=CONFIG=/data/keepalived/conf/keepalived.conf
# Read configuration variable file if it is present
EnvironmentFile=-/etc/default/keepalived
ExecStart=/data/keepalived/sbin/keepalived -f ${CONFIG} -p /data/keepalived/conf/keepalived.pid $DAEMON_ARGS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
root@haproxy1:/data/keepalived# systemctl daemon-reload
root@haproxy1:/data/keepalived# systemctl start keepalived.service
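Once keepalived is running on the MASTER node, the VIP should be bound to the ens160 interface. You can confirm with ip (sketch):

root@haproxy1:/data/keepalived# ip addr show ens160 | grep 192.168.10.10
    inet 192.168.10.10/24 scope global secondary ens160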
root@haproxy1:/data/keepalived# vim /data/keepalived/check_haproxy.sh
#!/bin/bash
A=`ps -C haproxy --no-header | wc -l`
if [ $A -eq 0 ]; then
    systemctl start haproxy.service
    sleep 3
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]; then
        systemctl stop keepalived.service
    fi
fi
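keepalived invokes the script named in vrrp_script directly, so it must be executable; don't forget:

root@haproxy1:/data/keepalived# chmod +x /data/keepalived/check_haproxy.sh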
Similarly, install haproxy and keepalived on the haproxy2 machine (in its keepalived.conf, state is typically set to BACKUP with a priority lower than 100).
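Before moving on, it is worth testing failover: stop haproxy and keepalived on haproxy1 and watch the VIP move to haproxy2. A sketch (with advert_int 1, VRRP convergence typically takes a few seconds):

# on haproxy1: simulate a failure
root@haproxy1:~# systemctl stop keepalived haproxy
# on haproxy2: the VIP should appear shortly afterwards
root@haproxy2:~# ip addr show ens160 | grep 192.168.10.10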
4.3.3 kubernetes cluster building
Basic settings
Turn off the swap partition. This must be done on all 6 machines in the kubernetes cluster.
root@master1:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          16046         128       15727           8         190       15638
Swap:           979           0         979
root@master1:~# swapoff -a
root@master1:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          16046         128       15726           8         191       15638
Swap:             0           0           0
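Note that swapoff -a only disables swap until the next reboot. To make it permanent, also comment out the swap entry in /etc/fstab (a sketch, assuming the default fstab layout):

root@master1:~# sed -i '/ swap / s/^/#/' /etc/fstab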
Install docker
Docker must be installed on all 6 machines.
# enable apt to access repositories over https
root@master1:~# apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
root@master1:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
root@master1:~# apt-key fingerprint 0EBFCD88
pub   4096R/0EBFCD88 2017-02-22
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid                  Docker Release (CE deb) <docker@docker.com>
sub   4096R/F273FCD8 2017-02-22
# add the docker apt repository
root@master1:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# install docker
root@master1:~# apt-get update
root@master1:~# apt-get install -y docker-ce docker-ce-cli containerd.io
root@master1:~# docker --version
Docker version 18.09.6, build 481bc77
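The kubeadm documentation recommends running docker with the systemd cgroup driver so that kubelet and docker manage cgroups consistently. A minimal daemon.json for this (a sketch based on the official guidance; not part of the original build):

root@master1:~# cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
root@master1:~# systemctl restart docker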
Install kubernetes components
# kubeadm, kubelet, and kubectl must be installed on all 6 machines
root@master1:~# apt-get update
root@master1:~# apt-get install -y apt-transport-https curl
root@master1:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK
root@master1:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
root@master1:~# apt-get update
root@master1:~# apt-get install -y kubelet kubeadm kubectl
root@master1:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
Create a cluster
Control Node 1
root@master1:~# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "kubernetes.haproxy.com:8443"
networking:
  podSubnet: "10.244.0.0/16"
root@master1:~# kubeadm init --config=kubeadm-config.yaml --upload-certs
After it completes, kubeadm prints the join commands for additional control-plane and worker nodes (screenshot omitted).
root@master1:~# mkdir -p $HOME/.kube
root@master1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master1:~# chown $(id -u):$(id -g) $HOME/.kube/config
# install the network add-on, here flannel
root@master1:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
View installation results
root@master1:~# kubectl get pod -n kube-system -w
Add another control node to the cluster
When we built our production environment on v1.11, adding a control node meant writing the master configuration file on each node and performing a series of operations to join the cluster. v1.15 supports joining directly with kubeadm join, which makes the steps much simpler.
Control Node 2
root@master2:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1 --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5 --experimental-control-plane --certificate-key a2a84ebc181ba34a943e5003a702b71e2a1e7e236f8d1d687d9a19d2bf803a77
root@master2:~# mkdir -p $HOME/.kube
root@master2:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master2:~# chown $(id -u):$(id -g) $HOME/.kube/config
View installation results
root@master2:~# kubectl get nodes
Control Node 3
root@master3:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1 --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5 --experimental-control-plane --certificate-key a2a84ebc181ba34a943e5003a702b71e2a1e7e236f8d1d687d9a19d2bf803a77
root@master3:~# mkdir -p $HOME/.kube
root@master3:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master3:~# chown $(id -u):$(id -g) $HOME/.kube/config
View installation results
root@master3:~# kubectl get nodes
Add a work node
root@node1:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1 --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5
root@node2:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1 --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5
root@node3:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1 --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5
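Note that bootstrap tokens expire (by default after 24 hours). If you add a node later, generate a fresh join command on any master (sketch):

root@master1:~# kubeadm token create --print-join-command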
The whole cluster is now built. To view the results, execute on any master:
root@master1:~# kubectl get pods --all-namespaces
root@master1:~# kubectl get nodes
At this point, the whole high-availability cluster is up and running.
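As a final smoke test, you can deploy a small workload and check that its pods get scheduled across the work nodes (a sketch using a generic nginx image; not part of the original build):

root@master1:~# kubectl create deployment nginx --image=nginx
root@master1:~# kubectl scale deployment nginx --replicas=3
root@master1:~# kubectl get pods -o wide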
5. Reference documentation
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin
https://www.kubernetes.org.cn/docs