2025-01-20 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report--
This series of documents covers every step of deploying the latest Kubernetes v1.14.2 cluster from binaries, rather than with automated tools such as kubeadm. It is aimed mainly at readers who already have some Kubernetes background and want to understand the system's configuration and operating principles through a step-by-step deployment; it also applies to self-built Kubernetes clusters for testing or production.
1. Environment preparation
1.1 Version information:
OS system: CentOS 7.6
Kubernetes version: v1.14.2
Etcd database: v3.3.13
Network plug-in: Flanneld 0.11.0
Docker version: 18.09.6-CE
K8s add-ons: CoreDNS, Heapster, InfluxDB, Grafana, Dashboard, ELK, Metrics-server
Docker registry: Harbor
1.2 Architecture Overview:
Where:
Load balancing:
Keepalived (master/backup mode) plus an HAProxy TCP layer-4 proxy provide the high-availability entry point for kube-apiserver. On the internal network, a private-network VIP serves as the access address for all components in the cluster; on the external network, a public-network VIP serves Internet clients.
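As a hedged sketch (not necessarily the exact configuration used later in this series), the HAProxy side of that layer-4 proxy could look like the fragment below. The VIP 10.10.10.100 and the master IPs come from the tables in this section; the balancing and health-check settings are illustrative:

```
# /etc/haproxy/haproxy.cfg (fragment) -- illustrative sketch only
frontend kube-apiserver
    bind 10.10.10.100:6443          # private-network VIP managed by keepalived
    mode tcp                        # layer-4 transparent proxy
    default_backend apiserver-nodes

backend apiserver-nodes
    mode tcp
    balance roundrobin
    option tcp-check                # plain TCP health check against each apiserver
    server master-k8s-n01 10.10.10.22:6443 check
    server master-k8s-n02 10.10.10.23:6443 check
    server master-k8s-n03 10.10.10.24:6443 check
```

Keepalived then floats the VIP between lvs-ha-n01 and lvs-ha-n02, so clients always reach whichever node currently holds 10.10.10.100.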
Master Cluster:
keepalived + haproxy make the apiserver cluster highly available; the other components, kube-controller-manager and kube-scheduler, use the etcd election mechanism and are deployed on the same three nodes as the apiserver components, forming a high-availability cluster.
Etcd Cluster:
All etcd components are deployed on the same hosts as the master nodes, across three nodes. TLS with HTTPS mutual authentication is enabled, forming a highly available etcd cluster.
Network:
Flanneld connects the network between master and worker nodes so that all nodes in the cluster can communicate with each other (the Calico network is also an option).
Client:
kubectl and kubens are deployed on the devops machine: kubectl operates on Kubernetes object resources through the REST API, while kubens is used to switch between namespaces.
Docker Image Repository:
VMware Harbor serves as the private Docker image registry, providing push/pull of private images and private image management.
The relevant server information is shown in Table 1 below:

hostname          | private IP  | public IP     | server role              | deployed software/applications
devops-k8s-n01    | 10.10.10.21 | 192.168.20.21 | operations control node  | kubectl, kubens, ansible, cfssl
lvs-ha-n01        | 10.10.10.30 | 192.168.20.30 | SLB load balancer        | keepalived, haproxy, bind
lvs-ha-n02        | 10.10.10.31 | 192.168.20.31 | SLB load balancer        | keepalived, haproxy, bind
master-k8s-n01    | 10.10.10.22 | 192.168.20.22 | master node 01           | all master components, flanneld plug-in
master-k8s-n02    | 10.10.10.23 | 192.168.20.23 | master node 02           | master components, flanneld plug-in
master-k8s-n03    | 10.10.10.24 | 192.168.20.24 | master node 03           | master components, flanneld plug-in
worker-k8s-n01    | 10.10.10.40 | 192.168.20.40 | worker node 01           | worker components, flanneld plug-in
worker-k8s-n02    | 10.10.10.41 | 192.168.20.41 | worker node 02           | worker components, flanneld plug-in
docker-hub-server | 10.10.10.20 | 192.168.20.20 | docker image repository  | docker, docker-compose, harbor
1.3. Component access policy
1.3.1 kube-apiserver:
Highly available behind the Keepalived + HAProxy layer-4 transparent proxy
Close the insecure port 8080 and reject anonymous access; access is token-based
Receive HTTPS requests on secure port 6443
Strict authentication and authorization policies (x509, token, RBAC)
Enable bootstrap token authentication to support kubelet TLS bootstrapping
Access kubelet and etcd over HTTPS so that communication is encrypted
1.3.2 kube-controller-manager:
3-node high availability
Close the insecure port; receive HTTPS requests on secure port 10252
Use kubeconfig to access apiserver's secure port
Automatically approve kubelet certificate signing requests (CSR); certificates are rotated automatically after expiration
Each controller uses its own ServiceAccount to access the apiserver
1.3.3 kube-scheduler:
3-node high availability
Use kubeconfig to access apiserver's secure port
1.3.4 kubelet:
Use kubeadm to dynamically create bootstrap tokens instead of statically configuring them in apiserver
Use the TLS bootstrap mechanism to automatically generate client and server certificates, rotated automatically after expiration
Configure the main parameters in a KubeletConfiguration-type JSON file
Close the read-only port; receive HTTPS requests on secure port 10250; authenticate and authorize requests, rejecting anonymous and unauthorized access
Use kubeconfig to access apiserver's secure port
1.3.5 kube-proxy:
Use kubeconfig to access apiserver's secure port
Configure the main parameters in a KubeProxyConfiguration-type JSON file
Use the ipvs proxy mode
2. Basic system settings
2.1 Domain name settings
Add the following resolution records to the bind configuration file; mo9.com is used as the domain name suffix. Installation of the bind service itself is not covered here.
A record          | domain suffix | resolved IP    | remarks
lvs-ha-n01        | mo9.com       | 10.10.10.30    | SLB load balancer 01, used as ssh host
lvs-ha-n02        | mo9.com       | 10.10.10.31    | SLB load balancer 02, used as ssh host
master-k8s-n01    | mo9.com       | 10.10.10.22    | master node 01 domain name, for metric data collection
master-k8s-n02    | mo9.com       | 10.10.10.23    | master node 02 domain name, for metric data collection
master-k8s-n03    | mo9.com       | 10.10.10.24    | master node 03 domain name, for metric data collection
worker-k8s-n01    | mo9.com       | 10.10.10.40    | worker node 01 domain name, for metric data collection
worker-k8s-n02    | mo9.com       | 10.10.10.41    | worker node 02 domain name, for metric data collection
registry-mirrors  | mo9.com       | 10.10.10.20    | docker private registry domain name, for private image upload/download
dev-kube-api      | mo9.com       | 10.10.10.100   | apiserver cluster access domain name; all components access the apiserver through it
dev-kube-api      | mo9.com       | 192.168.20.100 | apiserver cluster public-network access domain name, for public-network users
Note:
DNS resolution for the relevant hosts must be added: if the hostname value in the kubelet configuration is a hostname, pod logs cannot be viewed on the dashboard, with an error saying the node's domain name cannot be resolved. You can also set this value to an IP to avoid being unable to reach the port 10250 service. Alternatively, write the above records into the /etc/hosts file on each node.
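If you choose the /etc/hosts route, a minimal sketch could look like this (the add_cluster_hosts helper is hypothetical, introduced only for this example; the records come from the table above):

```shell
# Hypothetical helper: append the cluster's name-resolution records
# (from the table above) to a hosts file. Run as root against /etc/hosts.
add_cluster_hosts() {
    cat >> "$1" <<'EOF'
10.10.10.22 master-k8s-n01.mo9.com master-k8s-n01
10.10.10.23 master-k8s-n02.mo9.com master-k8s-n02
10.10.10.24 master-k8s-n03.mo9.com master-k8s-n03
10.10.10.40 worker-k8s-n01.mo9.com worker-k8s-n01
10.10.10.41 worker-k8s-n02.mo9.com worker-k8s-n02
10.10.10.20 registry-mirrors.mo9.com registry-mirrors
10.10.10.100 dev-kube-api.mo9.com dev-kube-api
EOF
}

# Usage (on each node, as root): add_cluster_hosts /etc/hosts
```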
2.2 Hostname and DNS resolver settings
Hostname settings
Set each host's hostname to the corresponding value shown in Table 1:
hostnamectl set-hostname <hostname>
DNS resolver settings for each node host
Set the DNS resolver address on each Linux host to the following:
search mo9.com
nameserver 10.10.10.30
nameserver 10.10.10.31
Note:
search mo9.com makes it possible to log in directly with ssh + hostname, and lets the dashboard resolve the worker nodes when collecting data.
2.3 SSH secret-free login configuration
All operations are performed on the devops-k8s-n01 node via ansible, so the devops machine needs passwordless SSH login to all other nodes. SSH password login itself is not described here.
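As a sketch of what that passwordless setup typically involves (the setup_passwordless_ssh helper is hypothetical; ssh-keygen and ssh-copy-id are the standard OpenSSH tools):

```shell
# Hypothetical helper: generate a key pair on devops-k8s-n01 (once),
# then push the public key to each node given as an argument.
# ssh-copy-id prompts for that host's password a single time.
setup_passwordless_ssh() {
    mkdir -p "$HOME/.ssh"
    [ -f "$HOME/.ssh/id_rsa" ] || \
        ssh-keygen -t rsa -b 2048 -N "" -f "$HOME/.ssh/id_rsa"
    for host in "$@"; do
        ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "$host"
    done
}

# Usage:
# setup_passwordless_ssh master-k8s-n01 master-k8s-n02 master-k8s-n03 \
#                        worker-k8s-n01 worker-k8s-n02 lvs-ha-n01 lvs-ha-n02
```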
Note:
Basic host configuration such as DNS resolution, hostname setting, passwordless SSH login, and basic kernel-parameter tuning was already done through system initialization when each Linux host was created, so it is not elaborated here.
3. Host initialization of every k8s node
All of the commands here must be executed on every host in the Kubernetes cluster; they are the basic settings required for the cluster environment.
3.1 install related dependency packages
yum install -y epel-release conntrack ipvsadm ipset jq sysstat curl libseccomp ntpdate ntp wget telnet rsync
Note:
These dependency packages mainly serve kubelet, kube-proxy, docker, and the network plug-in components on the worker nodes; the rest are basic network testing tools. Installing them on every machine in the cluster is recommended, because the flanneld plug-in requires them.
3.2 turn off the firewall
Turn off the firewall on each machine, clean up the firewall rules, and set the default forwarding policy:
systemctl stop firewalld >> /dev/null 2>&1
systemctl disable firewalld >> /dev/null 2>&1
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
3.3 close the swap partition
If the swap partition is enabled, the kubelet component on the worker nodes will fail to start (you can ignore swap being on by setting --fail-swap-on to false), so turn off the swap partition on each machine. At the same time, comment out the corresponding entries in /etc/fstab to prevent the swap partition from being mounted automatically at boot:
swapoff -a >> /dev/null 2>&1
sed -i '/ swap / s/^/#/' /etc/fstab
3.4 turn off SELinux
Close SELinux, otherwise a Permission denied error may be reported when K8S mounts directories:
setenforce 0 >> /dev/null 2>&1
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux >> /dev/null 2>&1
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config >> /dev/null 2>&1
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux >> /dev/null 2>&1
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config >> /dev/null 2>&1
3.5 close dnsmasq (optional)
When dnsmasq is enabled on a Linux system (for example, in a GUI environment), it sets the system DNS server to 127.0.0.1, which prevents docker containers from resolving domain names, so it needs to be turned off:
systemctl stop dnsmasq
systemctl disable dnsmasq
3.6 load kernel module
This is mainly because the kube-proxy component needs the ip_vs kernel module to route pod traffic to service endpoints.
sudo modprobe br_netfilter
sudo modprobe ip_vs
sudo modprobe ip_conntrack
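Note that modprobe only loads the modules for the current boot. A common way to make them persist is a systemd modules-load drop-in, sketched below (the write_modules_conf helper and the kubernetes.conf file name are hypothetical conveniences for this example):

```shell
# Hypothetical helper: write a systemd modules-load drop-in listing the
# modules so systemd reloads them on every boot.
write_modules_conf() {
    cat > "$1" <<'EOF'
br_netfilter
ip_vs
ip_conntrack
EOF
}

# Usage (as root): write_modules_conf /etc/modules-load.d/kubernetes.conf
# Verify the modules are currently loaded:
#   lsmod | grep -E 'br_netfilter|ip_vs'
```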
3.7 optimize kernel parameters
sudo sed -i '/net.ipv4.ip_forward/d' /etc/sysctl.conf
sudo sed -i '/net.bridge.bridge-nf-call-iptables/d' /etc/sysctl.conf
sudo sed -i '/net.bridge.bridge-nf-call-ip6tables/d' /etc/sysctl.conf
sudo sed -i '/net.ipv4.tcp_tw_recycle/d' /etc/sysctl.conf
sudo sed -i '/vm.swappiness/d' /etc/sysctl.conf
sudo sed -i '/vm.overcommit_memory/d' /etc/sysctl.conf
sudo sed -i '/vm.panic_on_oom/d' /etc/sysctl.conf
sudo sed -i '/fs.inotify.max_user_watches/d' /etc/sysctl.conf
sudo sed -i '/fs.file-max/d' /etc/sysctl.conf
sudo sed -i '/fs.nr_open/d' /etc/sysctl.conf
sudo sed -i '/net.ipv6.conf.all.disable_ipv6/d' /etc/sysctl.conf
sudo sed -i '/net.netfilter.nf_conntrack_max/d' /etc/sysctl.conf
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=1000000
fs.nr_open=1000000
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf >> /dev/null 2>&1
The values written to kubernetes.conf above are typical for these keys; tune them to your environment. The sed operations mainly prevent conflicts between existing sysctl.conf entries and the new kernel parameters. tcp_tw_recycle must be disabled, otherwise it conflicts with NAT and causes services to drop connections. IPv6 is disabled to avoid triggering a docker bug.
3.8 set the system time zone
# adjust the system time zone
timedatectl set-timezone Asia/Shanghai
# write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
hwclock -w
# update the time from an NTP server
ntpdate ntp1.aliyun.com
# restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
3.9 shut down unrelated services
systemctl stop postfix && systemctl disable postfix
3.10 set up rsyslogd and systemd journald (optional)
systemd's journald is the default logging tool on CentOS 7 and records logs for the system, the kernel, and every service unit. Compared with rsyslog, journald has the following advantages:
It can log to memory or to the file system (by default it logs to memory, under /run/log/journal).
It can limit the disk space occupied, guaranteeing remaining free space.
It can limit log file size and retention time.
journald forwards logs to rsyslog by default, which causes logs to be written twice; /var/log/messages then contains many irrelevant entries, which makes later inspection harder and also affects system performance.
mkdir /var/log/journal               # directory where logs are persisted
mkdir /etc/systemd/journald.conf.d
A typical configuration for the drop-in file (adjust the values as needed):
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# persist logs to disk
Storage=persistent
# compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# cap total disk usage and single-file size
SystemMaxUse=10G
SystemMaxFileSize=200M
# keep logs for 2 weeks
MaxRetentionSec=2week
# do not forward logs to rsyslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald >> /dev/null 2>&1
3.13 close NUMA
# Disable NUMA for the system
sudo sed -i "s:centos/swap rhgb:& numa=off:" /etc/sysconfig/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg >> /dev/null 2>&1
4. Install the ansible tool on the control machine
Since all operations are performed from the devops machine, the ansible tool is needed to run commands in batches against all machines, grouped by server role.
4.1 install ansible tools
yum install ansible -y
4.2 create the ansible hosts file, grouped by server role
[master_k8s_vgs]
master-k8s-n01 ansible_host=10.10.10.22
master-k8s-n02 ansible_host=10.10.10.23
master-k8s-n03 ansible_host=10.10.10.24

[worker_k8s_vgs]
worker-k8s-n01 ansible_host=10.10.10.40
worker-k8s-n02 ansible_host=10.10.10.41

[slb_ha_vgs]
ha-lvs-n01 ansible_host=10.10.10.1
ha-lvs-n02 ansible_host=10.10.10.2
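A sketch for installing this inventory into ansible's default location and sanity-checking it before use (the write_inventory helper is hypothetical; /etc/ansible/hosts is ansible's default inventory path):

```shell
# Hypothetical helper: write the role-based inventory shown above to a file.
write_inventory() {
    cat > "$1" <<'EOF'
[master_k8s_vgs]
master-k8s-n01 ansible_host=10.10.10.22
master-k8s-n02 ansible_host=10.10.10.23
master-k8s-n03 ansible_host=10.10.10.24

[worker_k8s_vgs]
worker-k8s-n01 ansible_host=10.10.10.40
worker-k8s-n02 ansible_host=10.10.10.41

[slb_ha_vgs]
ha-lvs-n01 ansible_host=10.10.10.1
ha-lvs-n02 ansible_host=10.10.10.2
EOF
}

# Usage: write_inventory /etc/ansible/hosts
# Then check SSH connectivity to every host before running anything:
#   ansible all -m ping
```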
4.3 system initialization Settings
Apply all of the system-initialization commands above to every server in the master_k8s_vgs and worker_k8s_vgs groups. Write the commands into a k8s initialization script and execute it with ansible's script module, as follows:
ansible master_k8s_vgs -m script -a "/home/gamaxwin/install_k8s_setup.sh" -b
ansible worker_k8s_vgs -m script -a "/home/gamaxwin/install_k8s_setup.sh" -b
At this point, the initialization of every node in the k8s cluster is basically complete. For cluster security, you still need to create TLS mutual-authentication certificates for k8s. Please refer to the second part: kubernetes Cluster Installation Guide: creating the CA certificate and the certificate keys of related components.