This article explains how to install kubernetes. The editor thinks it is very practical, so it is shared here for your reference; follow along and have a look.
How do you build a highly available k8s cluster with one command, without relying on haproxy and keepalived, and without ansible? The apiserver is load balanced through kernel ipvs, with health detection of the apiserver. The architecture is described below.
This project, called sealos, aims to be a simple, clean, lightweight and stable kubernetes installation tool with good support for highly available installations. It is not hard to make something powerful; it is much harder to make it extremely simple yet flexible to extend. The implementation therefore has to follow a few principles. Here are the design principles of sealos:
1. Design principle
Features and advantages of sealos:
Offline installation is supported, and the tool is separated from the resource package (binaries, configuration files, images, yaml files, etc.), so that different versions only need different offline packages.
Certificate extension
Easy to use
Support custom configuration
The kernel-level load balancing is extremely stable, and because the design is simple, troubleshooting is easy.
Why not use ansible?
It is true that version 1.0 was implemented with ansible, but the user still has to install ansible first, and ansible in turn needs python and some other dependencies. To spare users that trouble, ansible was put into a container, but then users who do not want to configure passwordless login need sshpass and so on. That did not satisfy me; it was not as simple as I wanted.
So I wanted a single binary tool with no dependencies at all: file distribution and remote commands are implemented by calling an sdk, relying on nothing else. That finally satisfied me as a cleanliness addict.
Why not use keepalived haproxy?
Running haproxy as a static pod is not a big problem, and it is easy to manage. keepalived, however, is installed with yum or apt in most open source ansible scripts, which is hard to control and has the following disadvantages:
Source inconsistency may lead to different versions, and the version is not always the same as the configuration file. I used to test that the script did not work and could not find the reason, but later I found out that it was the version.
If the system cannot be installed, some environments cannot be installed directly because of the dependent library problem.
After reading many installation scripts online, I found that many health-check scripts and weight adjustments are simply wrong: they check whether the haproxy process exists, when what should really be checked is whether the apiserver's healthz endpoint is healthy. If the apiserver is dead, the cluster is broken even if the haproxy process is still there; that is pseudo high availability (see the sketch after this list).
Management is not convenient. Monitoring the cluster through prometheus can directly monitor static pod, but running with systemd requires separate monitoring, and restart and other things need to be pulled up separately. Not as clean and simple as the unified management of kubelet.
We have also had cases where keepalived maxed out the CPU.
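To make the health-check point above concrete, here is a minimal sketch of the difference, assuming one of the master addresses from the examples in this article (192.168.0.2) and default anonymous access to the apiserver's healthz endpoint:
$ curl -k https://192.168.0.2:6443/healthz          # probes the apiserver itself; prints "ok" only when it is actually healthy
$ pgrep haproxy && echo "haproxy process exists"    # what many scripts check instead; proves nothing about the apiserver behind it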
So to work around these problems I ran keepalived in a container (the images provided by the community are basically unusable), which caused a lot of problems of its own, though they were eventually solved.
All in all it felt tiring, and I did not like it, so I started wondering whether haproxy and keepalived could be dropped entirely in favor of a simpler and more reliable solution.
Why not use envoy or nginx for local loads?
We solve the high availability problem through local load balancing.
> Local load: start a load balancer on every node, with the three masters as its upstreams.
Using a load balancer such as envoy means running an extra process on every node, consuming more resources, which I don't want. ipvs does run one extra process, lvscare, but lvscare is only responsible for managing the ipvs rules; like kube-proxy, the real traffic is handled by the very stable kernel and never has to be copied into user space.
There is also an architectural problem that makes envoy and the like awkward: if the load balancer is not up before join, join gets stuck and kubelet never comes up. So envoy would have to be started before join, which means it cannot be managed as a static pod; that is the same problem as the keepalived host deployment above. Managing it as a static pod creates a mutual dependency, a logical deadlock: the chicken says the egg must come first, the egg says the chicken must come first, and in the end neither exists.
Using ipvs is different. I can set up the ipvs rules before join, then join, then guard the rules. Once an apiserver becomes unreachable, its corresponding ipvs rules are automatically removed from all nodes and added back when the master returns to normal.
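For intuition, the rules lvscare guards are plain kernel ipvs rules. A rough, hand-made equivalent with ipvsadm might look like the sketch below, assuming the default vip 10.103.97.2 and the master addresses used in the examples later in this article (lvscare additionally health-checks the apiservers and adds or removes the real servers automatically):
$ ipvsadm -A -t 10.103.97.2:6443 -s rr                    # virtual server for the apiserver vip
$ ipvsadm -a -t 10.103.97.2:6443 -r 192.168.0.2:6443 -m   # real server: master0
$ ipvsadm -a -t 10.103.97.2:6443 -r 192.168.0.3:6443 -m   # real server: master1
$ ipvsadm -a -t 10.103.97.2:6443 -r 192.168.0.4:6443 -m   # real server: master2
$ ipvsadm -Ln                                             # inspect the rules on a node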
Why customize kubeadm?
First of all, because kubeadm hard-codes the certificate expiration time, it needs to be customized to extend the validity to 99 years. Although most people can re-sign certificates themselves, we would rather not depend on a separate tool, so we change the source code directly.
Secondly, modifying the kubeadm code is the most convenient place to implement the local load, because two things need to happen during join: first, create the ipvs rules before joining; second, create the static pod. Without customizing kubeadm here, it reports that files already exist in the static pod directory, and ignoring that error is not elegant. kubeadm also already provides some useful SDKs we can use to implement this.
With that done, the core functionality is built into kubeadm, and sealos becomes a lightweight tool that merely distributes files and executes upper-level commands; when adding nodes we can even use kubeadm directly.
2. Usage tutorial
Install dependencies (a setup sketch follows this list):
Install and start docker
Download the kubernetes offline installation package
Download the latest version of sealos
Support for kubernetes 1.14.0 +
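A hedged sketch of preparing a machine, assuming a CentOS-like host; the offline package URL is the one used later in this article, and the sealos binary itself should be taken from the project's release page:
$ yum install -y docker && systemctl enable docker && systemctl start docker   # or install docker however your distro prefers
$ wget https://sealyun.oss-cn-beijing.aliyuncs.com/free/kube1.15.0.tar.gz -O /root/kube1.15.0.tar.gz   # offline package
$ chmod +x sealos && mv sealos /usr/bin/                                       # after downloading the sealos binary from its releases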
Installation
Multi-master HA only needs to execute the following command:
$ sealos init --master 192.168.0.2 \
    --master 192.168.0.3 \
    --master 192.168.0.4 \
    --node 192.168.0.5 \
    --user root \
    --passwd your-server-password \
    --version v1.14.1 \
    --pkg-url /root/kube1.14.1.tar.gz
Then... there is no "then". Yes, your highly available cluster is already installed. A bit stunned? It really is that simple and fast!
Single master and multiple node:
$ sealos init --master 192.168.0.2 \
    --node 192.168.0.5 \
    --user root \
    --passwd your-server-password \
    --version v1.14.1 \
    --pkg-url /root/kube1.14.1.tar.gz
Using passwordless login or a key pair:
$ sealos init --master 172.16.198.83 \
    --node 172.16.198.84 \
    --pkg-url https://sealyun.oss-cn-beijing.aliyuncs.com/free/kube1.15.0.tar.gz \
    --version v1.15.0 \
    --pk /root/kubernetes.pem   # this is your ssh private key file
Parameter explanation:
--master   list of master server addresses
--node     list of node server addresses
--user     ssh user name on the servers
--passwd   ssh password on the servers
--pkg-url  location of the offline package; it can be a local path or an http URL, and sealos will fetch it onto the target machines
--version  kubernetes version
--pk       path to the ssh private key for passwordless login, default /root/.ssh/id_rsa
Other parameters:
--kubeadm-config string   kubeadm-config.yaml, a custom kubeadm configuration file
--vip string              virtual ip (default "10.103.97.2"); the virtual ip used by the local load, modification is not recommended, and it is not reachable from outside the cluster
Check to see if the installation is normal:
$ kubectl get node
NAME                      STATUS   ROLES    AGE     VERSION
izj6cdqfqw4o4o9tc0q44rz   Ready    master   2m25s   v1.14.1
izj6cdqfqw4o4o9tc0q44sz   Ready    master   119s    v1.14.1
izj6cdqfqw4o4o9tc0q44tz   Ready    master   63s     v1.14.1
izj6cdqfqw4o4o9tc0q44uz   Ready    <none>   38s     v1.14.1

$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5cbcccc885-9n2p8          1/1     Running   0          3m1s
kube-system   calico-node-656zn                                 1/1     Running   0          93s
kube-system   calico-node-bv5hn                                 1/1     Running   0          2m54s
kube-system   calico-node-f2vmd                                 1/1     Running   0          3m1s
kube-system   calico-node-tbd5l                                 1/1     Running   0          118s
kube-system   coredns-fb8b8dccf-8bnkv                           1/1     Running   0          3m1s
kube-system   coredns-fb8b8dccf-spq7r                           1/1     Running   0          3m1s
kube-system   etcd-izj6cdqfqw4o4o9tc0q44rz                      1/1     Running   0          2m25s
kube-system   etcd-izj6cdqfqw4o4o9tc0q44sz                      1/1     Running   0          2m53s
kube-system   etcd-izj6cdqfqw4o4o9tc0q44tz                      1/1     Running   0          118s
kube-system   kube-apiserver-izj6cdqfqw4o4o9tc0q44rz            1/1     Running   0          2m15s
kube-system   kube-apiserver-izj6cdqfqw4o4o9tc0q44sz            1/1     Running   0          2m54s
kube-system   kube-apiserver-izj6cdqfqw4o4o9tc0q44tz            1/1     Running   1          47s
kube-system   kube-controller-manager-izj6cdqfqw4o4o9tc0q44rz   1/1     Running   1          2m43s
kube-system   kube-controller-manager-izj6cdqfqw4o4o9tc0q44sz   1/1     Running   0          2m54s
kube-system   kube-controller-manager-izj6cdqfqw4o4o9tc0q44tz   1/1     Running   0          63s
kube-system   kube-proxy-b9b9z                                  1/1     Running   0          2m54s
kube-system   kube-proxy-nf66n                                  1/1     Running   0          3m1s
kube-system   kube-proxy-q2bqp                                  1/1     Running   0          118s
kube-system   kube-proxy-s5g2k                                  1/1     Running   0          93s
kube-system   kube-scheduler-izj6cdqfqw4o4o9tc0q44rz            1/1     Running   1          2m43s
kube-system   kube-scheduler-izj6cdqfqw4o4o9tc0q44sz            1/1     Running   0          2m54s
kube-system   kube-scheduler-izj6cdqfqw4o4o9tc0q44tz            1/1     Running   0          61s
kube-system   kube-sealyun-lvscare-izj6cdqfqw4o4o9tc0q44uz      1/1     Running   0          86s

Add nodes
First get the join command and execute it on master:
$ kubeadm token create --print-join-command
You can use the super (customized) kubeadm, but you need to add the --master parameters when joining:
$ cd kube/shell && init.sh
$ echo "10.103.97.2 apiserver.cluster.local" >> /etc/hosts   # using the vip
$ kubeadm join 10.103.97.2:6443 --token 9vr73a.a8uxyaju799qwdjv \
    --master 10.103.97.100:6443 \
    --master 10.103.97.101:6443 \
    --master 10.103.97.102:6443 \
    --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
You can also use the sealos join command:
$ sealos join --master 192.168.0.2 \
    --master 192.168.0.3 \
    --master 192.168.0.4 \
    --vip 10.103.97.2 \
    --node 192.168.0.5 \
    --user root \
    --passwd your-server-password \
    --pkg-url /root/kube1.15.0.tar.gz
Use a custom kubeadm configuration file
Sometimes you may need to customize the configuration file of kubeadm, such as adding the domain name sealyun.com to the certificate.
First, obtain the configuration file template:
$ sealos config -t kubeadm >> kubeadm-config.yaml.tmpl
Then modify the kubeadm-config.yaml.tmpl to add the sealyun.com to the configuration:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: {{.Version}}
controlPlaneEndpoint: "apiserver.cluster.local:6443"
networking:
  podSubnet: 100.64.0.0/10
apiServer:
  certSANs:
  - sealyun.com   # this is the newly added domain name
  - 127.0.0.1
  - apiserver.cluster.local
  {{range .Masters -}}
  - {{.}}
  {{end -}}
  - {{.VIP}}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  excludeCIDRs:
  - "{{.VIP}}/32"
Note: other parts do not need to be modified, sealos will automatically fill in the contents of the template.
Finally, use --kubeadm-config to specify the configuration file template when deploying:
$ sealos init --kubeadm-config kubeadm-config.yaml.tmpl \
    --master 192.168.0.2 \
    --master 192.168.0.3 \
    --master 192.168.0.4 \
    --node 192.168.0.5 \
    --user root \
    --passwd your-server-password \
    --version v1.14.1 \
    --pkg-url /root/kube1.14.1.tar.gz
Version upgrade
This tutorial takes upgrading from 1.14 to 1.15 as an example; other versions work on the same principle, and you can refer to the official tutorial for the rest.
Upgrade process
Upgrade kubeadm, all nodes import images
Upgrade control node
Upgrade kubelet on master (control node)
Upgrade other master (control node)
Upgrade node
Verify cluster status
Upgrade kubeadm
Copy the offline package to all nodes and execute cd kube/shell && sh init.sh. This updates the kubeadm, kubectl and kubelet binaries and imports the higher-version images.
Upgrade the control node:
$ kubeadm upgrade plan
$ kubeadm upgrade apply v1.15.0
Restart kubelet:
$systemctl restart kubelet
In fact, upgrading kubelet is simple and crude: just copy the new kubelet binary to /usr/bin and restart the kubelet service. If the binary is in use and cannot be overwritten, stop kubelet first and copy it again. The kubelet binary is in the conf/bin directory of the offline package.
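A minimal sketch of that manual replacement, assuming the offline package was unpacked under /root/kube and the new kubelet sits in its bin (or conf/bin) directory; adjust the path to your layout:
$ systemctl stop kubelet                        # only needed if the running binary cannot be overwritten
$ cp /root/kube/bin/kubelet /usr/bin/kubelet
$ systemctl restart kubelet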
Upgrade the other control nodes:
$ kubeadm upgrade apply
Upgrade nodes
Drain the node (whether to drain or not depends on your situation; it is also fine to just go at it directly if you prefer the rough way):
$kubectl drain $NODE-- ignore-daemonsets
Update the kubelet configuration:
$ kubeadm upgrade node config --kubelet-version v1.15.0
Then upgrade kubelet. Also replace the binary and restart kubelet service.
$systemctl restart kubelet
Recall lost love:
$ kubectl uncordon $NODE
Verify:
$ kubectl get nodes
If the version information is correct, the upgrade is basically successful.
What did kubeadm upgrade apply do?
Check whether the cluster can be upgraded
Enforce the version upgrade policy, i.e. which versions are allowed to upgrade to which
Confirm whether the image exists
Perform a control component upgrade and roll back if it fails. These containers are actually apiserver, controller manager, scheduler, etc.
Upgrade kube-dns and kube-proxy
Create new certificate files, backing up the old ones if they are more than 180 days old
Source code compilation
Because the netlink library is used, it is recommended to compile inside a container; it only takes one command:
$ docker run --rm -v $GOPATH/src/github.com/fanux/sealos:/go/src/github.com/fanux/sealos -w /go/src/github.com/fanux/sealos -it golang:1.12.7 go build
If you are using go mod, you need to specify to compile through vendor:
$ go build -mod vendor
Uninstall
$ sealos clean --master 192.168.0.2 \
    --master 192.168.0.3 \
    --master 192.168.0.4 \
    --node 192.168.0.5 \
    --user root \
    --passwd your-server-password
3. Sealos implementation principle
Execution flow
Copy the offline installation package to the target machine (masters and nodes) via sftp or wget.
Execute kubeadm init on master0.
Execute kubeadm join on the other masters to set up their control planes; this starts etcd on each of them, clusters it with master0's etcd, and starts the control plane components (apiserver, controller-manager, etc.).
Join the worker nodes, which configures the ipvs rules, /etc/hosts entries, and so on, on each node.
> All access to the apiserver goes through a domain name, because the apiserver address each component needs is different: a master accesses itself, while a node goes through the virtual ip, and kubeadm can only specify a single address in its configuration file. So a domain name is used, but each node resolves it to a different IP; when an IP address changes, only the resolution needs to be updated.
Local kernel load
Access to masters through local kernel load balancing on each node is implemented in this way:
(Figure: on every node a local ipvs virtual server, 127.0.0.1:6443, forwards to the apiservers of master0, master1 and master2 as real servers.)
Execute the following commands on master1 (assuming its vip address is 10.103.97.101):
$ echo "10.103.97.100 apiserver.cluster.local" >> /etc/hosts   # resolve to master0's address so the join works
$ kubeadm join 10.103.97.100:6443 --token 9vr73a.a8uxyaju799qwdjv \
    --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 \
    --experimental-control-plane \
    --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
$ sed "s/10.103.97.100/10.103.97.101/g" -i /etc/hosts   # then change the resolution to its own address; otherwise everything depends on master0, which is pseudo high availability
Execute the following command on master2 (assuming the vip address is 10.103.97.102):
$echo "10.103.97.100 apiserver.cluster.local" > > / etc/hosts$ kubeadm join 10.103.97.100 etc/hosts$ kubeadm join 6443-- token 9vr73a.a8uxyaju799qwdjv\-- discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866\-- experimental-control-plane\-- certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07 $sed "10.103.97.100 apiserver.cluster.local" 10.103.97.100 apiserver.cluster.local "- I / etc/hosts
Add the-- master parameter to join on node to specify the list of master addresses:
$echo "10.103.97.1 apiserver.cluster.local" > > / etc/hosts # needs to be parsed into a virtual ip$ kubeadm join 10.103.97.1 etc/hosts 6443-- token 9vr73a.a8uxyaju799qwdjv\-- master 10.103.97.100 apiserver.cluster.local 6443\-- master 10.103.97.101 apiserver.cluster.local 6443\-- master 10.103.97.102 apiserver.cluster.local 6443\-- discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 offline packet structure analysis. ── bin # specified version of binaries Only these three are needed. The other components are running in the container │ ├── kubeadm │ ├── kubectl │ └── kubelet ├── conf │ ├── 10-kubeadm.conf # this file is not used, I directly generated it in shell This can detect cgroup driver │ ├── dashboard │ │ ├── dashboard-admin.yaml │ │ └── │ heapster │ │ ├── grafana.yaml │ │ ├── heapster.yaml │ │ ├── influxdb.yaml rbac heapster-rbac.yaml kubeadm. Configuration file for yaml # kubeadm │ ├── kubelet.service # kubelet systemd configuration file │ ├── net │ │ └── calico.yaml │ └── promethus ├── images # all mirror packages │ └── images.tar └── shell ├── init.sh # initialization script └── master.sh # run master script
The init.sh script copies the binaries from the bin directory into $PATH, configures systemd, turns off swap and the firewall, and so on, and then imports the images needed by the cluster.
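A rough sketch of what an init.sh like this typically does, not the actual script; file names follow the package layout above:
#!/bin/bash
cp bin/* /usr/bin/                                            # kubeadm / kubectl / kubelet into $PATH
cp conf/kubelet.service /etc/systemd/system/kubelet.service   # systemd unit for kubelet
systemctl enable kubelet
swapoff -a                                                    # turn off swap
systemctl stop firewalld && systemctl disable firewalld       # turn off the firewall (if present)
docker load -i images/images.tar                              # import the images needed by the cluster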
master.sh mainly runs kubeadm init.
The conf directory contains the kubeadm configuration file, the calico yaml file, and so on.
sealos calls these two scripts, so it stays compatible with most environments; different versions can be kept compatible by fine-tuning the scripts.
Thank you for reading! This concludes the article on how to install kubernetes. I hope it has been helpful; if you found it useful, please share it so more people can see it.