1. Installation methods

1. Yum installation
At present, CentOS officially ships Kubernetes in its default extras repository, so it can be installed with yum. The advantage is simplicity, but the disadvantages are obvious: you depend on the yum repository being updated to get a newer version, you cannot choose the versions of the dependencies yourself, and if your operating system release is old, the Kubernetes version available from yum is limited as well, usually far behind the official releases. When I installed, the latest official release was 1.12, while the version in the yum repository was 1.5.2.
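You can confirm this before installing by asking yum what it would give you; a quick check such as the following (output naturally varies by mirror and date):

# show the version the default extras repository offers
yum info kubernetes-master

# list every version the configured repositories can install
yum list kubernetes-master --showduplicates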
2. Binary installation
Installing from binaries has the advantage that any version of Kubernetes can be installed, but the disadvantages are that configuration is more complex and that many of the packages cannot be downloaded from mainland China for network reasons.
Please check the blog post: https://blog.51cto.com/wzlinux/2322345
3. Kubeadm installation
Kubeadm is a tool provided by Kubernetes for quickly installing a Kubernetes cluster. With each Kubernetes release, kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the current best practices for cluster configuration.
For users of the major Linux distributions, Ubuntu Xenial and Red Hat CentOS 7, the familiar apt-get and yum can be used to install Kubernetes directly. Version 1.4, for example, introduced the kubeadm command, which reduces cluster startup to two commands and does away with the complex kube-up scripts.
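As a rough illustration of those two commands (the exact flags vary by kubeadm version, and the token and address below are placeholders rather than values from this article):

# on the master: initialize the control plane
kubeadm init

# on each worker: join the cluster using the token printed by kubeadm init
kubeadm join --token <token> 172.18.8.200:6443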
The official Kubernetes documentation is updated very quickly. The main features of kubeadm were already in beta in Kubernetes 1.9 and were expected to reach GA in 2018, which shows that kubeadm is getting close to production readiness. It is the installation method to watch in the future, but in order to understand the installation process, we will start with the other two methods.
Please check the blog post: https://blog.51cto.com/wzlinux/2322616
Here we choose the first method, installation via yum.
2. Main components

1. Master components
Master components provide the cluster's management control center. They can run on any node in the cluster, but for simplicity all of them are usually started on a single VM/machine, and user containers are not run on that machine.
Kube-apiserver
Kube-apiserver exposes the Kubernetes API. Every resource request and invocation goes through the interface provided by kube-apiserver.
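Once the apiserver configured later in this article is running, you can exercise that interface directly; a minimal check over the insecure HTTP port set up below (8080) might look like:

# query the apiserver's version over the insecure port
curl http://172.18.8.200:8080/version

# list the core API's supported versions
curl http://172.18.8.200:8080/api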
Etcd
Etcd is the default storage system provided by Kubernetes. It stores all cluster data, so you need a backup plan for the etcd data.
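As a sketch of such a backup plan, assuming the v2 data layout this Kubernetes version uses and the data directory configured later in this article, the bundled etcdctl can copy the data directory aside:

# copy the etcd v2 data directory to a backup location
etcdctl backup --data-dir /var/lib/etcd/default.etcd --backup-dir /var/lib/etcd/backup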
Kube-controller-manager
Kube-controller-manager runs the controllers, the background threads that handle routine tasks in the cluster. Logically each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
Kube-scheduler
Kube-scheduler watches newly created Pods that have not yet been assigned to a Node and selects a Node for each of them.

2. Node components
Kubelet
Kubelet is the primary node agent. It watches the Pods that have been assigned to its node, with these specific functions (a quick liveness check follows the list):
Mounts the volumes required by a Pod.
Downloads the Secrets for a Pod.
Runs the Pod's containers via docker (or, experimentally, rkt).
Performs container health checks periodically.
Reports the status of the Pod back to the rest of the system, creating a mirror Pod if necessary.
Reports the status of the node back to the rest of the system.
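One hedged way to check that a kubelet is alive is its local healthz endpoint; port 10248 is the usual default, an assumption here:

# a healthy kubelet answers "ok" on its local healthz port
curl http://127.0.0.1:10248/healthz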
Kube-proxy
Kube-proxy implements Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
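In the default iptables proxy mode those rules are visible directly on the node; a quick look (the KUBE-* chain names assume iptables mode):

# kube-proxy materializes Services as KUBE-* chains in the nat table
iptables -t nat -nL | grep KUBE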
Docker
Docker is used to run the container.
Flannel
Flannel is an overlay network (Overlay Network) tool designed by the CoreOS team for Kubernetes, and it must be downloaded and deployed separately. When Docker starts, it picks an IP range for talking to its containers; if nothing manages this, the same range may be reused on every machine, communication stays limited to the local host, and Docker containers on other machines cannot be reached. Flannel's purpose is to re-plan the IP address usage rules for all nodes in the cluster, so that containers on different nodes receive non-repeating addresses belonging to one internal network and can communicate with each other directly over those internal IPs.

3. Environment preparation

1. Node preparation

IP            Role     Main components
172.18.8.200  master   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
172.18.8.201  node01   kubelet, kube-proxy, docker, flannel
172.18.8.202  node02   kubelet, kube-proxy, docker, flannel
The node and network planning diagram is as follows:
[Figure: node and network planning diagram]
2. Software versions currently in the repository

Software           Version
kubernetes-master  1.5.2-0.7.git269f928.el7
kubernetes-node    1.5.2-0.7.git269f928.el7
CentOS 7.5         CentOS Linux release 7.5.1804
docker             docker-1.13.1-75
etcd               3.2.22-1.el7
flannel            0.7.1-4.el7

3. Environment configuration
Modify the file /etc/hostname on each node.
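For example, with systemd's hostnamectl, using the hostnames from this article:

# run the matching command on each machine
hostnamectl set-hostname master.wzlinux.com   # on 172.18.8.200
hostnamectl set-hostname node01.wzlinux.com   # on 172.18.8.201
hostnamectl set-hostname node02.wzlinux.com   # on 172.18.8.202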
Edit the file /etc/hosts, adding the following content:
172.18.8.200 master.wzlinux.com master
172.18.8.201 node01.wzlinux.com node01
172.18.8.202 node02.wzlinux.com node02
Turn off the firewall.
systemctl stop firewalld.service
systemctl disable firewalld.service
Disable SELinux.
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
Disable swap.
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

4. Master node installation

1. Software installation
Install the required software.
yum install kubernetes-master etcd -y
Modify the shared configuration file /etc/kubernetes/config so that KUBE_MASTER points at the master node. Because everything here runs on the same machine, this change could also be skipped.
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://172.18.8.200:8080"

2. Configure etcd
Because many of our services use etcd, we configure the etcd service first.
Edit the file /etc/etcd/etcd.conf on the master node and change it to the following content; the main change is the listening IP:
[root@master ~]# cat /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
Start the service.
systemctl start etcd
systemctl enable etcd
Check the startup status.
[root@master ~]# netstat -tlnp | grep etcd
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      1506/etcd
tcp6       0      0 :::2379                 :::*                    LISTEN      1506/etcd
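etcdctl can confirm the same thing from the client side; with the v2 API that this etcdctl uses by default:

# both commands should report a single healthy member
etcdctl cluster-health
etcdctl member list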
Deploying a multi-node cluster is also relatively easy; see https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/clustering.md. The key parameters in the [Clustering] section are listed below, with a sketch after the list.
ETCD_INITIAL_ADVERTISE_PEER_URLS: the addresses this member advertises to the rest of the cluster for peer communication. Cluster data is transmitted over these addresses, so they must be reachable by every member of the cluster.
ETCD_INITIAL_CLUSTER: the addresses of all members of the cluster.
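A hedged sketch of what those settings might look like for a three-member cluster built from this article's machines (the member names etcd01..etcd03 are assumptions):

# /etc/etcd/etcd.conf fragment on 172.18.8.200; repeat on the other
# members with their own ETCD_NAME and advertise URLs
ETCD_NAME="etcd01"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.18.8.200:2380"
ETCD_INITIAL_CLUSTER="etcd01=http://172.18.8.200:2380,etcd02=http://172.18.8.201:2380,etcd03=http://172.18.8.202:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"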
3. Configure apiserver service
Edit the file /etc/kubernetes/apiserver and modify it as follows, paying particular attention to the KUBE_ADMISSION_CONTROL parameter:
[root@master ~]# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://172.18.8.200:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
The files /etc/kubernetes/controller-manager and /etc/kubernetes/scheduler need no changes for now, so just start the services.
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
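Before checking the listening ports below, a quick sanity check with kubectl (pointed at the insecure port configured above) should show the control-plane components healthy:

# componentstatuses reports scheduler, controller-manager and etcd health
kubectl -s http://172.18.8.200:8080 get componentstatuses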
View the startup status of each service.
[root@master ~]# netstat -tlnp | grep kube-apiserver
tcp6       0      0 :::6443        :::*      LISTEN      1622/kube-apiserver
tcp6       0      0 :::8080        :::*      LISTEN      1622/kube-apiserver
[root@master ~]# netstat -tlnp | grep kube-scheduler
tcp6       0      0 :::10251       :::*      LISTEN      1646/kube-scheduler

5. Node configuration

1. Software installation

Install the required software.

yum install kubernetes-node flannel docker -y
Configure a domestic registry mirror (accelerator) for docker.
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json
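The original post is truncated at this point. Purely for illustration, such an accelerator setup typically completes the tee command along these lines; the mirror URL is a placeholder, not a value from the original:

# hypothetical completion of the truncated command above
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-mirror>.example.com"]
}
EOF

# reload and restart docker so the mirror takes effect
sudo systemctl daemon-reload
sudo systemctl restart docker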