
Binary Deployment of a Kubernetes Cluster: Reference Documentation (v1.15.0)


I. basic concepts

1. Concept

Kubernetes (usually written as "K8s") is Google's open-source container cluster management system. Its design goal is to provide a platform for automated deployment, scaling, and operation of application containers across clusters of hosts.

Kubernetes usually works together with the Docker container engine and manages clusters of hosts running Docker containers.

2. Functional characteristics

a. Automated container deployment
b. Automatic scaling of containers out and in
c. Load balancing between containers
d. Fast updates and fast rollbacks

3. Description of related components

3.1 Master node components

There are four main components running on the master node: api-server, scheduler, controller-manager, and etcd.

APIServer: APIServer provides the RESTful Kubernetes API and is the unified entry point for management operations. Any request to create, delete, modify, or query a resource is handled by APIServer and then persisted to etcd.

Scheduler: the scheduler's responsibility is clear: it assigns each pod to a suitable Node. If you treat the scheduler as a black box, its input is a pod plus a list of candidate Nodes, and its output is a binding of that pod to one Node, i.e. the Node the pod will be deployed to. Kubernetes ships with default scheduling algorithms, but it also exposes an interface so that users can plug in their own scheduling algorithms as needed.

Controller-manager: if APIServer handles the "front office", then controller-manager is in charge of the "back office". Each resource type generally has a corresponding controller, and controller-manager is responsible for managing these controllers. For example, when we create a pod through APIServer, APIServer's job ends once the pod object has been created; after that, keeping the actual state of the Pod consistent with the desired state is the ongoing task that controller-manager guarantees.

Etcd: etcd is a highly available key-value store that Kubernetes uses to persist the state of every resource, backing the RESTful API.

3.2 Node components

Each Node mainly runs three modules: kubelet, kube-proxy, and the container runtime.

Runtime: the container runtime environment. Kubernetes currently supports both Docker and rkt containers.

Kube-proxy: this module implements service discovery and reverse proxying in Kubernetes. As a reverse proxy, kube-proxy supports TCP and UDP connection forwarding and, by default, uses a Round Robin algorithm to forward client traffic to the set of backend pods behind a service. For service discovery, kube-proxy uses etcd's watch mechanism to monitor dynamic changes of Service and Endpoint objects in the cluster and maintains a service-to-endpoint mapping, so that IP changes of backend pods do not affect clients. In addition, kube-proxy supports session affinity.

Kubelet: kubelet is the agent of the Master on each Node and the most important module on the Node. It is responsible for maintaining and managing all containers on the Node, but it does not manage containers that were not created through Kubernetes. In essence, it is responsible for keeping the running state of each Pod consistent with the desired state.

3.3 Pod

Pod is the smallest schedulable unit in K8s. Each Pod runs one or more closely related business containers, and these business containers share the IP and Volumes of a special Pause container. This long-lived Pause container serves as the root container of the Pod, and its state represents the state of the whole container group. Once a Pod is created, it is stored in etcd, then bound to a Node by the Master's scheduler, and instantiated on that Node by kubelet. Each Pod is assigned its own Pod IP; Pod IP + ContainerPort together form an Endpoint.
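To make the "Pod IP + ContainerPort = Endpoint" idea concrete, here is a minimal, hypothetical Pod manifest in the same style used later in this article; the name, label, and image are illustrative only and are not part of the original deployment:

# cat nginx-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo          # hypothetical example name
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.17       # any reachable image works
    ports:
    - containerPort: 80     # Pod IP + this port forms one Endpoint
# kubectl apply -f nginx-demo.yaml
# kubectl get pod nginx-demo -o wide    # shows the Pod IP allocated from the flannel range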

3.4 Service

Service exposes applications. Pods have a life cycle and their own IP addresses, and as Pods are created and destroyed, something must ensure that clients can still reach the application despite these changes. This is where Service comes in: a Service, defined in YAML or JSON, is a logical grouping of Pods selected by some policy (typically a label selector). More importantly, the per-Pod IPs are exposed to the network through the Service.
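Continuing the hypothetical example above, a minimal Service that groups those Pods by label and exposes them through a node port (within the 30000-50000 range configured later for the API server) might look like this:

# cat nginx-demo-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo            # hypothetical example name
spec:
  type: NodePort              # expose the Pods outside the cluster on every node
  selector:
    app: nginx-demo           # matches the label of the example Pod
  ports:
  - port: 80                  # Service (cluster) port
    targetPort: 80            # containerPort on the backend Pods
    nodePort: 30080           # must fall inside --service-node-port-range
# kubectl apply -f nginx-demo-svc.yaml
# curl http://192.168.248.66:30080      # any node IP should answer once kube-proxy is running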

II. Installation and deployment

There are many ways to deploy Kubernetes; in this article we deploy from the binary packages.

1. Environmental introduction

Hostname      IP                Installed packages                                         System version
k8s-master    192.168.248.65    kube-apiserver, kube-scheduler, kube-controller-manager    Red Hat Enterprise Linux Server release 7.3
k8s-node1     192.168.248.66    etcd, kubelet, kube-proxy, flannel, docker                 Red Hat Enterprise Linux Server release 7.3
k8s-node2     192.168.248.67    etcd, kubelet, kube-proxy, flannel, docker                 Red Hat Enterprise Linux Server release 7.3
k8s-node3     192.168.248.68    etcd, kubelet, kube-proxy, flannel, docker                 Red Hat Enterprise Linux Server release 7.3

Software deployment version and download link

Version

Kubernetes version v1.15.0

Etcd version v3.3.10

Flannel version v0.11.0

Download link

Kubernetes Web site: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#v1150

Server side binaries: https://dl.k8s.io/v1.15.0/kubernetes-server-linux-amd64.tar.gz

Node side binaries: https://dl.k8s.io/v1.15.0/kubernetes-node-linux-amd64.tar.gz

Etcd Web site: https://github.com/etcd-io/etcd/releases

Flannel Web site: https://github.com/coreos/flannel/releases

2. Server initialization environment preparation

Synchronize system time

# ntpdate time1.aliyun.com
# echo "*/5 * * * * /usr/sbin/ntpdate -s time1.aliyun.com" > /var/spool/cron/root

Modify hostname

# hostnamectl --static set-hostname k8s-master
# hostnamectl --static set-hostname k8s-node1
# hostnamectl --static set-hostname k8s-node2
# hostnamectl --static set-hostname k8s-node3

Add hosts parsing

[root@k8s-master ~]# cat /etc/hosts
192.168.248.65 k8s-master
192.168.248.66 k8s-node1
192.168.248.67 k8s-node2
192.168.248.68 k8s-node3

Turn off and disable firewalld and selinux

# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# vim /etc/sysconfig/selinux
SELINUX=disabled

Close swap

# swapoff -a && sysctl -w vm.swappiness=0
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Set system parameters

# cat /etc/sysctl.d/kubernetes.conf
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
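To load these kernel parameters without a reboot, you can apply them immediately (an optional step not spelled out in the original):

# sysctl --system     # reloads every file under /etc/sysctl.d/, including kubernetes.conf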

3. Kubernetes cluster installation and deployment

All node nodes install docker-ce

# wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum makecache
# yum install docker-ce-18.06.2.ce-3.el7 -y
# systemctl start docker && systemctl enable docker

Create an installation directory

# mkdir /data/{install,ssl_config} -pv
# mkdir /data/ssl_config/{etcd,kubernetes} -pv
# mkdir /cloud/k8s/etcd/{bin,cfg,ssl} -pv
# mkdir /cloud/k8s/kubernetes/{bin,cfg,ssl} -pv

Add environment variables

# vim /etc/profile
#Kubernetes#
export PATH=$PATH:/cloud/k8s/etcd/bin/:/cloud/k8s/kubernetes/bin/
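After editing /etc/profile, reload it in the current shell so the new PATH entries take effect (an implicit step in the original):

# source /etc/profile
# echo $PATH     # should now include /cloud/k8s/etcd/bin/ and /cloud/k8s/kubernetes/bin/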

4. Create a ssl certificate

Download the certificate generation tool

[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master ~]# mv /usr/local/bin/cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master ~]# mv /usr/local/bin/cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master ~]# mv /usr/local/bin/cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
[root@k8s-master ~]# chmod +x /usr/local/bin/*

Create etcd related certificates

# etcd certificate ca configuration
[root@k8s-master etcd]# pwd
/data/ssl_config/etcd
[root@k8s-master etcd]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
# etcd ca csr file
[root@k8s-master etcd]# cat ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
# etcd server certificate
[root@k8s-master etcd]# cat server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "k8s-node3",
    "k8s-node2",
    "k8s-node1",
    "192.168.248.66",
    "192.168.248.67",
    "192.168.248.68"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
# generate etcd ca certificate and private key
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
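Optionally, before distributing the etcd certificates you can confirm that the server certificate really contains all three node names and IPs in its SAN list; this check is my addition, not part of the original steps:

[root@k8s-master etcd]# openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
# expected to list k8s-node1/2/3 and 192.168.248.66-68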

Create kubernetes related certificates

# kubernetes certificate ca configuration
[root@k8s-master kubernetes]# pwd
/data/ssl_config/kubernetes
[root@k8s-master kubernetes]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
# create ca certificate configuration
[root@k8s-master kubernetes]# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}
# generate API_SERVER certificate
[root@k8s-master kubernetes]# cat server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.248.65",
    "k8s-master",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}
# create Kubernetes Proxy certificate
[root@k8s-master kubernetes]# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}
# generate ca certificate
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# generate api-server certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# generate kube-proxy certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

5. Deploy etcd cluster (operate on all node nodes)

Extract and configure the etcd package

# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
# cp etcd-v3.3.10-linux-amd64/{etcd,etcdctl} /cloud/k8s/etcd/bin/

Write etcd configuration file

[root@k8s-node1 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.66:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.66:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.66:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.66:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node2 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.67:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.67:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.67:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.67:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node3 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.68:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.68:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.68:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.68:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Create an etcd startup file

[root@k8s-node1 ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/cloud/k8s/etcd/cfg/etcd
ExecStart=/cloud/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/cloud/k8s/etcd/ssl/server.pem \
--peer-key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Copy the generated etcd certificate file to all node nodes

[root@k8s-master etcd]# pwd
/data/ssl_config/etcd
[root@k8s-master etcd]# scp *.pem k8s-node1:/cloud/k8s/etcd/ssl/
[root@k8s-master etcd]# scp *.pem k8s-node2:/cloud/k8s/etcd/ssl/
[root@k8s-master etcd]# scp *.pem k8s-node3:/cloud/k8s/etcd/ssl/

Start the etcd cluster service

# systemctl daemon-reload
# systemctl enable etcd
# systemctl start etcd

Check the startup status (execute on any node node)

[root@k8s-node1 ssl]# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
cluster-health
member 2830381866015ef6 is healthy: got healthy result from https://192.168.248.67:2379
member 355a96308320dc2a is healthy: got healthy result from https://192.168.248.66:2379
member a9a44d5d05a31ce0 is healthy: got healthy result from https://192.168.248.68:2379
cluster is healthy
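The same TLS flags can be reused with "member list" to see each member's name and which one is currently the leader (an extra check, not in the original):

[root@k8s-node1 ssl]# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
member list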

6. Deploy flannel network (all node nodes)

Write pod network segment information to the etcd cluster (executed on any node node)

# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
set /coreos.com/network/config '{"Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'

View the network segment information written to the etcd cluster

# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
get /coreos.com/network/config

[root@k8s-node1 ssl]# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
> --cert-file=/cloud/k8s/etcd/ssl/server.pem \
> --key-file=/cloud/k8s/etcd/ssl/server-key.pem \
> --endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
> ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.18.95.0-24
/coreos.com/network/subnets/172.18.22.0-24
/coreos.com/network/subnets/172.18.54.0-24

Extract and configure the flannel network plug-in

# tar xf flannel-v0.11.0-linux-amd64.tar.gz
# mv flanneld mk-docker-opts.sh /cloud/k8s/kubernetes/bin/

Configure flannel

[root@k8s-node1 cfg]# cat /cloud/k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 --etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem --etcd-certfile=/cloud/k8s/etcd/ssl/server.pem --etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"

Configure the flanneld startup file
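The flanneld unit file itself did not survive extraction in this copy. Below is a typical flanneld systemd unit consistent with the paths used in this article and with the /run/flannel/subnet.env file referenced by the Docker unit that follows; treat it as a reconstruction under those assumptions rather than the author's verbatim file (it would normally be saved as /usr/lib/systemd/system/flanneld.service):

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
# reads FLANNEL_OPTIONS from the config file shown above
EnvironmentFile=/cloud/k8s/kubernetes/cfg/flanneld
ExecStart=/cloud/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
# writes DOCKER_NETWORK_OPTIONS into /run/flannel/subnet.env for the Docker unit below
ExecStartPost=/cloud/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target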

Configure Docker to start the specified subnet segment

[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Start the service

# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart docker

Verify the flannel network configuration

From each node, ping the docker0 IP addresses of the other nodes; if the pings succeed, the flannel network plug-in is deployed correctly.
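For example, assuming node1 was allocated 172.18.22.0/24 and node2 172.18.95.0/24 as in the subnet listing above (your allocations, and the .1 gateway assumption, may differ), the check could look like this:

[root@k8s-node1 ~]# ip addr show docker0 | grep 'inet '     # local docker0 address, e.g. 172.18.22.1/24
[root@k8s-node1 ~]# ping -c 3 172.18.95.1                   # docker0 address on k8s-node2; replies mean flannel routing works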

7. Deploy master node components

Extract the master node installation package

# tar xf kubernetes-server-linux-amd64.tar.gz
# cp kubernetes/server/bin/{kube-scheduler,kube-apiserver,kube-controller-manager,kubectl} /cloud/k8s/kubernetes/bin/

Configure kubernetes related certificates

# cp /data/ssl_config/kubernetes/*.pem /cloud/k8s/kubernetes/ssl/

Deploy kube-apiserver components

Create TLS Bootstrapping Token

[root@k8s-master cfg]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '    # generate a random token string
[root@k8s-master cfg]# pwd
/cloud/k8s/kubernetes/cfg
[root@k8s-master cfg]# cat token.csv
a081e7ba91d597006cbdacfa8ee114ac,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

kube-apiserver configuration file

[root@k8s-master cfg]# cat kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--etcd-servers=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 \
--bind-address=192.168.248.65 \
--secure-port=6443 \
--advertise-address=192.168.248.65 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/cloud/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"

Kube-apiserver startup file

[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/cloud/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/cloud/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-apiserver service

[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-apiserver
[root@k8s-master cfg]# systemctl start kube-apiserver
[root@k8s-master cfg]# ps -ef | grep kube-apiserver
root   1050     1  4 09:02 ?      00:25:21 /cloud/k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 --bind-address=192.168.248.65 --secure-port=6443 --advertise-address=192.168.248.65 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem --etcd-certfile=/cloud/k8s/etcd/ssl/server.pem --etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem
root   1888  1083  0 18:15 pts/0  00:00:00 grep --color=auto kube-apiserver
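As an additional sanity check not shown in the original, the API server's health endpoint can be queried. In v1.15 the insecure port 127.0.0.1:8080 is still enabled by default (the scheduler and controller-manager configured below connect to it), so from the master:

[root@k8s-master cfg]# curl http://127.0.0.1:8080/healthz        # should print "ok"
[root@k8s-master cfg]# curl -k https://192.168.248.65:6443/version   # -k because the CA is self-signed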

Deploy kube-scheduler components

Create the kube-scheduler configuration file

[root@k8s-master cfg]# cat /cloud/k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --master=127.0.0.1:8080 --leader-elect"

Create a kube-scheduler startup file

[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/cloud/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/cloud/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-scheduler service

[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-scheduler.service
[root@k8s-master cfg]# systemctl start kube-scheduler.service
[root@k8s-master cfg]# ps -ef | grep kube-scheduler
root   1716     1  0 16:12 ?      00:00:19 /cloud/k8s/kubernetes/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080 --leader-elect
root   1897  1083  0 18:21 pts/0  00:00:00 grep --color=auto kube-scheduler

Deploy kube-controller-manager components

Create the kube-controller-manager configuration file

[root@k8s-master cfg]# cat /cloud/k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem"

Create a kube-controller-manager startup file

[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/cloud/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-controller-manager service

[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-controller-manager
[root@k8s-master cfg]# systemctl start kube-controller-manager
[root@k8s-master cfg]# ps -ef | grep kube-controller-manager
root   1709     1  2 16:12 ?      00:03:11 /cloud/k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/cloud/k8s/kubernetes/ssl/ca.pem --cluster-signing-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem --root-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem --service-account-private-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem
root   1907  1083  0 18:29 pts/0  00:00:00 grep --color=auto kube-controller-manager

View cluster status

[root@k8s-master cfg]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

8. Deploy node node components (all node node operations)

Extract the node node installation package

[root@k8s-node1 install]# tar xf kubernetes-node-linux-amd64.tar.gz
[root@k8s-node1 install]# cp kubernetes/node/bin/{kubelet,kube-proxy} /cloud/k8s/kubernetes/bin/

Create a kubelet bootstrap.kubeconfig file

[root@k8s-master kubernetes]# cat environment.sh
# create kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Run environment.sh to generate bootstrap.kubeconfig.
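A typical invocation (my illustration; the original's exact command was lost in extraction) is to run the script from /data/ssl_config/kubernetes, where ca.pem was generated, since the script references ./ca.pem:

[root@k8s-master kubernetes]# sh environment.sh
[root@k8s-master kubernetes]# ls bootstrap.kubeconfig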

Create a kubelet.kubeconfig file

[root@k8s-master kubernetes]# cat envkubelet.kubeconfig.sh
# create kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet.kubeconfig
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet \
  --kubeconfig=kubelet.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kubelet.kubeconfig

Run the envkubelet.kubeconfig.sh script to generate kubelet.kubeconfig.

Create a kube-proxy.kubeconfig file

[root@k8s-master kubernetes]# cat env_proxy.sh
# create the kube-proxy kubeconfig file
BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Run the env_proxy.sh script to generate kube-proxy.kubeconfig.

Copy the kubeconfig generated above to all node nodes

[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node1:/cloud/k8s/kubernetes/cfg/
[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node2:/cloud/k8s/kubernetes/cfg/
[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node3:/cloud/k8s/kubernetes/cfg/

All node nodes create kubelet parameter configuration template files

[root@k8s-node1 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.66
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

[root@k8s-node2 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.67
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

[root@k8s-node3 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.68
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Create the kubelet configuration file

[root@k8s-node1 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node1 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@k8s-node2 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node2 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@k8s-node3 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node3 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Create a kubelet startup file

[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kubelet
ExecStart=/cloud/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Bind the kubelet-bootstrap user to the system cluster role (without binding the role, kubelet will not start successfully)

# kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Start the kubelet service (all node nodes)

[root@k8s-node1 cfg]# systemctl daemon-reload
[root@k8s-node1 cfg]# systemctl enable kubelet
[root@k8s-node1 cfg]# systemctl start kubelet
[root@k8s-node1 cfg]# ps -ef | grep kubelet
root    3306      1  2 09:02 ?      00:14:47 /cloud/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=k8s-node1 --kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig --config=/cloud/k8s/kubernetes/cfg/kubelet.config --cert-dir=/cloud/k8s/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root   87181  12020  0 19:22 pts/0  00:00:00 grep --color=auto kubelet

Approve kubelet CSR request on the master node

# kubectl get csr
# kubectl certificate approve $NAME
After approval, the status of each csr changes to Approved,Issued.
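If several nodes register at the same time, the following one-liner (my shorthand, not in the original) approves every pending request at once:

# kubectl get csr -o name | xargs kubectl certificate approve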

View cluster status and node nodes

[root@k8s-master kubernetes]# kubectl get cs,node
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-2               Healthy   {"health": "true"}
componentstatus/etcd-0               Healthy   {"health": "true"}
componentstatus/etcd-1               Healthy   {"health": "true"}

NAME             STATUS   ROLES    AGE    VERSION
node/k8s-node1   Ready    <none>   4d2h   v1.15.0
node/k8s-node2   Ready    <none>   4d2h   v1.15.0
node/k8s-node3   Ready    <none>   4d2h   v1.15.0

Deploy node kube-proxy components

[root@k8s-node1 cfg]# cat /cloud/k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node1 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

Create a kube-proxy startup file

[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-proxy
ExecStart=/cloud/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-proxy service

[root@k8s-node1 cfg]# systemctl daemon-reload
[root@k8s-node1 cfg]# systemctl enable kube-proxy
[root@k8s-node1 cfg]# systemctl start kube-proxy
[root@k8s-node1 cfg]# ps -ef | grep kube-proxy
root     966      1  0 09:02 ?      00:01:20 /cloud/k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=k8s-node1 --cluster-cidr=10.0.0.0/24 --kubeconfig=/cloud/k8s/kubernetes/cfg/kube-proxy.kubeconfig
root   87093  12020  0 19:22 pts/0  00:00:00 grep --color=auto kube-proxy

Deploy Coredns components

[root@k8s-master ~]# cat coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

[root@k8s-master ~]# kubectl apply -f coredns.yaml
serviceaccount/coredns unchanged
clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:coredns unchanged
configmap/coredns unchanged
deployment.extensions/coredns unchanged
service/kube-dns unchanged

[root@k8s-master ~]# kubectl get deployment -n kube-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   3/3     3            3           33h

[root@k8s-master ~]# kubectl get deployment -n kube-system -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                  SELECTOR
coredns   3/3     3            3           33h   coredns      coredns/coredns:1.3.1   k8s-app=kube-dns

[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
coredns-b49c586cf-nwzv6   1/1     Running   1          33h   172.18.54.3   k8s-node3   <none>           <none>
coredns-b49c586cf-qv5b9   1/1     Running   1          33h   172.18.22.3   k8s-node1   <none>           <none>
coredns-b49c586cf-rcqhc   1/1     Running   1          33h   172.18.95.2   k8s-node2   <none>           <none>

[root@k8s-master ~]# kubectl get svc -n kube-system -o wide
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE   SELECTOR
kube-dns   ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   33h   k8s-app=kube-dns
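To confirm that CoreDNS is actually answering queries (my own check, not part of the original), resolve the kubernetes service from a throwaway busybox pod:

[root@k8s-master ~]# kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
# the lookup should be answered by the kube-dns service at 10.0.0.2 and resolve kubernetes.default to the cluster IP 10.0.0.1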

At this point, this bare-bones (minimal) deployment of Kubernetes v1.15.0 is complete.
