
Kubernetes binary installation and configuration (1.11.6)


One: Environment preparation

1 Introduction

1 The master node consists mainly of three components:

1 apiserver provides the single entry point for resource operations and supplies the mechanisms for authentication, authorization, access control, and API registration and discovery

2 scheduler is responsible for resource scheduling, dispatching Pods to the appropriate nodes according to the predetermined scheduling policy

3 controller-manager is responsible for maintaining the state of the cluster, handling fault detection, automatic scaling, rolling updates, etc.

2 The node machines mainly contain

1 kubelet maintains the life cycle of containers, as well as volume mounting and network management

2 kube-proxy provides service discovery and load balancing for Services inside the cluster

3 other core components

1 etcd saves the entire cluster state

2 flannel provides a network environment for the cluster

4 kubernetes core plug-in

1 coreDNS is responsible for providing DNS services for the entire cluster

2 ingress controller provides public network access for the service

3 prometheus provides resource monitoring

4 dashboard provides the GUI

2 Experimental environment

Role                  IP address       Related components
master1               192.168.1.10     docker, etcd, kubectl, flannel, kube-apiserver, kube-controller-manager, kube-scheduler
master2               192.168.1.20     docker, etcd, kubectl, flannel, kube-apiserver, kube-controller-manager, kube-scheduler
node1                 192.168.1.30     kubelet, kube-proxy, docker, flannel, etcd
node2                 192.168.1.40     kubelet, kube-proxy, docker, flannel
nginx load balancer   192.168.1.100    nginx

Note:

1 disable SELinux

2 disable the firewalld firewall (stop the service and disable it at boot)

3 set up time synchronization

4 configure name resolution between the master and node machines, which can be done directly in /etc/hosts

5 disable swapping

Echo "vm.swappiness = 0" > > / etc/sysctl.conf

Sysctl-p
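If you want swap fully off rather than just discouraged, a common additional step is the following; a minimal sketch, assuming the swap entry lives in /etc/fstab:

swapoff -a                           # turn off all active swap immediately
sed -i '/ swap / s/^/#/' /etc/fstab  # comment out the swap line so it stays off after a reboot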

Two: Basic deployment of etcd (master1, master2, node1)

1 Introduction

etcd is a key-value store that supports leader election among its members. All of the cluster state is stored in etcd; one member is elected leader and the remaining members become followers, which continuously synchronize the leader's data. Because the cluster must be able to elect a leader to work properly, the number of etcd members must be odd.

Related etcd selection

Considering the reading and writing efficiency and stability of etcd, the basic options are as follows:

If only one or two servers make up the kubernetes cluster, deploy a single etcd node.

If three or four servers make up the kubernetes cluster, deploy three etcd nodes.

If five or six servers make up the kubernetes cluster, deploy five etcd nodes.

Communication between etcd members is point-to-point HTTPS.

External communication is also encrypted point to point; within the kubernetes cluster, clients reach etcd through the apiserver.

A CA certificate is therefore required for communication between etcd members, for communication between etcd and its clients, and for the apiserver itself.

2 generate certificates

1 use cfssl to generate a self-signed certificate and download the corresponding tool

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

2 authorize and move

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

3 create a file and generate the corresponding certificate

1 ca-config.json

{"signing": {"default": {"expiry": "87600h"}, "profiles": {"www": {"expiry": "87600h", "usages": ["signing", "key encipherment" "server auth", "client auth"]}}

2 ca-csr.json

{"CN": "etcd CA", "key": {"algo": "rsa", "size": 2048}, "names": [{"C": "CN", "L": "Shaanxi", "ST": "xi'an"}]}

3 server-csr.json

{"CN": "etcd", "hosts": ["192.168.1.10", "192.168.1.20", "192.168.1.30"], "key": {"algo": "rsa", "size": 2048} "names": [{"C": "CN", "L": "Shaanxi", "ST": "xi'an"}]}

4 generate certificates

1 cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

The results are as follows

2 cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

The results are as follows
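To double-check what went into the server certificate (for example, that all three etcd member IPs are in the SAN list), the cfssl-certinfo tool installed above can decode it; a quick sketch:

cfssl-certinfo -cert server.pem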

3 deploy etcd

1 download related software packages

wget https://github.com/etcd-io/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz

2 create a directory of related configuration files and extract the relevant configuration

mkdir -p /opt/etcd/{bin,cfg,ssl}
tar xf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

The results are as follows:

3 create the configuration file /opt/etcd/cfg/etcd

# [Member] ETCD_NAME= "etcd01" ETCD_DATA_DIR= "/ var/lib/etcd/default.etcd" ETCD_LISTEN_PEER_URLS= "https://192.168.1.10:2380" ETCD_LISTEN_CLIENT_URLS=" https://192.168.1.10:2379" # [Clustering] ETCD_INITIAL_ADVERTISE_PEER_URLS= "https://192.168.1.10:2380" ETCD_ADVERTISE_CLIENT_URLS=" https://192.168.1.10: 2379 "ETCD_INITIAL_CLUSTER=" etcd01= https://192.168.1.10:2380, Etcd02= https://192.168.1.20:2380,etcd03=https://192.168.1.30:2380" ETCD_INITIAL_CLUSTER_TOKEN= "etcd-cluster" ETCD_INITIAL_CLUSTER_STATE= "new"

Parameter descriptions:

ETCD_NAME # Node name

ETCD_DATA_DIR # data directory, used to store node ID, cluster ID, etc.

ETCD_LISTEN_PEER_URLS # listens on URL for communication with other nodes (local IP plus port)

ETCD_LISTEN_CLIENT_URLS # client access listening address

ETCD_INITIAL_ADVERTISE_PEER_URLS # cluster advertise address

ETCD_ADVERTISE_CLIENT_URLS # client advertise address, announced to the other members and clients for communication

ETCD_INITIAL_CLUSTER # cluster member list, all node addresses in the cluster

ETCD_INITIAL_CLUSTER_TOKEN # cluster token

ETCD_INITIAL_CLUSTER_STATE # the state when joining the cluster: new means a new cluster, existing means joining an existing cluster

4 create startup file etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

The results are as follows

Copy the certificates and keys generated earlier to the specified location (/opt/etcd/ssl)
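The copy command itself is not shown; assuming the pem files were generated in the current working directory, it amounts to roughly:

cp ca*.pem server*.pem /opt/etcd/ssl/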

Master2

1 create the directories

mkdir -p /opt/etcd/{bin,cfg,ssl}

2 copy the relevant configuration files to the specified node master2
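Mirroring the node1 copy shown further below, the transfer would look roughly like this (a sketch):

scp /opt/etcd/cfg/* master2:/opt/etcd/cfg/
scp /opt/etcd/bin/* master2:/opt/etcd/bin/
scp /opt/etcd/ssl/* master2:/opt/etcd/ssl/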

Modify configuration information

Etcd configuration

# [Member] ETCD_NAME= "etcd02" ETCD_DATA_DIR= "/ var/lib/etcd/default.etcd" ETCD_LISTEN_PEER_URLS= "https://192.168.1.20:2380"ETCD_LISTEN_CLIENT_URLS="https://192.168.1.20:2379" # [Clustering] ETCD_INITIAL_ADVERTISE_PEER_URLS=" https://192.168.1.20:2380"ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.20: 2379 "ETCD_INITIAL_CLUSTER=" etcd01= https://192.168.1.10:2380, Etcd02= https://192.168.1.20:2380,etcd03=https://192.168.1.30:2380"ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"ETCD_INITIAL_CLUSTER_STATE="new"

The results are as follows

Node1 node configuration, same as master2

mkdir -p /opt/etcd/{bin,cfg,ssl}
scp /opt/etcd/cfg/* node1:/opt/etcd/cfg/
scp /opt/etcd/bin/* node1:/opt/etcd/bin/
scp /opt/etcd/ssl/* node1:/opt/etcd/ssl/

Modify the configuration file

# [Member] ETCD_NAME= "etcd03" ETCD_DATA_DIR= "/ var/lib/etcd/default.etcd" ETCD_LISTEN_PEER_URLS= "https://192.168.1.30:2380"ETCD_LISTEN_CLIENT_URLS="https://192.168.1.30:2379" # [Clustering] ETCD_INITIAL_ADVERTISE_PEER_URLS=" https://192.168.1.30:2380"ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.30: 2379 "ETCD_INITIAL_CLUSTER=" etcd01= https://192.168.1.10:2380, Etcd02= https://192.168.1.20:2380,etcd03=https://192.168.1.30:2380"ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"ETCD_INITIAL_CLUSTER_STATE="new"

The configuration results are as follows

4 start the service

Start etcd on all three nodes and enable it at boot (start them at roughly the same time)

systemctl start etcd
systemctl enable etcd.service

Verify:

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.1.10:2379,https://192.168.1.20:2379,https://192.168.1.30:2379" \
cluster-health

The results are as follows

ETCD extension reading: https://www.kubernetes.org.cn/5021.html

Three: Install docker on all nodes (except the load-balancer node)

1 install the dependency packages
yum install -y yum-utils device-mapper-persistent-data lvm2

2 add the docker yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

3 install docker-ce
yum install docker-ce -y

4 configure the registry mirror
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io

5 restart docker and enable it at boot
systemctl restart docker
systemctl enable docker

6 view the result

The results are as follows

Four: Deploy the flannel network (all nodes except the load balancer)

1 Introduction

By default, flannel uses vxlan (supported by the Linux kernel since 3.7) as the backend transport. It does not support network policies. It is based on Linux TUN/TAP forwarding and maintains the network allocation with the help of etcd.

Flannel avoids subnet conflicts by reserving a large network (the network written to etcd below), automatically assigning a subnet of it to the docker engine on each node, and persisting the assignments in etcd.

There are three modes of flannel:

1 vxlan

2 Directrouting vxlan: nodes on the same network segment communicate via host-gw routes, and all other traffic falls back to vxlan.

3 host-gw: host gateway; packets are forwarded to the destination container address directly through routes created on the node. This requires that all nodes are on the same network segment. host-gw has the best forwarding performance and is easy to set up; crossing multiple networks involves more routes and degrades performance.

4 UDP: tunnel forwarding using plain UDP packets; low performance, used only when neither of the first two is supported.

2 assign subnets and write to etcd

flanneld stores its own subnet information in etcd, so make sure etcd can be reached and the planned network segment is written to it first.

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.1.10:2379,https://192.168.1.20:2379,https://192.168.1.30:2379" \
set /coreos.com/network/config '{"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

3 download and configure flannel

1 download flannel
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

2 unpack it
tar xf flannel-v0.10.0-linux-amd64.tar.gz

3 create the kubernetes binary directory
mkdir -p /opt/kubernetes/bin

4 move the binaries into it
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

5 create the configuration directory
mkdir /opt/kubernetes/cfg

6 create the flannel configuration file /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.1.10:2379,https://192.168.1.20:2379,https://192.168.1.30:2379 --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem"

7 configure systemd to manage the flannel network

cat /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

The results are as follows

8 configure docker to use the flannel network (/usr/lib/systemd/system/docker.service)

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

The results are as follows

9 copy and move to each node

1 other nodes create related etcd directories and related kubernetes directories

mkdir -p /opt/kubernetes/{cfg,bin}
mkdir -p /opt/etcd/ssl

2 copy the relevant configuration files to the specified host

Copy the flanneld-related configuration

Copy docker configuration information

10 start flanneld and restart docker

systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

11 View results

Note: make sure that docker0 and flannel.1 are on the same network segment, and that each node can reach the docker0 IP of the other nodes.
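A quick way to perform the check in the note on each node, a sketch (the ping target is a placeholder for another node's docker0 address):

ip -4 addr show flannel.1
ip -4 addr show docker0
ping -c 2 <docker0 IP of another node>    # placeholder address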

The results are as follows

View flannel configuration in etcd

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.1.10:2379,https://192.168.1.20:2379,https://192.168.1.30:2379" \
get /coreos.com/network/config

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.1.10:2379,https://192.168.1.20:2379,https://192.168.1.30:2379" \
get /coreos.com/network/subnets/172.17.56.0-24

Five: Deploy kube-apiserver on the master nodes (master1, master2)

1 Introduction

kube-apiserver provides the single entry point for resource operations and supplies the mechanisms for authentication, authorization, access control, and API registration and discovery.

To view the related logs:

journalctl -exu kube-apiserver

2 generate and configure the related keys and certificates

1 create the kubernetes directory where the certificates are stored
mkdir /opt/kubernetes/ssl

2 enter the directory and create the certificate request files

cat ca-config.json

{"signing": {"default": {"expiry": "87600h"}, "profiles": {"kubernetes": {"expiry": "87600h", "usages": ["signing", "key encipherment", "server auth", "client auth"]}

cat ca-csr.json

{"CN": "kubernetes", "key": {"algo": "rsa", "size": 2048}, "names": [{"C": "CN", "L": "Shaanxi", "ST": "xi'an", "O": "K8s" "OU": "System"}]}

cat server-csr.json

{"CN": "kubernetes", "hosts": ["10.0.0.1", "127.0.0.1", "192.168.1.10", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local"], {"CN": "kubernetes" "hosts": ["10.0.0.1", "127.0.0.1", "192.168.1.10", "192.168.1.20", "192.168.1.100", "kubernetes", "kubernetes.default", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local"] "key": {"algo": "rsa", "size": 2048}, "names": [{"C": "CN", "L": "Shannxi", "ST": "xi'an", "O": "K8s" "OU": "System"}} "key": {"algo": "rsa", "size": 2048}, "names": [{"C": "CN", "L": "Shannxi", "ST": "xi'an", "O": "k8s" "OU": "System"}]}

cat kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shaanxi",
      "ST": "xi'an",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

3 generate the certificates

1

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

View

2

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

View the result

3

Generate the kube-proxy certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

3 download and configure

1 download the package

wget https://storage.googleapis.com/kubernetes-release/release/v1.11.6/kubernetes-server-linux-amd64.tar.gz

Decompress the package

tar xf kubernetes-server-linux-amd64.tar.gz

Enter the binary directory

cd kubernetes/server/bin/

Move the binaries to the specified directory:

cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/

The results are as follows

2 create the token file (the string is random; what matters is that the file name, directory, and the token value used later stay consistent)

Create a token, which will be used later

/opt/kubernetes/cfg/token.csv:

674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

The format is as follows:

Description:

First column: random string, generated by yourself

Second column: user name

The third column: UID

Fourth column: user groups

The results are as follows:
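The random string can be generated however you like; one common sketch:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '    # prints a 32-character hex string to use as the token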

3 create the configuration file for kube-apiserver (/opt/kubernetes/cfg/kube-apiserver)

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.10:2379,https://192.168.1.20:2379,https://192.168.1.30:2379 \
--bind-address=192.168.1.10 \
--secure-port=6443 \
--advertise-address=192.168.1.10 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

Parameter descriptions

-- logtostderr enables logging

-- v log level

-- etcd-servers etcd cluster address

-- bind-address listening address

-- secure-port https secure port

-- advertise-address the address advertised to the rest of the cluster

-- allow-privileged allow privileged containers

-- service-cluster-ip-range Service virtual IP address field

-- enable-admission-plugins admission control module

-- authorization-mode authentication authorization, enabling RBAC authorization and node self-management

4 create the systemd unit file /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

The results are as follows:

5 copy the configuration files to master2

scp /opt/kubernetes/cfg/kube-apiserver master2:/opt/kubernetes/cfg/
scp /opt/kubernetes/bin/* master2:/opt/kubernetes/bin
scp /usr/lib/systemd/system/kube-apiserver.service master2:/usr/lib/systemd/system
scp /opt/kubernetes/ssl/* master2:/opt/kubernetes/ssl/
scp /opt/kubernetes/cfg/token.csv master2:/opt/kubernetes/cfg/

Results:

6 modify master2-related configuration files

/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.10:2379,https://192.168.1.20:2379,https://192.168.1.30:2379 \
--bind-address=192.168.1.20 \
--secure-port=6443 \
--advertise-address=192.168.1.20 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

7 start kube-apiserver on master1 and master2 and enable it at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

8 query and display the results
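A quick sanity check that the apiserver came up on each master, a sketch:

systemctl status kube-apiserver
ss -lntp | grep 6443    # the secure port configured above should be listening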

Six: Deploy kube-scheduler on the master nodes (master1, master2)

1 Introduction

scheduler is responsible for resource scheduling and dispatches Pods to the appropriate nodes according to the predetermined scheduling policy.

2 add profile

/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

Parameter description:

-- master links to local apiserver

-- leader-elect "automatic election when this component starts multiple times"

3 configure startup file

/usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Result

4 copy the configuration file and binary to master2

scp /opt/kubernetes/cfg/kube-scheduler master2:/opt/kubernetes/cfg/
scp /opt/kubernetes/bin/kube-scheduler master2:/opt/kubernetes/bin/
scp /usr/lib/systemd/system/kube-scheduler.service master2:/usr/lib/systemd/system

5 start it on master1 and master2 and enable it at boot

systemctl daemon-reload
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service

View the result

Seven: Deploy kube-controller-manager on the master nodes (master1, master2)

1 Introduction

Responsible for maintaining the state of the cluster, handling fault detection, automatic scaling, rolling updates, etc.

2 configure the related files

1 configuration file

/opt/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"

2 systemd unit file configuration

/usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3 copy the configuration file and binary to the master2 node

scp /opt/kubernetes/bin/kube-controller-manager master2:/opt/kubernetes/bin/
scp /opt/kubernetes/cfg/kube-controller-manager master2:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/kube-controller-manager.service master2:/usr/lib/systemd/system

4 start and view the result

1 start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

View the result

2 View the final master startup result
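One way to view the final result is to query the component statuses with the kubectl binary copied earlier; a sketch:

/opt/kubernetes/bin/kubectl get cs    # etcd, scheduler and controller-manager should all report Healthy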

Eight: Configure the load balancer (nginx)

1 Introduction:

Used to provide load balancing for master1 and master2 nodes

2 configure nginx yum source

/etc/yum.repos.d/nginx.repo

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/x86_64/
gpgcheck=0
enabled=1

3 install and configure nginx

yum -y install nginx

/etc/nginx/nginx.conf

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

stream {
    upstream api-server {
        server 192.168.1.10:6443;
        server 192.168.1.20:6443;
    }
    server {
        listen 6443;
        proxy_pass api-server;
    }
}

4 check and start nginx
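The check in step 4 can be a configuration syntax test before starting; a minimal sketch:

nginx -t    # validates /etc/nginx/nginx.conf, including the stream block above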

systemctl start nginx

systemctl enable nginx

5 View

Nine: Deploy the kubelet component on the node machines (node1, node2)

1 Introduction

Responsible for maintaining the life cycle of containers, as well as volume mounting and network management

2 create and generate the related configuration files

1 on the master node, bind the kubelet-bootstrap user to the cluster role

cd /opt/kubernetes/bin/
./kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

2 create the kubeconfig files

Execute under the directory where the certificate was generated

cd /opt/kubernetes/ssl/

The script is as follows

environment.sh

# create the kubelet bootstrapping kubeconfig
# BOOTSTRAP_TOKEN is the random string generated above
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
# the IP address and port are the virtual address of the load balancer
KUBE_APISERVER="https://192.168.1.100:6443"

# set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

3 configure the kubectl environment variable

/etc/profile

export PATH=/opt/kubernetes/bin:$PATH

source /etc/profile

4 execute script

sh environment.sh

3 copy the related files

1 copy the generated kubeconfig files to the node1 and node2 nodes

scp -rp bootstrap.kubeconfig kube-proxy.kubeconfig node1:/opt/kubernetes/cfg/
scp -rp bootstrap.kubeconfig kube-proxy.kubeconfig node2:/opt/kubernetes/cfg/

2 copy the required binaries from the kubernetes package downloaded earlier; enter the directory

cd /root/kubernetes/server/bin

3 copy the binaries

scp kubelet kube-proxy node1:/opt/kubernetes/bin/
scp kubelet kube-proxy node2:/opt/kubernetes/bin/

4 configure the related files

1 node1 kubelet configuration file

/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.30 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Parameter description:

--hostname-override the hostname shown for this node in the cluster

--kubeconfig the location of the kubeconfig file (generated automatically)

--bootstrap-kubeconfig the bootstrap kubeconfig file

--cert-dir where the issued certificates are stored

--pod-infra-container-image the pause image used for the Pod network namespace

2 configure kubelet.config

/opt/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.30
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

3 systemd service configuration

/usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

4 copy the configuration files to the node2 node

scp /opt/kubernetes/cfg/kubelet node2:/opt/kubernetes/cfg/
scp /opt/kubernetes/cfg/kubelet.config node2:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/kubelet.service node2:/usr/lib/systemd/system

5 modify the node2 configuration

/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.40 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

/opt/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.40
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

5 start and view

1 start the service

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

2 view

Ten: Deploy the kube-proxy component on the node machines (node1, node2)

1 Introduction

Responsible for service discovery and load balancing for Services inside the cluster (implemented by creating the relevant iptables or ipvs rules)

2 create file 1 create configuration file

/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.30 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

2 create the systemd unit file

/usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3 copy the configuration to the node2 node and modify it

1 copy the configuration

scp /opt/kubernetes/cfg/kube-proxy node2:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/kube-proxy.service node2:/usr/lib/systemd/system

2 modify the node2 configuration file

/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.40 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

4 start the service and view the result

1 start the service

systemctl daemon-reload
systemctl enable kube-proxy.service
systemctl start kube-proxy.service

2 view the result

5 approve the node certificate requests on the master
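Approval is done on the master with kubectl; a sketch (the CSR name is whatever the first command reports):

kubectl get csr
kubectl certificate approve <csr-name>    # <csr-name> is a placeholder for the name listed above
kubectl get nodes                         # the nodes should appear and become Ready shortly afterwards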

6 create Pods to verify

kubectl run nginx --image=nginx:1.14 --replicas=3
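To confirm the Pods were scheduled across the nodes, a quick check:

kubectl get pods -o wide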

Eleven: Deploy coredns (master1)

1 Introduction

Responsible for providing DNS resolution inside the whole cluster

2 deployment

1 create a directory and download the related yaml file

mkdir coredns
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed

2 modify the file contents

In the Corefile, the network range must match the Service IP address range;

and the clusterIP of the kube-dns Service must be set to the same value as the clusterDNS setting in /opt/kubernetes/cfg/kubelet.config

3 start the service

kubectl apply -f coredns.yaml.sed

4 view the service

kubectl get pods -n kube-system

3 complete configuration file apiVersion: v1kind: ServiceAccountmetadata: name: coredns namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1beta1kind: ClusterRolemetadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsrules:- apiGroups:-"" resources:-endpoints-services-pods-namespaces verbs:-list-watch- apiGroups:-"" resources:-nodes verbs:-get---apiVersion: rbac.authorization.k8s. Io/v1beta1kind: ClusterRoleBindingmetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:corednssubjects:- kind: ServiceAccount name: coredns namespace: kube-system---apiVersion: v1kind: ConfigMapmetadata: name: coredns namespace: kube-systemdata: Corefile:.: 53 {errors health kubernetes cluster. Local 10.0.0.0amp 24 {# here is the network address range of service pods insecure upstream fallthrough in-addr.arpa ip6.arpa} prometheus: 9153 proxy. / etc/resolv.conf cache 30 loop reload loadbalance}-apiVersion: extensions/v1beta1kind: Deploymentmetadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS" spec: replicas: 2 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: K8s-app: kube-dns spec: serviceAccountName: coredns tolerations:-key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: beta.kubernetes.io/os: linux containers:-name: coredns image: coredns/coredns:1.3.0 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: Cpu: 100m memory: 70Mi args: ["- conf" "/ etc/coredns/Corefile"] volumeMounts:-name: config-volume mountPath: / etc/coredns readOnly: true ports:-containerPort: 53 name: dns protocol: UDP-containerPort: 53 name: dns-tcp protocol: TCP-containerPort: 9153 name: metrics protocol: TCP SecurityContext: allowPrivilegeEscalation: false capabilities: add:-NET_BIND_SERVICE drop:-all readOnlyRootFilesystem: true livenessProbe: httpGet: path: / health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 FailureThreshold: 5 dnsPolicy: Default volumes:-name: config-volume configMap: name: coredns items:-key: Corefile path: Corefile---apiVersion: v1kind: Servicemetadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app : kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.0.0.10 # this is the address of DNS In service's cluster network, ports:-name: dns port: 53 protocol: UDP-name: dns-tcp port: 53 protocol: TCP-name: metrics port: 9153 protocol: TCP4 Verification 1 enters any of the above nginx

2 install dig inside the container

apt-get update

apt-get install dnsutils

3 check the result

dig kubernetes.default.svc.cluster.local @10.0.0.10

Twelve: Deploy the dashboard (master1)

1 Introduction

Used to provide graphical interfaces for clusters

2 deploy the service

1 create a folder
mkdir dashboard

2 enter it and download the configuration file
cd dashboard/
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

3 deploy
kubectl apply -f kubernetes-dashboard.yaml

4 view which node it runs on
kubectl get pods -o wide -n kube-system

5 on that node, pull the related image and tag it

docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

6 view the results

7 delete and redeploy

kubectl delete -f kubernetes-dashboard.yaml
kubectl apply -f kubernetes-dashboard.yaml

View mode

8 modify the service type

kubectl edit svc -n kube-system kubernetes-dashboard

View the result

9 open https://192.168.1.30:45201/ # it must be https, and the port is the NodePort shown above

Add exception

3 add secret and token authentication

1 create a serviceaccount

kubectl create serviceaccount dashboard-admin -n kube-system

2 bind dashboard-admin to the cluster-admin role through RBAC so that it can view everything

kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

3 view the secret token
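The token lives in the secret created for the serviceaccount; a sketch (the exact secret name is the one reported by the first command):

kubectl get secret -n kube-system | grep dashboard-admin
kubectl describe secret -n kube-system dashboard-admin-token-xxxxx    # placeholder name, use the one listed above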

4 fill token into the corresponding space

Thirteen: Install and configure the ingress service (master1)

1 configure and deploy ingress

1 create a directory and download the configuration file

mkdir ingress
cd ingress
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.20.0/deploy/mandatory.yaml

2 deploy and pull the related images

kubectl apply -f mandatory.yaml

View the running node

kubectl get pods -n ingress-nginx -o wide

Pull the mirror image to the specified node

docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5

docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5 k8s.gcr.io/defaultbackend-amd64:1.5

Delete and redeploy

kubectl delete -f mandatory.yaml
kubectl apply -f mandatory.yaml

View

kubectl get pods -n ingress-nginx

If the pods cannot be created here, it is probably an apiserver admission problem: remove SecurityContextDeny and ServiceAccount from enable-admission-plugins in /opt/kubernetes/cfg/kube-apiserver, restart the apiserver, and then redeploy.

2 view and verify

1 create a Service to expose the ports

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  clusterIP: 10.0.0.100
  ports:
  - port: 80
    name: http
    nodePort: 30080
  - port: 443
    name: https
    nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx

2 deploy and view

kubectl apply -f service.yaml

View

Verification

3 configure the default backend and verify it

Configure the default back-end site

# cat ingress/nginx.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: default-backend-nginx
  namespace: default
spec:
  backend:
    serviceName: nginx
    servicePort: 80

Deployment

kubectl apply -f ingress/nginx.yaml

View

Fourteen: Deploy prometheus system monitoring (master1)

1 deploy metrics-server

1 download the related configuration files

mkdir metrics-server
cd metrics-server/
yum -y install git
git clone https://github.com/kubernetes-incubator/metrics-server.git

2 modify the related configuration

# metrics-server-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: registry.cn-beijing.aliyuncs.com/minminmsn/metrics-server:v0.3.1
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

3 add a service for public network access

# service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: metrics-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
  - hosts:
    - metrics.minminmsn.com
    secretName: ingress-secret
  rules:
  - host: metrics.minminmsn.com
    http:
      paths:
      - path: /
        backend:
          serviceName: metrics-server
          servicePort: 443

2 configure the relevant apiserver options and restart (on both master1 and master2)

# /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.10:2379,https://192.168.1.20:2379,https://192.168.1.30:2379 \
--bind-address=192.168.1.10 \
--secure-port=6443 \
--advertise-address=192.168.1.10 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
# add the following configuration
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--requestheader-allowed-names=aggregator \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--proxy-client-cert-file=/opt/kubernetes/ssl/kube-proxy.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/kube-proxy-key.pem"

Restart apiserver

systemctl restart kube-apiserver.service

3 deploy the configuration

cd metrics-server/deploy/1.8+/
kubectl apply -f .

View configuration

Previously, port 30443 was used to map port 443, which requires https access.
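To confirm the metrics API is registered and serving data, a sketch (assuming the aggregation flags above are in place and the pod is running):

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes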

2 deploy prometheus

1 download and deploy the namespace

git clone https://github.com/iKubernetes/k8s-prom.git
cd k8s-prom/
kubectl apply -f namespace.yaml

2 deploy node_exporter

cd node_exporter/
kubectl apply -f .

View

kubectl get pods -n prom

3 deploy prometheus cd.. / prometheus/ # prometheus-deploy.yaml # to remove limit restrictions The results are as follows-apiVersion: apps/v1kind: Deploymentmetadata: name: prometheus-server namespace: app: prometheusspec: replicas: 1 selector: matchLabels: app: prometheus component: server # matchExpressions: #-{key: app, operator: In, values: [prometheus]} #-{key: component, operator: In Values: [server]} template: metadata: labels: app: prometheus component: server annotations: prometheus.io/scrape: 'false' spec: serviceAccountName: prometheus containers:-name: prometheus image: prom/prometheus:v2.2.1 imagePullPolicy: Always command:-prometheus-config.file=/etc/prometheus/ Prometheus.yml-- storage.tsdb.path=/prometheus-- storage.tsdb.retention=720h ports:-containerPort: 9090 protocol: TCP volumeMounts:-mountPath: / etc/prometheus/prometheus.yml name: prometheus-config subPath: prometheus.yml-mountPath: / prometheus/ name: prometheus-storage-volume volumes: -name: prometheus-config configMap: name: prometheus-config items:-key: prometheus.yml path: prometheus.yml mode: 0644-name: prometheus-storage-volume emptyDir: {} # deploy kubectl apply-f.

View

kubectl get pods -n prom

Verification

4 deploy kube-state-metrics (API integration)

cd ../kube-state-metrics/
kubectl apply -f .

View deployment nodes

Pull the mirror image to the specified node

docker pull quay.io/coreos/kube-state-metrics:v1.3.1
docker tag quay.io/coreos/kube-state-metrics:v1.3.1 gcr.io/google_containers/kube-state-metrics-amd64:v1.3.1

# redeploy
kubectl delete -f .
kubectl apply -f .

View

5 deploy k8s-prometheus-adapter

Prepare the certificate

cd /opt/kubernetes/ssl/
(umask 077; openssl genrsa -out serving.key 2048)
openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"
openssl x509 -req -in serving.csr -CA ./kubelet.crt -CAkey ./kubelet.key -CAcreateserial -out serving.crt -days 3650
kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n prom

View

kubectl get secret -n prom

Deploy resources

Cd k8s-prometheus-adapter/mv custom-metrics-apiserver-deployment.yaml custom-metrics-apiserver-deployment.yaml.bak wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-apiserver-deployment.yaml# modifies the namespace # custom-metrics-apiserver-deployment.yamlapiVersion: apps/v1kind: Deploymentmetadata: labels: app: custom-metrics-apiserver name: custom-metrics-apiserver namespace: promspec: replicas: 1 selector: matchLabels: App: custom-metrics-apiserver template: metadata: labels: app: custom-metrics-apiserver name: custom-metrics-apiserver spec: serviceAccountName: custom-metrics-apiserver containers:-name: custom-metrics-apiserver image: directxman12/k8s-prometheus-adapter-amd64 args:-secure-port=6443-tls-cert-file=/var/run/serving-cert / serving.crt-tls-private-key-file=/var/run/serving-cert/serving.key-logtostderr=true-prometheus-url= http://prometheus.prom.svc:9090/-metrics-relist-interval=1m-vault 10-config=/etc/adapter/config.yaml ports:-- containerPort: 6443 volumeMounts: -mountPath: / var/run/serving-cert name: volume-serving-cert readOnly: true-mountPath: / etc/adapter/ name: config readOnly: true-mountPath: / tmp name: tmp-vol volumes:-name: volume-serving-cert secret: secretName: cm-adapter-serving-certs-name: Config configMap: name: adapter-config-name: tmp-vol emptyDir: {} wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-config-map.yaml# modifies the namespace # custom-metrics-config-map.yamlapiVersion: v1kind: ConfigMapmetadata: name: adapter-config namespace: promdata: config.yaml: | rules:-seriesQuery:'{_ name__=~ "^ container_.*" Containercontainer names = "POD", namespaceboxes = "", podholders nameplates = ""} 'seriesFilters: [] resources: overrides: namespace: resource: namespace pod_name: pod name: matches: ^ container_ (. *) _ seconds_total$ as: "" metricsQuery: sum (rate ({ Containercontainer names = "POD"} [1m]) by ()-seriesQuery:'{_ _ name__=~ "^ container_.*", container container names = "POD", namespaceboxes = "" SeriesFilters:-isNot: ^ container_.*_seconds_total$ resources: overrides: resource: namespace pod_name: resource: pod name: matches: ^ container_ (. *) _ total$ as: "" metricsQuery: sum (rate ({ Containercontainer names = "POD"} [1m]) by ()-seriesQuery:'{_ _ name__=~ "^ container_.*", container container names = "POD", namespaceboxes = "" SeriesFilters:-isNot: ^ container_.*_total$ resources: overrides: resource: namespace pod_name: resource: pod name: matches: ^ container_ (. *) $as: "" metricsQuery: sum ({, containerized names = "POD"}) by ()-seriesQuery:'{namespaceurs = "" _ _ namespace: "^ container_.*"} 'seriesFilters:-isNot:. * _ total$ resources: template: name: matches: "" as: "" metricsQuery: sum ({}) by ()-seriesQuery:' {namespace = "" _ _ namespace: "^ container_.*"} 'seriesFilters:-isNot:. * _ seconds_total resources: template: name: matches: ^ (. *) _ total$ as: "" metricsQuery: sum (rate ({} [1m])) by ()-seriesQuery:' {namespace = " SeriesFilters: [] resources: template: name: matches: ^ (. 
*) _ seconds_total$ as: "" metricsQuery: sum (rate ({} [1m])) by () resourceRules: cpu: containerQuery: sum (rate (container_cpu_usage_seconds_total {} [1m])) by () NodeQuery: sum (rate (container_cpu_usage_seconds_total { Id='/'} [1m]) by () resources: overrides: instance: resource: node namespace: resource: namespace pod_name: resource: pod containerLabel: containerQuery: sum (container_memory_working_set_bytes {}) by () nodeQuery : sum (container_memory_working_set_bytes { Id='/'}) by () resources: overrides: instance: resource: node namespace: resource: namespace pod_name: resource: pod containerLabel: container_name window: 1m

Deployment

kubectl apply -f custom-metrics-config-map.yaml
kubectl apply -f .

View
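One way to verify that the adapter is exposing the custom metrics API, a sketch:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"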

3 deploy grafana

1 download the related configuration file

wget https://raw.githubusercontent.com/kubernetes-retired/heapster/master/deploy/kube-config/influxdb/grafana.yaml

2 modify the configuration file

1 modify the namespace

2 modify the storage it uses by default

3 modify service namespace

4 modify nodeport for public network access

5 the configuration file is as follows:

# grafana.yaml

ApiVersion: extensions/v1beta1kind: Deploymentmetadata: name: monitoring-grafana namespace: promspec: replicas: 1 template: metadata: labels: task: monitoring k8s-app: grafana spec: containers:-name: grafana image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v5.0.4 ports:-containerPort: 3000 protocol: TCP volumeMounts: -mountPath: / etc/ssl/certs name: ca-certificates readOnly: true-mountPath: / var name: grafana-storage env:#-name: INFLUXDB_HOST# value: monitoring-influxdb-name: GF_SERVER_HTTP_PORT value: "3000" # The following env variables are required to make Grafana accessible via # the kubernetes api-server proxy. On production clusters, we recommend # removing these env variables, setup auth for grafana, and expose the grafana # service using a LoadBalancer or a public IP. -name: GF_AUTH_BASIC_ENABLED value: "false"-name: GF_AUTH_ANONYMOUS_ENABLED value: "true"-name: GF_AUTH_ANONYMOUS_ORG_ROLE value: Admin-name: GF_SERVER_ROOT_URL # If you're only using the API Server proxy Set this value instead: # value: / api/v1/namespaces/kube-system/services/monitoring-grafana/proxy value: / volumes:-name: ca-certificates hostPath: path: / etc/ssl/certs-name: grafana-storage emptyDir: {}-apiVersion: v1kind: Servicemetadata: labels: # For use as a Cluster add-on (https://github. Com/kubernetes/kubernetes/tree/master/cluster/addons) # If you are NOT using this as an addon You should comment out this line. Kubernetes.io/cluster-service: 'true' kubernetes.io/name: monitoring-grafana name: monitoring-grafana namespace: promspec: # In a production setup, we recommend accessing Grafana through an external Loadbalancer # or through a public IP. # type: LoadBalancer # You could also use NodePort to expose the service at a randomly-generated port # type: NodePort ports:-port: 80 targetPort: 3000 selector: k8s-app: grafana type: NodePort3 deployment

kubectl apply -f grafana.yaml

View its running node

View its mapped port
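A sketch, using the service name and namespace from grafana.yaml:

kubectl get svc -n prom monitoring-grafana    # the NodePort column shows the mapped port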

View

4 modify the relevant configuration

5 install the plug-in

1 plug-in location and download

https://grafana.com/dashboards/6417

2 Import plug-in
