How to deploy a Kubernetes cluster with kubeadm


Many newcomers have no idea how to deploy a Kubernetes cluster with kubeadm. This article walks through the whole procedure, from environment preparation to a working cluster, and I hope it helps you solve the problem.

I. Environment requirements

RHEL7.5 is used here

Master, etcd: 192.168.10.101, hostname: master

Node1: 192.168.10.103, hostname: node1

Node2: 192.168.10.104, hostname: node2

All machines must be able to resolve each other by hostname; add these entries to /etc/hosts on each machine:

192.168.10.101 master

192.168.10.103 node1

192.168.10.104 node2

All machine times should be synchronized.

Disable the firewall and SELinux on all machines.

The master must be able to log in to all machines over SSH without a password.
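For reference, a minimal sketch of these preparation steps on RHEL/CentOS 7 might look like this (run as root on every machine; the hostnames in ssh-copy-id are just this lab's values):

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
yum install -y chrony && systemctl enable --now chronyd    # time synchronization

# on the master only: passwordless SSH to the nodes
ssh-keygen -t rsa
ssh-copy-id root@node1
ssh-copy-id root@node2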

[Important note]

Both cluster initialization and node joining pull images from Google's registry (k8s.gcr.io), which cannot be reached from here, so the required images cannot be downloaded directly. I have uploaded the required images to my personal repository on Aliyun.

II. Installation steps

1. etcd cluster: master node only

2. flannel: all nodes in the cluster

3. Configure the k8s master (master node only):

kubernetes-master

Services started: kube-apiserver, kube-scheduler, kube-controller-manager

4. Configure each k8s node:

kubernetes-node

Start the docker service first

k8s services started: kube-proxy, kubelet

Kubeadm

1. master, nodes: install kubelet, kubeadm, docker

2. master: kubeadm init

3. nodes: kubeadm join

https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md

III. Cluster installation

1. Master node installation and configuration

(1) yum source configuration

Version 1.12.0 is used here. Download address: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#downloads-for-v1120

We install with yum here. First configure the yum repositories; for docker, download Aliyun's repo file directly:

[root@master ~]# curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Create the yum source file for kubernetes:

[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

Copy these two repo files to /etc/yum.repos.d/ on the other nodes:

[root@master ~]# for i in 103 104; do scp /etc/yum.repos.d/{docker-ce.repo,kubernetes.repo} root@192.168.10.$i:/etc/yum.repos.d/; done

Import the GPG key for the yum repositories on all machines (ansible is used here):

[root@master ~]# ansible all -m shell -a "curl -O https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg && rpm --import rpm-package-key.gpg"

(2) install docker, kubelet, kubeadm, kubectl

[root@master ~]# yum install docker-ce kubelet kubeadm kubectl -y

(3) Modify the firewall and kernel bridge parameters

[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@master ~]# ansible all -m shell -a "iptables -P FORWARD ACCEPT"

Note: these are temporary changes; the parameters are lost after a reboot.

Permanent modification: /usr/lib/sysctl.d/00-system.conf
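For example, a minimal sketch of the permanent change (dropping a new file under /etc/sysctl.d/ instead of editing 00-system.conf works just as well; the file name here is only illustrative):

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system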

(4) modify the docker service file and start docker

[root@master ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
#Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16"

Add the following to the Service section:

Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16"

Start docker:

[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker

(5) Enable kubelet to start at boot

[root@master ~] # systemctl enable kubelet

(6) initialization

Edit the kubelet configuration file so that the swap check can be ignored:

[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

Perform initialization:

[root@master ~]# kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@master ~]#

The images cannot be pulled because the Google registry is not reachable. Download the images locally in some other way, then run the initialization again.

Image download script: https://github.com/yanyuzm/k8s_images_script

I have uploaded the relevant images to Aliyun. Create and run the following script:

[root@master ~]# vim pull-images.sh
#!/bin/bash
images=(kube-apiserver:v1.12.0 kube-controller-manager:v1.12.0 kube-scheduler:v1.12.0 kube-proxy:v1.12.0 pause:3.1 etcd:3.2.24 coredns:1.2.2)
for ima in ${images[@]}
do
    docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
    docker tag registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima k8s.gcr.io/$ima
    docker rmi -f registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
done
[root@master ~]# sh pull-images.sh

The images used are:

[root@master ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-controller-manager   v1.12.0   07e068033cf2   2 weeks ago    164MB
k8s.gcr.io/kube-apiserver            v1.12.0   ab60b017e34f   2 weeks ago    194MB
k8s.gcr.io/kube-scheduler            v1.12.0   5a1527e735da   2 weeks ago    58.3MB
k8s.gcr.io/kube-proxy                v1.12.0   9c3a9d3f09a0   2 weeks ago    96.6MB
k8s.gcr.io/etcd                      3.2.24    3cab8e1b9802   3 weeks ago    220MB
k8s.gcr.io/coredns                   1.2.2     367cdc8433a4   6 weeks ago    39.2MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   9 months ago   742kB
[root@master ~]#

Reinitialize:

[root@master ~]# kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.10.101 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 71.135592 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: qaqahg.5xbt355fl26wu8tg
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47

[root@master ~]#

OK, initialization succeeded.

The final hint of the output is very important:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47

Master node: follow the prompts to do the following:

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]#

Check it out:

[root@master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@master ~]#

Everything is healthy.

View the cluster nodes:

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   110m   v1.12.1
[root@master ~]#

Only the master node is listed, and it is in the NotReady state because flannel has not been deployed yet.

(7) install flannel

Address: https://github.com/coreos/flannel

Execute the following command:

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@master ~]#

After running the command you may have to wait a while, because the flannel image has to be downloaded.

[root@master ~]# docker images
REPOSITORY                           TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-controller-manager   v1.12.0         07e068033cf2   2 weeks ago    164MB
k8s.gcr.io/kube-apiserver            v1.12.0         ab60b017e34f   2 weeks ago    194MB
k8s.gcr.io/kube-scheduler            v1.12.0         5a1527e735da   2 weeks ago    58.3MB
k8s.gcr.io/kube-proxy                v1.12.0         9c3a9d3f09a0   2 weeks ago    96.6MB
k8s.gcr.io/etcd                      3.2.24          3cab8e1b9802   3 weeks ago    220MB
k8s.gcr.io/coredns                   1.2.2           367cdc8433a4   6 weeks ago    39.2MB
quay.io/coreos/flannel               v0.10.0-amd64   f0fad859c909   8 months ago   44.6MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   9 months ago   742kB
[root@master ~]#

OK, the flannel image has been downloaded. Check the node again:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   155m   v1.12.1
[root@master ~]#

OK, master is now in the Ready state.

If the flannel image cannot be downloaded, you can pull it from Aliyun instead:

docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/flannel:v0.10.0-amd64

After the download succeeds, re-tag the image:

docker tag registry.cn-shenzhen.aliyuncs.com/lurenjia/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

Check out the namespaces:

[root@master ~]# kubectl get ns
NAME          STATUS   AGE
default       Active   158m
kube-public   Active   158m
kube-system   Active   158m
[root@master ~]#

Check the pod of kube-system:

[root@master ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-hfvcq         1/1     Running   0          158m
coredns-576cbf47c7-xcpgd         1/1     Running   0          158m
etcd-master                      1/1     Running   6          132m
kube-apiserver-master            1/1     Running   9          132m
kube-controller-manager-master   1/1     Running   33         132m
kube-flannel-ds-amd64-vqc9h      1/1     Running   3          41m
kube-proxy-z9xrw                 1/1     Running   4          158m
kube-scheduler-master            1/1     Running   33         132m
[root@master ~]#

2. Node installation and configuration

1. Install docker-ce, kubelet, kubeadm

[root@node1 ~]# yum install docker-ce kubelet kubeadm -y
[root@node2 ~]# yum install docker-ce kubelet kubeadm -y

2. Copy the kubelet configuration file from the master to the nodes

[root@master ~]# scp /etc/sysconfig/kubelet 192.168.10.103:/etc/sysconfig/kubelet
100%   42    45.4KB/s   00:00
[root@master ~]# scp /etc/sysconfig/kubelet 192.168.10.104:/etc/sysconfig/kubelet
100%   42     4.0KB/s   00:00
[root@master ~]#

3. Node nodes join the cluster

Start docker, kubelet

[root@node1 ~]# systemctl start docker kubelet
[root@node1 ~]# systemctl enable docker kubelet
[root@node2 ~]# systemctl start docker kubelet
[root@node2 ~]# systemctl enable docker kubelet

The node node joins the cluster:

[root@node1 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules
 2. Provide the missing builtin kernel ipvs support
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@node1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node1 ~]#

If an error is reported, just follow the prompt to fix it (here, set bridge-nf-call-iptables to 1) and run the join command again.

[root@node1 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: (same IPVS warning as above)
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.10.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.101:6443"
[discovery] Requesting info from "https://192.168.10.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.10.101:6443"
[discovery] Successfully established connection with API Server "192.168.10.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@node1 ~]#

OK, node1 joined successfully.

[root@node2 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node2 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: (same IPVS warning as on node1)
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.10.101:6443"
... (same discovery and kubelet output as for node1) ...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@node2 ~]#

OK, node2 joined successfully.

4. Manually download the kube-proxy and pause images on the nodes

Run the following command on each node:

for ima in kube-proxy:v1.12.0 pause:3.1; do docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima && docker tag registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima k8s.gcr.io/$ima && docker rmi -f registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima; done

5. Check the nodes from the master:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   3h20m   v1.12.1
node1    Ready    <none>   18m     v1.12.1
node2    Ready    <none>   17m     v1.12.1
[root@master ~]#

OK, all nodes are in the Ready state. If a node is still abnormal, restart its docker and kubelet services.
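For example, on node1:

[root@node1 ~]# systemctl restart docker kubelet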

View the pod information of kube-system:

[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE
coredns-576cbf47c7-hfvcq         1/1     Running   0          3h21m   10.244.0.3       master   <none>
coredns-576cbf47c7-xcpgd         1/1     Running   0          3h21m   10.244.0.2       master   <none>
etcd-master                      1/1     Running   6          165m    192.168.10.101   master   <none>
kube-apiserver-master            1/1     Running   9          165m    192.168.10.101   master   <none>
kube-controller-manager-master   1/1     Running   33         165m    192.168.10.101   master   <none>
kube-flannel-ds-amd64-bd4d8      1/1     Running   0          21m     192.168.10.103   node1    <none>
kube-flannel-ds-amd64-srhb9      1/1     Running   0          20m     192.168.10.104   node2    <none>
kube-flannel-ds-amd64-vqc9h      1/1     Running   3          74m     192.168.10.101   master   <none>
kube-proxy-8bfvt                 1/1     Running   1          21m     192.168.10.103   node1    <none>
kube-proxy-gz55d                 1/1     Running   1          20m     192.168.10.104   node2    <none>
kube-proxy-z9xrw                 1/1     Running   4          3h21m   192.168.10.101   master   <none>
kube-scheduler-master            1/1     Running   33         165m    192.168.10.101   master   <none>
[root@master ~]#

So far, the cluster has been built successfully. Take a look at the images used for building:

Master node:

[root@master ~]# docker images
REPOSITORY                           TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-controller-manager   v1.12.0         07e068033cf2   2 weeks ago    164MB
k8s.gcr.io/kube-apiserver            v1.12.0         ab60b017e34f   2 weeks ago    194MB
k8s.gcr.io/kube-scheduler            v1.12.0         5a1527e735da   2 weeks ago    58.3MB
k8s.gcr.io/kube-proxy                v1.12.0         9c3a9d3f09a0   2 weeks ago    96.6MB
k8s.gcr.io/etcd                      3.2.24          3cab8e1b9802   3 weeks ago    220MB
k8s.gcr.io/coredns                   1.2.2           367cdc8433a4   6 weeks ago    39.2MB
quay.io/coreos/flannel               v0.10.0-amd64   f0fad859c909   8 months ago   44.6MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   9 months ago   742kB
[root@master ~]#

Node node:

[root@node1 ~]# docker images
REPOSITORY               TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy    v1.12.0         9c3a9d3f09a0   2 weeks ago    96.6MB
quay.io/coreos/flannel   v0.10.0-amd64   f0fad859c909   8 months ago   44.6MB
k8s.gcr.io/pause         3.1             da86e6ba6ca1   9 months ago   742kB
[root@node1 ~]#

[root@node2 ~]# docker images
REPOSITORY               TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy    v1.12.0         9c3a9d3f09a0   2 weeks ago    96.6MB
quay.io/coreos/flannel   v0.10.0-amd64   f0fad859c909   8 months ago   44.6MB
k8s.gcr.io/pause         3.1             da86e6ba6ca1   9 months ago   742kB
[root@node2 ~]#

IV. Cluster application

Run an nginx.

[root@master ~]# kubectl run nginx-deploy --image=nginx --port=80 --replicas=1
deployment.apps/nginx-deploy created
[root@master ~]# kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1         1         1            1           10s
[root@master ~]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE
nginx-deploy-8c5fc574c-d8jxj   1/1     Running   0          18s   10.244.2.4   node2   <none>
[root@master ~]#

Check whether nginx can be accessed from a node:

[root@node1 ~]# curl -I 10.244.2.4
HTTP/1.1 200 OK
Server: nginx/1.15.5
Date: Tue, 16 Oct 2018 12:02:34 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 02 Oct 2018 14:49:27 GMT
Connection: keep-alive
ETag: "5bb38577-264"
Accept-Ranges: bytes

[root@node1 ~]#

It returns 200; the access succeeds.

[root@master ~]# kubectl expose deployment nginx-deploy --name=nginx --port=80 --target-port=80 --protocol=TCP
service/nginx exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   21h
nginx        ClusterIP   10.104.88.59   <none>        80/TCP    51s
[root@master ~]#

Start a busybox:

[root@master ~]# kubectl run client --image=busybox --replicas=1 -it --restart=Never
If you don't see a command prompt, try pressing enter.
/ #
/ # wget -O - -q http://nginx:80
Welcome to nginx!
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org. Commercial support is available at nginx.com.
Thank you for using nginx.
/ #

Delete and rebuild:

[root@master ~]# kubectl delete svc nginx
service "nginx" deleted
[root@master ~]# kubectl expose deployment nginx-deploy --name=nginx
service/nginx exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   22h
nginx        ClusterIP   10.110.52.68   <none>        80/TCP    8s
[root@master ~]#

Create multiple copies:

[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=2
deployment.apps/myapp created
[root@master ~]#
[root@master ~]# kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp          2         2         2            2           49s
nginx-deploy   1         1         1            1           36m
[root@master ~]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP           NODE
client                         1/1     Running   0          3m49s   10.244.2.6   node2
myapp-6946649ccd-knd8r         1/1     Running   0          78s     10.244.2.7   node2
myapp-6946649ccd-pfl2r         1/1     Running   0          78s     10.244.1.6   node1
nginx-deploy-8c5fc574c-5bjjm   1/1     Running   0          12m     10.244.1.5   node1
[root@master ~]#

Create a service for myapp:

[root@master ~]# kubectl expose deployment myapp --name=myapp --port=80
service/myapp exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   22h
myapp        ClusterIP   10.110.238.138   <none>        80/TCP    11s
nginx        ClusterIP   10.110.52.68     <none>        80/TCP    9m37s
[root@master ~]#

Scale myapp to 5 replicas:

[root@master ~]# kubectl scale --replicas=5 deployment myapp
deployment.extensions/myapp scaled
[root@master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
client                         1/1     Running   0          5m24s
myapp-6946649ccd-6kqxt         1/1     Running   0          8s
myapp-6946649ccd-7xj45         1/1     Running   0          8s
myapp-6946649ccd-8nh9q         1/1     Running   0          8s
myapp-6946649ccd-knd8r         1/1     Running   0          11m
myapp-6946649ccd-pfl2r         1/1     Running   0          11m
nginx-deploy-8c5fc574c-5bjjm   1/1     Running   0          23m
[root@master ~]#

Modify the myapp service:

[root@master ~]# kubectl edit svc myapp
    type: NodePort

Change type to NodePort.
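If you prefer not to use the interactive editor, the same change can be made with a patch (an equivalent alternative, not the step used above):

[root@master ~]# kubectl patch svc myapp -p '{"spec":{"type":"NodePort"}}'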

[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        23h
myapp        NodePort    10.110.238.138   <none>        80:30937/TCP   35m
nginx        ClusterIP   10.110.52.68     <none>        80/TCP         44m
[root@master ~]#

The NodePort is 30937; open 192.168.10.101:30937 from a physical machine.
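For example, with curl from any machine that can reach the node (the NodePort value will differ in your environment):

[root@master ~]# curl -I http://192.168.10.101:30937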

OK, it can be accessed.

V. Cluster resources

1. Resource type

Resources (instantiated as objects) mainly include:

Workload: Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob.

Service discovery and load balancing: Service, Ingress.

Configuration and storage: Volume, CSI; in particular ConfigMap, Secret, DownwardAPI.

Cluster-level resources: Namespace, Node, Role, ClusterRole, RoleBinding, ClusterRoleBinding.

Metadata resources: HPA, PodTemplate, LimitRange.

2. How to create resources:

The apiserver only accepts resource definitions in JSON format.

A manifest can be written in YAML; it is automatically converted to JSON and then submitted.

Most resource manifests contain:

apiVersion: group/version, which can be listed with kubectl api-versions

kind: resource category

metadata: metadata (name, namespace, labels, annotations). The reference PATH of each resource is /api/GROUP/VERSION/namespaces/NAMESPACE/TYPE/NAME, for example /api/v1/namespaces/default/pods/myapp-6946649ccd-c6m9b

spec: expected state (desired state)

status: current state; this field is maintained by the kubernetes cluster

To view the definition of a resource, for example a pod:

[root@master ~]# kubectl explain pod
KIND:     Pod
VERSION:  v1
DESCRIPTION:
...

Example of pod resource definition:

[root@master ~]# mkdir maniteste
[root@master ~]# vim maniteste/pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 5"

Create a resource:

[root@master ~]# kubectl create -f maniteste/pod-demo.yaml
[root@master ~]# kubectl describe pods pod-demo
Name:               pod-demo
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node2/192.168.10.104
Start Time:         Wed, 17 Oct 2018 19:54:03 +0800
Labels:             app=myapp
                    tier=frontend
Annotations:        <none>
Status:             Running
IP:                 10.244.2.26

View the log:

[root@master ~]# curl 10.244.2.26
Hello MyApp | Version: v1 | Pod Name
[root@master ~]# kubectl logs pod-demo myapp
10.244.0.0 - - [17/Oct/2018:11:56:49 +0000] "GET / HTTP/1.1" ...

This one pod runs two containers.

Delete the pod: kubectl delete -f maniteste/pod-demo.yaml

VI. Pod controller

1. View the container definition fields of a pod: kubectl explain pods.spec.containers

Resource manifest format:

The manifest of an autonomous Pod resource has these first-level fields: apiVersion (group/version), kind, metadata (name, namespace, labels, annotations, ...), spec, status (read-only).

Pod resource: spec.containers
- name
  image
  imagePullPolicy: Always | Never | IfNotPresent

2. Label:

key=value; the key consists of letters, numbers, _, -, and '.'. The value can be empty, must begin and end with a letter or number, and may contain letters, numbers, _, -, and '.' in the middle.

Label:

[root@master ~]# kubectl get pods -l app --show-labels
NAME       READY   STATUS              RESTARTS   AGE     LABELS
pod-demo   0/2     ContainerCreating   0          4m46s   app=myapp,tier=frontend
[root@master ~]# kubectl label pods pod-demo release=
pod/pod-demo labeled
[root@master ~]# kubectl get pods -l app --show-labels
NAME       READY   STATUS              RESTARTS   AGE     LABELS
pod-demo   0/2     ContainerCreating   0          5m27s   app=myapp,release=,tier=frontend
[root@master ~]#
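A label can also be overwritten or removed later; for example (the values here are only illustrative):

[root@master ~]# kubectl label pods pod-demo release=stable --overwrite
[root@master ~]# kubectl label pods pod-demo release-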

Look at the pod with a tag:

[root@master ~]# kubectl get pods -l app,release
NAME       READY   STATUS              RESTARTS   AGE
pod-demo   0/2     ContainerCreating   0          7m43s
[root@master ~]#

Tag selector:

Equality-based: =, ==, !=

For example: kubectl get pods -l release=stable

Set-based: KEY in (VALUE1,VALUE2,...), KEY notin (VALUE1,VALUE2,...), KEY, !KEY

[root@master ~]# kubectl get pods -l "release notin (stable,)"
NAME                           READY   STATUS    RESTARTS   AGE
client                         0/1     Error     0          46h
myapp-6946649ccd-2lncx         1/1     Running   2          46h
nginx-deploy-8c5fc574c-5bjjm   1/1     Running   2          46h
[root@master ~]#

Many resources support embedded fields to define the tag selector they use:

matchLabels: give the key/value pairs directly.

matchExpressions: define the label selector with an expression of the form {key: "KEY", operator: "OPERATOR", values: [VAL1,VAL2,...]}.

Operators: for In and NotIn the values field must be a non-empty list; for Exists and DoesNotExist the values field must be empty.
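As a hedged illustration (not taken from the manifests used in this article), a selector using matchExpressions inside a Deployment or ReplicaSet spec could look like this:

selector:
  matchExpressions:
  - {key: app, operator: In, values: [myapp]}
  - {key: release, operator: Exists}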

3. nodeSelector: node label selector

nodeName

Label a node, such as:

[root@master ~]# kubectl label nodes node1 disktype=ssd
node/node1 labeled
[root@master ~]#

Modify the yaml file:

[root@master ~]# vim maniteste/pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 5"
  nodeSelector:
    disktype: ssd

Recreate:

[root@master ~]# kubectl delete pods pod-demo
pod "pod-demo" deleted
[root@master ~]# kubectl create -f maniteste/pod-demo.yaml
pod/pod-demo created
[root@master ~]#

4. annotations

Unlike labels, annotations cannot be used to select resource objects; they only attach extra "metadata" to objects.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    .com/create_by: "hello world"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeSelector:
    disktype: ssd

5. Pod life cycle

States: Pending, Running, Failed, Succeeded, Unknown.

Important behaviors in the pod lifecycle: init containers and container probes (liveness, readiness).

restartPolicy: Always, OnFailure, Never. The default is Always.

Probe types: ExecAction, TCPSocketAction, HTTPGetAction.

For example, ExecAction:

[root@master ~]# vim liveness-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test", "-e", "/tmp/healthy"]
      initialDelaySeconds: 2
      periodSeconds: 3

Create:

[root@master ~]# kubectl create -f liveness-exec.yaml
pod/liveness-exec-pod created
[root@master ~]# kubectl get pods -w
NAME                           READY   STATUS    RESTARTS   AGE
client                         0/1     Error     0          3d
liveness-exec-pod              1/1     Running   3          3m
myapp-6946649ccd-2lncx         1/1     Running   4          3d
nginx-deploy-8c5fc574c-5bjjm   1/1     Running   4          3d
liveness-exec-pod              1/1     Running   4          4m

For example, HTTPGetAction:

[root@master ~]# vim liveness-httpGet.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
[root@master ~]# kubectl create -f liveness-httpGet.yaml
pod/liveness-httpget-pod created
[root@master ~]#
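To watch this probe in action you can, for example, delete the page it checks and watch the RESTARTS counter climb (this assumes the ikubernetes/myapp image serves files from /usr/share/nginx/html, which may differ):

[root@master ~]# kubectl exec -it liveness-httpget-pod -- rm -f /usr/share/nginx/html/index.html
[root@master ~]# kubectl get pods -w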

Readiness:

[root@master ~]# vim readiness-httget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3

Container life cycle-poststart example:

[root@master ~]# vim poststart-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-pod
  namespace: default
spec:
  containers:
  - name: busybox-httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Home_Page >> /tmp/index.html"]
    #command: ['/bin/sh', '-c', 'sleep 3600']
    command: ["/bin/httpd"]
    args: ["-f", "-h /tmp"]
[root@master ~]# kubectl create -f poststart-pod.yaml
pod/poststart-pod created
[root@master ~]#

In practice, of course, you would not use the /tmp directory as the site root.

6. Pod controller

There are several types of pod controllers:

ReplicaSet: creates a specified number of pod replicas on behalf of the user, ensures the number of replicas matches the desired state, and supports automatic scaling up and down.

ReplicaSet consists of three main components:

(1) the number of pod replicas the user expects;

(2) a label selector that determines which pods it manages;

(3) a pod template used to create new pods when the existing ones are not enough.

ReplicaSet helps users manage stateless pod resources and keeps the actual replica count at the user-defined target, but it is usually not used directly; Deployment is used on top of it.

Deployment: works on top of ReplicaSet to manage stateless applications and is by far the most commonly used controller. It supports rolling updates, rollbacks, and declarative configuration.

DaemonSet: ensures that each node in the cluster runs exactly one copy of a specific pod; typically used for system-level background tasks, such as log collection for an ELK stack.

Characteristics: the service is stateless and must run as a daemon.

Job: runs a task to completion and then exits; there is no need to restart or rebuild it (see the sketch after this list).

CronJob: periodic task control; the task does not need to run continuously in the background.

StatefulSet: manages stateful applications.
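As a small sketch of a Job (the name, image and command below are only illustrative and not part of this cluster's setup):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never
  backoffLimit: 4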

ReplicaSet (rs) example:

[root@master ~]# kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp          1         1         1            1           4d
nginx-deploy   1         1         1            1           4d1h
[root@master ~]# kubectl delete deploy myapp
deployment.extensions "myapp" deleted
[root@master ~]# kubectl delete deploy nginx-deploy
deployment.extensions "nginx-deploy" deleted
[root@master ~]#
[root@master ~]# vim rs-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-conatainer
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master ~]# kubectl create -f rs-demo.yaml
replicaset.apps/myapp created

View the label:

[root@master ~]# kubectl get pods --show-labels
NAME                    READY   STATUS    RESTARTS   AGE    LABELS
client                  0/1     Error     0          4d     run=client
liveness-httpget-pod    1/1     Running   1          107m   <none>
myapp-fspr7             1/1     Running   0          75s    app=myapp,environment=qa,release=canary
myapp-ppxrw             1/1     Running   0          75s    app=myapp,environment=qa,release=canary
pod-demo                2/2     Running   0          3s     app=myapp,tier=frontend
readiness-httpget-pod   1/1     Running   0          86m    <none>
[root@master ~]#

Give pod-demo a label release=canary:

[root@master ~]# kubectl label pods pod-demo release=canary
pod/pod-demo labeled

Deployment example:

[root@master ~]# kubectl delete rs myapp
replicaset.extensions "myapp" deleted
[root@master ~]#
[root@master ~]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master ~]# kubectl create -f deploy-demo.yaml
deployment.apps/myapp-deploy created
[root@master ~]#
[root@master ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
client                          0/1     Error     0          4d20h
liveness-httpget-pod            1/1     Running   2          22h
myapp-deploy-574965d786-5x42g   1/1     Running   0          70s
myapp-deploy-574965d786-dqzpd   1/1     Running   0          70s
pod-demo                        2/2     Running   0          20h
readiness-httpget-pod           1/1     Running   1          21h
[root@master ~]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
myapp-deploy-574965d786   2         2         2       93s
[root@master ~]#

To change the number of replicas, edit deploy-demo.yaml and run kubectl apply -f deploy-demo.yaml.

Or: kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}', which here scales it to 5 replicas.

Modify other properties, such as:

[root@master ~]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.extensions/myapp-deploy patched
[root@master ~]#

Update the version:

[root@master ~]# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy
deployment.extensions/myapp-deploy image updated
deployment.extensions/myapp-deploy paused
[root@master ~]#
[root@master ~]# kubectl rollout status deployment myapp-deploy
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 2 new replicas have been updated...
[root@master ~]# kubectl rollout resume deployment myapp-deploy
deployment.extensions/myapp-deploy resumed
[root@master ~]#

Version rollback:

[root@master ~]# kubectl rollout undo deployment myapp-deploy --to-revision=1
deployment.extensions/myapp-deploy
[root@master ~]#

DaemonSet example:

On node1 and node2, run: docker pull ikubernetes/filebeat:5.6.5-alpine

Edit the yaml file:

[root@master ~]# vim ds-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
[root@master ~]# kubectl apply -f ds-demo.yaml
daemonset.apps/myapp-ds created
[root@master ~]#

Modify the yaml file:

[root@master ~]# vim ds-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
[root@master ~]# kubectl delete -f ds-demo.yaml
[root@master ~]# kubectl apply -f ds-demo.yaml
deployment.apps/redis created
daemonset.apps/filebeat-ds created
[root@master ~]#

Expose redis ports:

[root@master ~]# kubectl expose deployment redis --port=6379
service/redis exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        5d20h
myapp        NodePort    10.110.238.138   <none>        80:30937/TCP   4d21h
nginx        ClusterIP   10.110.52.68     <none>        80/TCP         4d21h
redis        ClusterIP   10.97.196.222    <none>        6379/TCP       11s
[root@master ~]#

Enter redis:

[root@master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
redis-664bbc646b-sg6wk   1/1     Running   0          2m55s
[root@master ~]# kubectl exec -it redis-664bbc646b-sg6wk -- /bin/sh
/data # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN
tcp        0      0 :::6379                 :::*                    LISTEN
/data # nslookup redis.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      redis.default.svc.cluster.local
Address 1: 10.97.196.222 redis.default.svc.cluster.local
/data #
/data # redis-cli -h redis.default.svc.cluster.local
redis.default.svc.cluster.local:6379> keys *
(empty list or set)
redis.default.svc.cluster.local:6379>

Enter filebeat:

[root@master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
client                   0/1     Error     0          4d21h
filebeat-ds-bszfz        1/1     Running   0          6m2s
filebeat-ds-w5nzb        1/1     Running   0          6m2s
redis-664bbc646b-sg6wk   1/1     Running   0          6m2s
[root@master ~]# kubectl exec -it filebeat-ds-bszfz -- /bin/sh
/ # printenv
/ # nslookup redis.default.svc.cluster.local
/ # kill -1 1

Update the image:

[root@master ~]# kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine
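You can then check the progress of the DaemonSet update, for example:

[root@master ~]# kubectl rollout status daemonset filebeat-ds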

VII. Service resources

Service is one of the core resource objects in kubernetes; a Service can be understood as a "microservice" in a microservices architecture.

Simply put, a Service is essentially a group of pods acting as one unit. Service and pods are tied together by labels: the pods behind the same Service carry the same labels. All pods under a Service are load-balanced by kube-proxy, and each Service is assigned a globally unique virtual IP, the cluster IP. Throughout the lifetime of the Service the cluster IP does not change, and the DNS service inside kubernetes resolves the Service name to this cluster IP.

Working modes: userspace, iptables, ipvs.

Types: ExternalName, ClusterIP, NodePort, LoadBalancer.

DNS resource record format: SVC_NAME.NS_NAME.DOMAIN.LTD.; the default domain is svc.cluster.local., for example redis.default.svc.cluster.local.
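In a kubeadm cluster the kube-proxy working mode is recorded in its ConfigMap; a quick way to check it (assuming the default ConfigMap name kube-proxy):

[root@master ~]# kubectl get configmap kube-proxy -n kube-system -o yaml | grep mode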

[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        5d20h
myapp        NodePort    10.110.238.138   <none>        80:30937/TCP   4d22h
nginx        ClusterIP   10.110.52.68     <none>        80/TCP         4d22h
redis        ClusterIP   10.97.196.222    <none>        6379/TCP       29m
[root@master ~]# kubectl delete svc redis
[root@master ~]# kubectl delete svc nginx
[root@master ~]# kubectl delete svc myapp
[root@master ~]# vim redis-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  selector:
    app: redis
    role: logstor
  clusterIP: 10.97.97.97
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
[root@master ~]# kubectl apply -f redis-svc.yaml
service/redis created
[root@master ~]#

NodePort:

[root@master ~]# vim myapp-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: 10.99.99.99
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
[root@master ~]# kubectl apply -f myapp-svc.yaml
service/myapp created
[root@master ~]#
[root@master ~]# kubectl patch svc myapp -p '{"spec":{"sessionAffinity":"ClientIP"}}'
service/myapp patched
[root@master ~]#

Headless service (no cluster IP is assigned):

[root@master ~]# vim myapp-svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: "None"
  ports:
  - port: 80
    targetPort: 80
[root@master ~]# kubectl apply -f myapp-svc-headless.yaml
service/myapp-svc created
[root@master ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   5d21h
[root@master ~]# dig -t A myapp-svc.default.svc.cluster.local. @10.96.0.10

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7_5.1 <<>> -t A myapp-svc.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER ...
