Several official Kubernetes deployment methods
Minikube
Minikube is a tool that quickly runs a single-node Kubernetes cluster locally, intended for trying out Kubernetes or for day-to-day development. It is not suitable for production use.
Official address: https://kubernetes.io/docs/setup/minikube/
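For orientation only (not used in this article), a local test cluster is typically started with something like the following; this assumes minikube and a supported VM driver or container runtime are already installed:
# minikube start          # create and start a local single-node cluster
# minikube status         # check the cluster components
# kubectl get nodes       # the single node should report Ready
# minikube delete         # tear the cluster down when finished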
Kubeadm
Kubeadm is a tool that provides the kubeadm init and kubeadm join commands for rapidly deploying Kubernetes clusters.
Official address: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
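For comparison only (this article deploys from binaries instead), a kubeadm-based setup looks roughly like this; the pod CIDR, token, and hash below are placeholders, not values from this guide:
# kubeadm init --pod-network-cidr=10.244.0.0/16      # run on the master; prints a join command
# kubeadm join 192.168.31.63:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>   # run on each node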
Binary packages
Download the officially released binaries, deploy each component manually, and assemble them into a Kubernetes cluster.
Summary:
For a production Kubernetes cluster, only kubeadm and the binary packages are realistic options. Kubeadm lowers the deployment threshold but hides many details, which makes troubleshooting harder. This article deploys the cluster from binaries, and I recommend this approach: manual deployment is more work, but you learn how the components fit together, which pays off in later maintenance.
Software environment:
- Operating system: CentOS 7.5 x64
- Docker: 18-ce
- Kubernetes: 1.12

Server roles:
- master (192.168.31.63): kube-apiserver, kube-controller-manager, kube-scheduler, etcd
- node1 (192.168.31.65): kubelet, kube-proxy, docker, flannel, etcd
- node2 (192.168.31.66): kubelet, kube-proxy, docker, flannel, etcd
Architecture diagram (image not reproduced here)
1. Deploy the Etcd cluster
We use cfssl to generate self-signed certificates. Download the cfssl tools first:
# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
# mv cfssl_linux-amd64 /usr/local/bin/cfssl
# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
1.1 generate certificates
Create the following three files:
# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

# cat ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}

# cat server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.31.63",
    "192.168.31.65",
    "192.168.31.66"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
Generate a certificate:
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# ls *pem
ca-key.pem  ca.pem  server-key.pem  server.pem
As for certificates, for now it is enough to know how to generate and use them; there is no need to study the details too deeply at this stage.
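If you want a quick sanity check of what was issued, the server certificate can be inspected with the cfssl-certinfo tool installed above, or with openssl (both optional):
# cfssl-certinfo -cert server.pem            # shows the subject, SAN hosts and validity period
# openssl x509 -in server.pem -noout -text   # equivalent view with openssl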
1.2 deploy Etcd
Download address of binary package: https://github.com/coreos/etcd/releases/tag/v3.2.12
The following deployment steps are the same on all three planned etcd nodes; the only difference is that the IP addresses in each node's etcd configuration file are set to that node's own address:
Decompress the binary package:
# mkdir /opt/etcd/{bin,cfg,ssl} -p
# tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
# mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
Create the etcd configuration file:
# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.63:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.63:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.63:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.63:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.31.63:2380,etcd02=https://192.168.31.65:2380,etcd03=https://192.168.31.66:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Variable descriptions:
ETCD_NAME: node name
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: cluster (peer) communication listen address
ETCD_LISTEN_CLIENT_URLS: client access listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
ETCD_ADVERTISE_CLIENT_URLS: client advertise address
ETCD_INITIAL_CLUSTER: cluster node addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" means a new cluster, "existing" means joining an existing cluster
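For reference, on the other two etcd nodes only ETCD_NAME and the local IP change, while ETCD_INITIAL_CLUSTER stays identical on every node. A sketch for node1 (192.168.31.65), assuming the names etcd02/etcd03 from the cluster list above:
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.65:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.65:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.65:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.65:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.31.63:2380,etcd02=https://192.168.31.65:2380,etcd03=https://192.168.31.66:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"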
Manage etcd with systemd:
# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Copy the certificates you just generated to the paths referenced in the configuration file:
# cp ca*pem server*pem /opt/etcd/ssl
Start etcd and enable it at boot:
# systemctl start etcd
# systemctl enable etcd
After the deployment is complete, check the status of the etcd cluster:
# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.31.63:2379,https://192.168.31.65:2379,https://192.168.31.66:2379" \
cluster-health
member 18218cfabd4e0dea is healthy: got healthy result from https://192.168.31.63:2379
member 541c1c40994c939b is healthy: got healthy result from https://192.168.31.65:2379
member a342ea2798d20705 is healthy: got healthy result from https://192.168.31.66:2379
cluster is healthy
If you see output like the above, the cluster was deployed successfully. If there is a problem, the first step is to check the logs: /var/log/messages or journalctl -u etcd
2. Install Docker on the Node nodes
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
# yum install docker-ce -y
Optionally configure a registry image mirror for faster pulls:
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
# systemctl start docker
# systemctl enable docker
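A quick optional check that Docker is installed and running on each node:
# systemctl status docker    # should show active (running)
# docker version             # prints client and server versions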
3. Deploy the Flannel network
How it works: (diagram not reproduced here)
Flannel uses etcd to store its subnet information, so make sure you can connect to etcd successfully and write the predefined subnet range:
# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.31.63:2379,https://192.168.31.65:2379,https://192.168.31.66:2379" \
set /coreos.com/network/config '{"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
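You can optionally read the key back to confirm the subnet configuration was stored (same etcdctl flags as above):
# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.31.63:2379,https://192.168.31.65:2379,https://192.168.31.66:2379" \
get /coreos.com/network/config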
The following deployment steps are performed on each planned node node.
Download the binary package:
# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin
Configure Flannel:
# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.31.63:2379,https://192.168.31.65:2379,https://192.168.31.66:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Manage Flannel with systemd:
# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
Configure Docker to use the subnet assigned by Flannel:
# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
Restart flannel and docker:
# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart docker
Check whether it is in effect:
# ps -ef | grep docker
root 20941 1 1 Jun28 ? 09:15:34 /usr/bin/dockerd --bip=172.17.34.1/24 --ip-masq=false --mtu=1450
# ip addr
3607: flannel.1: mtu 1450 qdisc noqueue state UNKNOWN
    link/ether 8a:2e:3d:09:dd:82 brd ff:ff:ff:ff:ff:ff
    inet 172.17.34.0 scope global flannel.1
       valid_lft forever preferred_lft forever
3608: docker0: mtu 1450 qdisc noqueue state UP
    link/ether 02:42:31:8f:d3:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.34.1/24 brd 172.17.34.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:31ff:fe8f:d302/64 scope link
       valid_lft forever preferred_lft forever
Make sure that docker0 and flannel.1 are on the same network segment.
Test connectivity between nodes: from the current node, ping the docker0 IP of another Node node:
# ping 172.17.58.1
PING 172.17.58.1 (172.17.58.1) 56(84) bytes of data.
64 bytes from 172.17.58.1: icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from 172.17.58.1: icmp_seq=2 ttl=64 time=0.204 ms
If the ping succeeds, the Flannel deployment is working. If it fails, check the log: journalctl -u flanneld
4. Deploy components on the Master node
Make sure that etcd, flannel, and docker are working properly before deploying Kubernetes, otherwise solve the problem before continuing.
4.1 generate certificates
Create a CA certificate:
# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Generate an apiserver certificate:
# cat server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.31.63",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
Generate a kube-proxy certificate:
# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Finally, the following certificate files are generated:
# ls *pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
4.2 deploy the apiserver component
Download binary package: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md
Just download this package (kubernetes-server-linux-amd64.tar.gz), which contains all the components you need.
# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
# tar zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin
Create a token file (its purpose is explained later); a way to generate the random token is shown after the column descriptions below:
# cat /opt/kubernetes/cfg/token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
First column: a random string; you can generate it yourself
Second column: user name
Third column: UID
Fourth column: user group
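One common way to generate such a random token yourself (an assumption on my part, not required by this guide) is:
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '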
Create the apiserver configuration file:
# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.31.63:2379,https://192.168.31.65:2379,https://192.168.31.66:2379 \
--bind-address=192.168.31.63 \
--secure-port=6443 \
--advertise-address=192.168.31.63 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
The configuration references the certificates generated earlier; make sure apiserver can connect to etcd with them.
Parameter description:
--logtostderr: log to standard error
--v: log level
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: cluster advertise address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authentication and authorization mode; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enables the TLS bootstrap feature, discussed later
--token-auth-file: token file
--service-node-port-range: default port range allocated to NodePort Services
Manage apiserver with systemd:
# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start:
# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl restart kube-apiserver
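A quick optional check that apiserver came up and is listening on the secure port:
# systemctl status kube-apiserver    # should show active (running)
# ss -lntp | grep 6443               # the secure port from the configuration above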
4.3 deploy the scheduler component
Create the scheduler configuration file:
# cat /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
Parameter description:
--master: connect to the local apiserver
--leader-elect: when multiple instances of this component run, elect a leader automatically (for HA)
Manage the scheduler with systemd:
# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start:
# systemctl daemon-reload
# systemctl enable kube-scheduler
# systemctl restart kube-scheduler
4.4 deploy the controller-manager component
Create the controller-manager configuration file:
# cat /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
Manage controller-manager with systemd:
# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start:
# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl restart kube-controller-manager
All master components are now started. Check the current status of the cluster components with the kubectl tool:
# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
controller-manager   Healthy   ok
The output above shows that all the components are normal.
5. Deploy components on the Node node
Once the Master apiserver has TLS authentication enabled, a Node's kubelet must use a valid certificate issued by the CA to communicate with apiserver. When there are many Node nodes, signing certificates by hand becomes tedious. The TLS Bootstrapping mechanism solves this: kubelet uses a low-privilege user to request a certificate from apiserver automatically, and apiserver dynamically signs the kubelet's certificate.
The general workflow of the bootstrap authentication is shown in the figure (image not reproduced here).
5.1 bind the kubelet-bootstrap user to the system cluster role
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
5.2 create the kubeconfig files
Execute the following command in the directory where the kubernetes certificate was generated to generate the kubeconfig file:
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
KUBE_APISERVER="https://192.168.31.63:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# ls
bootstrap.kubeconfig  kube-proxy.kubeconfig
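Optionally, inspect the generated files and confirm the cluster, user, and context entries look right:
kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig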
Copy these two files to the Node node / opt/kubernetes/cfg directory.
5.3 deploy the kubelet component
Copy kubelet and kube-proxy from the binary package downloaded earlier to the /opt/kubernetes/bin directory.
Create the kubelet configuration file:
# cat /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.31.65 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Parameter description:
--hostname-override: the hostname shown in the cluster
--kubeconfig: location of the kubeconfig file (generated automatically)
--bootstrap-kubeconfig: the bootstrap.kubeconfig file generated above
--cert-dir: where issued certificates are stored
--pod-infra-container-image: image of the infrastructure (pause) container that holds the Pod's network namespace
The / opt/kubernetes/cfg/kubelet.config configuration file is as follows:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.31.65
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
Manage kubelet with systemd:
# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
Start:
# systemctl daemon-reload
# systemctl enable kubelet
# systemctl restart kubelet
Approve the Node to join the cluster on the Master:
After starting kubelet, the node has not yet joined the cluster; you must approve it manually.
On the Master node, view the pending certificate signing requests and approve them:
# kubectl get csr
# kubectl certificate approve XXXXID
# kubectl get node
5.4 deploy the kube-proxy component
Create the kube-proxy configuration file:
# cat /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.31.65 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
Manage kube-proxy with systemd:
# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start:
# systemctl daemon-reload
# systemctl enable kube-proxy
# systemctl restart kube-proxy
Node2 (192.168.31.66) is deployed in the same way; just change the IP addresses and --hostname-override accordingly.
6. View the cluster status
# kubectl get node
NAME            STATUS   ROLES   AGE   VERSION
192.168.31.65   Ready            1d    v1.12.0
192.168.31.66   Ready            1d    v1.12.0
# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
7. Run a test example
Create an Nginx web deployment to test whether the cluster works properly:
# kubectl run nginx --image=nginx --replicas=3
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
View Pod,Service:
# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-64f497f8fd-fjgt2   1/1     Running   3          1d
nginx-64f497f8fd-gmstq   1/1     Running   3          1d
nginx-64f497f8fd-q6wk9   1/1     Running   3          1d
# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        28d
nginx        NodePort    10.0.0.175   <none>        88:38696/TCP   28d
Access the Nginx service deployed in the cluster: open a browser and visit http://192.168.31.66:38696
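From the command line, the same check might look like this (38696 is the NodePort shown above; yours will differ):
# curl -I http://192.168.31.66:38696    # expect an HTTP 200 response with a "Server: nginx" header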
Free video version: https://ke.qq.com/course/366778
Summary: when you run into problems, check the logs first, then search for the error. Think through and organize the configuration files; there are many fields whose purpose you may not know yet. Don't worry: as you use them step by step, you will become familiar with them.
If you run into container problems in day-to-day container operations, feel free to contact me on WeChat. Likewise, if you spot any mistakes, please point them out at any time so we can learn from each other and improve together!