Kubernetes K8s deployment

2025-04-02 Update From: SLTechnology News&Howtos


Cluster planning

centos-test-ip-207-master  192.168.11.207
centos-test-ip-208         192.168.11.208
centos-test-ip-209         192.168.11.209

Kubernetes 1.10.7

Flannel flannel-v0.10.0-linux-amd64.tar

ETCD etcd-v3.3.8-linux-amd64.tar

CNI cni-plugins-amd64-v0.7.1

Docker 18.03.1-ce

Download the installation package

Etcd: https://github.com/coreos/etcd/releases/

Flannel: https://github.com/coreos/flannel/releases/

Cni: https://github.com/containernetworking/plugins/releases

Kubernetes: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1107

Note: an installation-package bundle for Kubernetes 1.10 is available at https://pan.baidu.com/s/1_7EfOMlRkQSybEH_p6NtTw (extraction code: 345b). Before starting, set up host-name resolution, turn off the firewall, disable the swap partition, and synchronize the server time (on all three machines).

Host-name resolution

vim /etc/hosts
192.168.11.207 centos-test-ip-207-master
192.168.11.208 centos-test-ip-208
192.168.11.209 centos-test-ip-209
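Since the same three entries go into /etc/hosts on every node, they can be kept in one list and written out in a loop. A minimal sketch that writes to a scratch file hosts_snippet.txt (a hypothetical name) instead of touching /etc/hosts directly:

```shell
# Build the planned cluster's hosts entries from one list; on a real node you
# would append the result to /etc/hosts instead of a scratch file.
HOSTS_SNIPPET=hosts_snippet.txt
: > "$HOSTS_SNIPPET"
for entry in \
    "192.168.11.207 centos-test-ip-207-master" \
    "192.168.11.208 centos-test-ip-208" \
    "192.168.11.209 centos-test-ip-209"
do
    echo "$entry" >> "$HOSTS_SNIPPET"
done
cat "$HOSTS_SNIPPET"
```

Keeping the list in one place makes it harder for the three nodes to drift out of sync.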

Turn off the firewall and SELinux

systemctl stop firewalld
setenforce 0

Close swap

swapoff -a
vim /etc/fstab    # comment out the swap line

Synchronize the server time zone # if the clocks are already in sync this step can be skipped

tzselect

Distribute the SSH public key # from the master to the slave nodes

ssh-keygen
ssh-copy-id <node>    # repeat for each slave node

Install docker (synchronize on all three machines)

Uninstall the original version

yum remove docker docker-common docker-selinux docker-engine

Install the dependencies docker requires

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the yum repo # the official repo is slow to pull from, so use the domestic Aliyun mirror

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

Select docker version installation

yum list docker-ce --showduplicates | sort -r

Choose to install 18.03.1.ce

yum -y install docker-ce-18.03.1.ce

Start docker

systemctl start docker
systemctl enable docker

Install the ETCD cluster

Synchronous operation

tar xvf etcd-v3.3.8-linux-amd64.tar.gz
cd etcd-v3.3.8-linux-amd64
cp etcd etcdctl /usr/bin
mkdir -p /var/lib/etcd /etc/etcd    # create the required directories

Etcd configuration files

The files to edit are /usr/lib/systemd/system/etcd.service and /etc/etcd/etcd.conf.

An etcd cluster has no fixed master/slave roles the way the kubernetes cluster does: etcd elects its leader at startup and re-elects during operation. The three members are therefore simply named etcd-i, etcd-ii, and etcd-iii, without implying any hierarchy.

207 (master):

cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target

cat /etc/etcd/etcd.conf
#[member]
# node name
ETCD_NAME=etcd-i
# data storage location
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# address to listen on for traffic from the other etcd instances
ETCD_LISTEN_PEER_URLS="http://192.168.11.207:2380"
# address to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://192.168.11.207:2379,http://127.0.0.1:2379"
#[cluster]
# peer address advertised to the other etcd instances
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.11.207:2380"
# initial addresses of the cluster members
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.11.207:2380,etcd-ii=http://192.168.11.208:2380,etcd-iii=http://192.168.11.209:2380"
# initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
# address advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.11.207:2379,http://127.0.0.1:2379"

208 and 209 use exactly the same etcd.service, and the same etcd.conf except for the member identity: on 208, ETCD_NAME=etcd-ii and every 192.168.11.207 in the listen/advertise URLs becomes 192.168.11.208; on 209, ETCD_NAME=etcd-iii and the address is 192.168.11.209. ETCD_INITIAL_CLUSTER, the cluster state, and the token are identical on all three nodes.

Start the ETCD cluster
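Because the per-node files differ only in the member name and IP, they can be generated from a single template. A minimal sketch (the etcd_conf_demo output directory and the gen_conf helper are hypothetical; on a real node the file belongs at /etc/etcd/etcd.conf):

```shell
# Generate the three etcd.conf variants from one template, using the node
# names/IPs from the cluster plan above.
CLUSTER="etcd-i=http://192.168.11.207:2380,etcd-ii=http://192.168.11.208:2380,etcd-iii=http://192.168.11.209:2380"
mkdir -p etcd_conf_demo
gen_conf() {
    name=$1; ip=$2
    cat > "etcd_conf_demo/etcd.conf.$name" <<EOF
#[member]
ETCD_NAME=$name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="http://$ip:2379,http://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$ip:2380"
ETCD_INITIAL_CLUSTER="$CLUSTER"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
ETCD_ADVERTISE_CLIENT_URLS="http://$ip:2379,http://127.0.0.1:2379"
EOF
}
gen_conf etcd-i   192.168.11.207
gen_conf etcd-ii  192.168.11.208
gen_conf etcd-iii 192.168.11.209
```

Copy each generated file to the matching node; templating avoids the classic mistake of changing ETCD_NAME but forgetting one of the four URL lines.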

Run on the master first, then on each slave node in turn

systemctl daemon-reload    # reload the unit files
systemctl start etcd.service
systemctl enable etcd.service

View cluster information

[root@centos-test-ip-207-master ~]# etcdctl member list
e8bd2d4d9a7cba8: name=etcd-ii peerURLs=http://192.168.11.208:2380 clientURLs=http://127.0.0.1:2379,http://192.168.11.208:2379 isLeader=true
50a675761b915629: name=etcd-i peerURLs=http://192.168.11.207:2380 clientURLs=http://127.0.0.1:2379,http://192.168.11.207:2379 isLeader=false
9a891df60a11686b: name=etcd-iii peerURLs=http://192.168.11.209:2380 clientURLs=http://127.0.0.1:2379,http://192.168.11.209:2379 isLeader=false
[root@centos-test-ip-207-master ~]# etcdctl cluster-health
member e8bd2d4d9a7cba8 is healthy: got healthy result from http://127.0.0.1:2379
member 50a675761b915629 is healthy: got healthy result from http://127.0.0.1:2379
member 9a891df60a11686b is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

Install flannel

Synchronous operation

mkdir -p /opt/flannel/bin/
tar xvf flannel-v0.10.0-linux-amd64.tar.gz -C /opt/flannel/bin/
cat /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/flannel/bin/flanneld -etcd-endpoints=http://192.168.11.207:2379,http://192.168.11.208:2379,http://192.168.11.209:2379 -etcd-prefix=coreos.com/network
ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -d /etc/docker/flannel_net.env -c
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

Set the flannel network configuration (the subnet ranges below can be changed) # master only

[root@centos-test-ip-207-master ~]# etcdctl mk /coreos.com/network/config '{"Network": "172.18.0.0/16", "SubnetMin": "172.18.1.0", "SubnetMax": "172.18.254.0", "Backend": {"Type": "vxlan"}}'

To change the IP range later, first delete the key (etcdctl rm /coreos.com/network/config), then run the configuration command again.

Download the flannel image

Synchronous operation

The flannel service relies on the flannel image, so download it first: the commands below pull it from Aliyun and tag it with the name the service expects.

docker pull registry.cn-beijing.aliyuncs.com/k8s_images/flannel:v0.10.0-amd64
docker tag registry.cn-beijing.aliyuncs.com/k8s_images/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0
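As a side note to the etcdctl mk step above: a malformed JSON value silently breaks flannel startup, so the string can be sanity-checked locally before it is written to etcd. A sketch (python3, if present, is used only as a convenient JSON validator):

```shell
# The flannel network config from the etcdctl mk step, kept in a variable so
# the exact validated string is what gets written to etcd.
FLANNEL_CONFIG='{"Network": "172.18.0.0/16", "SubnetMin": "172.18.1.0", "SubnetMax": "172.18.254.0", "Backend": {"Type": "vxlan"}}'
# Validate the JSON locally if python3 is available.
if command -v python3 >/dev/null 2>&1; then
    echo "$FLANNEL_CONFIG" | python3 -m json.tool >/dev/null && echo "flannel config JSON OK"
fi
# Then, on the master:
# etcdctl mk /coreos.com/network/config "$FLANNEL_CONFIG"
```

A missing quote or brace is then caught before flannel ever tries to read the key.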

Note:

Configure docker

There is one item in the flannel configuration

ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -d /etc/docker/flannel_net.env -c

After flannel starts, it executes mk-docker-opts.sh, which generates the /etc/docker/flannel_net.env file.

Flannel modifies the docker network: flannel_net.env holds the docker startup parameters generated by flannel, so the docker unit file must be changed to use them.

cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/etc/docker/flannel_net.env               # added
ExecStart=/usr/bin/dockerd $DOCKER_OPTS                   # $DOCKER_OPTS comes from flannel_net.env
ExecReload=/bin/kill -s HUP $MAINPID
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT        # added
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Note:

After: docker is started only after flannel is up (flannel.service also declares Before=docker.service)

EnvironmentFile: configure the startup parameters of docker, generated by flannel

ExecStart: add docker startup parameters

ExecStartPost: executes after docker starts, modifies the iptables routing rules of the host

Start flannel

Synchronous operation

systemctl daemon-reload
systemctl start flannel.service
systemctl enable flannel.service
systemctl restart docker.service

Install CNI

Synchronous operation

mkdir -p /opt/cni/bin /etc/cni/net.d
tar xvf cni-plugins-amd64-v0.7.1.tgz -C /opt/cni/bin
cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cni0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "forceAddress": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

Install the K8S cluster CA certificates

Synchronous operation

mkdir -p /etc/kubernetes/ca

On 207:

cd /etc/kubernetes/ca/

Generate the CA certificate and private key

[root@centos-test-ip-207-master ca]# openssl genrsa -out ca.key 2048
[root@centos-test-ip-207-master ca]# openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s" -days 5000 -out ca.crt

Generate kube-apiserver certificate and private key

[root@centos-test-ip-207-master ca]# cat master_ssl.conf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s
IP.1 = 172.18.0.1
IP.2 = 192.168.11.207
[root@centos-test-ip-207-master ca]# openssl genrsa -out apiserver-key.pem 2048
[root@centos-test-ip-207-master ca]# openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=k8s" -config master_ssl.conf
[root@centos-test-ip-207-master ca]# openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile master_ssl.conf
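The SAN entries in master_ssl.conf are the part that most often goes wrong, so it can help to generate the file from variables instead of editing it by hand. A sketch assuming the addresses from the cluster plan above (MASTER_IP, SERVICE_IP, and the output name master_ssl_demo.conf are illustrative; the guide itself uses /etc/kubernetes/ca/master_ssl.conf):

```shell
# Generate the apiserver SAN config from variables, so the service IP and
# master IP stay in one place.
MASTER_IP=192.168.11.207
SERVICE_IP=172.18.0.1    # first address of --service-cluster-ip-range
cat > master_ssl_demo.conf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s
IP.1 = $SERVICE_IP
IP.2 = $MASTER_IP
EOF
```

If either IP in the certificate does not match what clients later connect to, TLS verification fails, so generating from variables keeps the conf in step with the rest of the setup.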

Generate kube-controller-manager/kube-scheduler certificate and private key

[root@centos-test-ip-207-master ca]# openssl genrsa -out cs_client.key 2048
[root@centos-test-ip-207-master ca]# openssl req -new -key cs_client.key -subj "/CN=k8s" -out cs_client.csr
[root@centos-test-ip-207-master ca]# openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cs_client.crt -days 5000

Copy the CA certificate and key to 208 and 209

[root@centos-test-ip-207-master ca]# scp ca.crt ca.key centos-test-ip-208:/etc/kubernetes/ca/
[root@centos-test-ip-207-master ca]# scp ca.crt ca.key centos-test-ip-209:/etc/kubernetes/ca/

208 certificate configuration

/CN must be the node's own IP

cd /etc/kubernetes/ca/

[root@centos-test-ip-208 ca]# openssl genrsa -out kubelet_client.key 2048
[root@centos-test-ip-208 ca]# openssl req -new -key kubelet_client.key -subj "/CN=192.168.11.208" -out kubelet_client.csr
[root@centos-test-ip-208 ca]# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000

209 certificate configuration

/CN must be the node's own IP

cd /etc/kubernetes/ca/

[root@centos-test-ip-209 ca]# openssl genrsa -out kubelet_client.key 2048
[root@centos-test-ip-209 ca]# openssl req -new -key kubelet_client.key -subj "/CN=192.168.11.209" -out kubelet_client.csr
[root@centos-test-ip-209 ca]# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000

Install K8S

On 207:

[root@centos-test-ip-207-master ~]# tar xvf kubernetes-server-linux-amd64.tar.gz -C /opt
[root@centos-test-ip-207-master ~]# cd /opt/kubernetes/server/bin
[root@centos-test-ip-207-master bin]# cp -a `ls | egrep -v "*.tar|*_tag"` /usr/bin
[root@centos-test-ip-207-master bin]# mkdir -p /var/log/kubernetes

Configure kube-apiserver

[root@centos-test-ip-207-master bin]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver.conf
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure apiserver.conf

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/apiserver.conf
KUBE_API_ARGS="\
--storage-backend=etcd3 \
--etcd-servers=http://192.168.11.207:2379,http://192.168.11.208:2379,http://192.168.11.209:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--service-cluster-ip-range=172.18.0.0/16 \
--service-node-port-range=1-65535 \
--kubelet-port=10250 \
--advertise-address=192.168.11.207 \
--allow-privileged=false \
--anonymous-auth=false \
--client-ca-file=/etc/kubernetes/ca/ca.crt \
--tls-private-key-file=/etc/kubernetes/ca/apiserver-key.pem \
--tls-cert-file=/etc/kubernetes/ca/apiserver.pem \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,NamespaceExists,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"

Note:

# Explanation of the flags:

--etcd-servers # the etcd cluster to connect to

--secure-port # open secure port 6443

--client-ca-file, --tls-private-key-file, --tls-cert-file # the CA certificate and server key/cert

--enable-admission-plugins # the admission-control plugins to enable

--anonymous-auth=false # reject anonymous requests (true would accept them); set to false here to facilitate dashboard access

Configure kube-controller-manager

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/kube-controller-config.yaml
apiVersion: v1
kind: Config
users:
- name: controller
  user:
    client-certificate: /etc/kubernetes/ca/cs_client.crt
    client-key: /etc/kubernetes/ca/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
contexts:
- context:
    cluster: local
    user: controller
  name: default-context
current-context: default-context

Configure kube-controller-manager.service

[root@centos-test-ip-207-master bin]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager.conf
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure controller-manager.conf

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/controller-manager.conf
KUBE_CONTROLLER_MANAGER_ARGS="\
--master=https://192.168.11.207:6443 \
--service-account-private-key-file=/etc/kubernetes/ca/apiserver-key.pem \
--root-ca-file=/etc/kubernetes/ca/ca.crt \
--cluster-signing-cert-file=/etc/kubernetes/ca/ca.crt \
--cluster-signing-key-file=/etc/kubernetes/ca/ca.key \
--kubeconfig=/etc/kubernetes/kube-controller-config.yaml \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"

Note:

--master # the master (apiserver) address to connect to

--service-account-private-key-file, --root-ca-file, --cluster-signing-cert-file, --cluster-signing-key-file # the CA certificate configuration

--kubeconfig # the kubeconfig file

Configure kube-scheduler

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/kube-scheduler-config.yaml
apiVersion: v1
kind: Config
users:
- name: scheduler
  user:
    client-certificate: /etc/kubernetes/ca/cs_client.crt
    client-key: /etc/kubernetes/ca/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
contexts:
- context:
    cluster: local
    user: scheduler
  name: default-context
current-context: default-context
[root@centos-test-ip-207-master bin]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=/etc/kubernetes/scheduler.conf
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/scheduler.conf
KUBE_SCHEDULER_ARGS="\
--master=https://192.168.11.207:6443 \
--kubeconfig=/etc/kubernetes/kube-scheduler-config.yaml \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"

Start master

systemctl daemon-reload
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service

Log view

journalctl -xeu kube-apiserver --no-pager
journalctl -xeu kube-controller-manager --no-pager
journalctl -xeu kube-scheduler --no-pager
# add -f to follow in real time

Deploy K8S on the nodes

Synchronous operation on the slave (node) machines

tar -zxvf kubernetes-server-linux-amd64.tar.gz -C /opt
cd /opt/kubernetes/server/bin
cp -a kubectl kubelet kube-proxy /usr/bin/
mkdir -p /var/log/kubernetes
cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Set these kernel parameters so the iptables filtering rules apply to bridged traffic; if they are already set, this can be skipped.

sysctl -p    # apply the configuration

208: configure kubelet

[root@centos-test-ip-208 ~]# cat /etc/kubernetes/kubelet-config.yaml
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: kubelet
  name: default-context
current-context: default-context
preferences: {}
[root@centos-test-ip-208 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-208 ~]# cat /etc/kubernetes/kubelet.conf
KUBELET_ARGS="\
--kubeconfig=/etc/kubernetes/kubelet-config.yaml \
--pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0 \
--hostname-override=192.168.11.208 \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"

Note:

--hostname-override # it is recommended to use the node's IP as the node name

--pod-infra-container-image # the pod base (pause) image; the default is Google's gcr.io/google_containers/pause-amd64:3.0, which is hard to pull from China, so use a domestic mirror (or a proxy), or download it locally and re-tag it:

docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0

--kubeconfig # the kubeconfig file

Configure kube-proxy

[root@centos-test-ip-208 ~]# cat /etc/kubernetes/proxy-config.yaml
apiVersion: v1
kind: Config
users:
- name: proxy
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: proxy
  name: default-context
current-context: default-context
preferences: {}
[root@centos-test-ip-208 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy.conf
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-208 ~]# cat /etc/kubernetes/proxy.conf
KUBE_PROXY_ARGS="\
--master=https://192.168.11.207:6443 \
--hostname-override=192.168.11.208 \
--kubeconfig=/etc/kubernetes/proxy-config.yaml \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"

209: configure kubelet

[root@centos-test-ip-209 ~]# cat /etc/kubernetes/kubelet-config.yaml
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: kubelet
  name: default-context
current-context: default-context
preferences: {}
[root@centos-test-ip-209 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-209 ~]# cat /etc/kubernetes/kubelet.conf
KUBELET_ARGS="\
--kubeconfig=/etc/kubernetes/kubelet-config.yaml \
--pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0 \
--hostname-override=192.168.11.209 \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"
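kubelet.conf on 208 and 209 differs only in --hostname-override, so the two files can be produced from one loop. A sketch (the kubelet_conf_demo output directory is hypothetical; on a real node the file is /etc/kubernetes/kubelet.conf):

```shell
# Generate a kubelet.conf per node IP; only --hostname-override changes.
mkdir -p kubelet_conf_demo
for NODE_IP in 192.168.11.208 192.168.11.209; do
    cat > "kubelet_conf_demo/kubelet.conf.$NODE_IP" <<EOF
KUBELET_ARGS="\\
--kubeconfig=/etc/kubernetes/kubelet-config.yaml \\
--pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0 \\
--hostname-override=$NODE_IP \\
--network-plugin=cni \\
--cni-conf-dir=/etc/cni/net.d \\
--cni-bin-dir=/opt/cni/bin \\
--logtostderr=true \\
--log-dir=/var/log/kubernetes \\
--v=2"
EOF
done
```

Templating rules out the copy-paste mistake of two nodes registering under the same name.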

Note: the same notes as for 208 apply (--hostname-override, --pod-infra-container-image, --kubeconfig).

Configure kube-proxy

[root@centos-test-ip-209 ~]# cat /etc/kubernetes/proxy-config.yaml
apiVersion: v1
kind: Config
users:
- name: proxy
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: proxy
  name: default-context
current-context: default-context
preferences: {}
[root@centos-test-ip-209 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy.conf
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-209 ~]# cat /etc/kubernetes/proxy.conf
KUBE_PROXY_ARGS="\
--master=https://192.168.11.207:6443 \
--hostname-override=192.168.11.209 \
--kubeconfig=/etc/kubernetes/proxy-config.yaml \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"

Note:

--hostname-override # the node name; it must match the kubelet setting, so if kubelet sets it, kube-proxy must set it too

--master # the master (apiserver) address to connect to

--kubeconfig # the kubeconfig file

Start the node services and view the logs # note: make sure the swap partition is turned off

Synchronous operation on the slave (node) machines

systemctl daemon-reload
systemctl start kubelet.service
systemctl enable kubelet.service
systemctl start kube-proxy.service
systemctl enable kube-proxy.service
journalctl -xeu kubelet --no-pager
journalctl -xeu kube-proxy --no-pager
# add -f to follow in real time

View the nodes from the master

[root@centos-test-ip-207-master ~]# kubectl get nodes
NAME             STATUS    ROLES     AGE       VERSION
192.168.11.208   Ready     <none>    1d        v1.10.7
192.168.11.209   Ready     <none>    1d        v1.10.7

Cluster test

Configure the nginx test file (master)

[root@centos-test-ip-207-master bin]# cat nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
  labels:
    name: nginx-rc
spec:
  replicas: 2
  selector:
    name: nginx-pod
  template:
    metadata:
      labels:
        name: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
[root@centos-test-ip-207-master bin]# cat nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30081
  selector:
    name: nginx-pod

Start

Master (207) operation

kubectl create -f nginx-rc.yaml
kubectl create -f nginx-svc.yaml

# View the creation of pod

[root@centos-test-ip-207-master bin]# kubectl get pod -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-rc-d9kkc   1/1       Running   0          1d        172.18.30.2    192.168.11.209
nginx-rc-l9ctn   1/1       Running   0          1d        172.18.101.2   192.168.11.208
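The same check can be scripted so a non-Running pod is caught without eyeballing the table. A sketch that parses output in the shape shown above (simulated here with a heredoc so it runs without a live cluster; on the master you would pipe kubectl get pod straight into the awk filter):

```shell
# Simulated "kubectl get pod -o wide" output; on a real master, replace the
# heredoc with the live command.
pods_output=$(cat <<'EOF'
NAME             READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-rc-d9kkc   1/1       Running   0          1d        172.18.30.2    192.168.11.209
nginx-rc-l9ctn   1/1       Running   0          1d        172.18.101.2   192.168.11.208
EOF
)
# Column 3 is STATUS; print the name of any pod that is not Running.
not_running=$(echo "$pods_output" | awk 'NR>1 && $3 != "Running" {print $1}')
[ -z "$not_running" ] && echo "all pods Running"
```

An empty result from the awk filter means every pod is healthy; anything printed is a pod that needs a look.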

Note: open http://<node-IP>:30081 and the nginx page should appear; the configuration is complete.

Delete the service and the nginx deployment # if there is a problem with a configuration file, the commands below delete the objects so the steps can be redone

kubectl delete -f nginx-svc.yaml
kubectl delete -f nginx-rc.yaml

Dashboard UI download and deployment (master)

(master) 207 Operation

Download the dashboard yaml:

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Modify kubernetes-dashboard.yaml where the image is set; the default address is blocked:

# image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3

# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort        # added
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30000     # added
  selector:
    k8s-app: kubernetes-dashboard

Create the permission-control yaml, dashboard-admin.yaml:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Create and check:

kubectl create -f kubernetes-dashboard.yaml
kubectl create -f dashboard-admin.yaml
[root@centos-test-ip-207-master ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE
default       nginx-rc-d9kkc                          1/1       Running   0          1d        172.18.30.2    192.168.11.209
default       nginx-rc-l9ctn                          1/1       Running   0          1d        172.18.101.2   192.168.11.208
kube-system   kubernetes-dashboard-66c9d98865-qgbgq   1/1       Running   0          20h       172.18.30.9    192.168.11.209

Access # use Firefox; Chrome (Google) will not open the page without a trusted certificate

Note: HTTPS access

Browse directly to https://<node-IP>:30000 (the nodePort configured above).

The page prompts for a login; use the token:

[root@centos-test-ip-207-master ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token
Name:   default-token-t8hbl
Type:   kubernetes.io/service-account-token
token:  eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.. (many characters)

Copy the token characters into the login page.
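Extracting only the token value (rather than the whole describe block) can also be scripted with awk. A sketch using a shortened sample of the secret output (the token string here is illustrative, not a real credential):

```shell
# Simulated "kubectl describe secret" output; on the master, replace the
# heredoc with the live kubectl pipeline.
secret_output=$(cat <<'EOF'
Name:         default-token-t8hbl
Type:         kubernetes.io/service-account-token
token:        eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.sample
EOF
)
# Print only the value of the "token:" line.
token=$(echo "$secret_output" | awk '/^token:/ {print $2}')
echo "$token"
# On the master:
# kubectl -n kube-system describe secret \
#   $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
#   | awk '/^token:/ {print $2}'
```

Printing just the token makes it easy to copy, or to feed into another tool, without the surrounding describe text.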
