

Set up Kubernetes Cluster tutorial


Single node cluster

Multi-node cluster: note that the nodes connect to the load balancer, which forwards their requests to the apiservers on the masters.

Cluster planning:

Role               IP                                              Components
K8S-master1        192.168.0.101                                   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
K8S-master2        192.168.0.102                                   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
K8S-node1          192.168.0.103                                   kubelet, kube-proxy, docker, etcd
K8S-node2          192.168.0.104                                   kubelet, kube-proxy, docker, etcd
K8S-load-balancer  192.168.0.106 (VIP), actual IP 192.168.0.105    Nginx L4
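
As a reference for the load-balancer role above, a minimal sketch of an Nginx L4 (stream) configuration is shown here. The upstream IPs are the two masters from the plan; the file path, the listening port and the assumption that nginx was built with the stream module are not part of the original text.

## Hypothetical sketch: append a stream block to /etc/nginx/nginx.conf on the load balancer
cat >> /etc/nginx/nginx.conf <<'EOF'
stream {
    upstream k8s-apiserver {
        server 192.168.0.101:6443;   # k8s-master1
        server 192.168.0.102:6443;   # k8s-master2
    }
    server {
        listen 6443;                 # nodes point their kubeconfig "server" at this address
        proxy_pass k8s-apiserver;
    }
}
EOF
nginx -t && systemctl restart nginx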

1. System initialization

## Turn off the firewall:
systemctl stop firewalld
systemctl disable firewalld

## Shut down SELinux:
setenforce 0                                                          ## temporary
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    ## permanent

## Turn off swap:
swapoff -a           ## temporary
vim /etc/fstab       ## comment out the swap line (permanent)

## Synchronize the system time:
ntpdate time.windows.com
## ntp may need to be installed first; to sync against an intranet time server instead:
ntpdate 192.168.0.101

## Add hosts entries:
vim /etc/hosts
192.168.0.101 k8s-master1
192.168.0.102 k8s-master2
192.168.0.103 k8s-node1
192.168.0.104 k8s-node2

## Set the hostname:
hostnamectl set-hostname k8s-master1
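
These steps have to be repeated on every machine in the plan. A hedged convenience sketch, assuming passwordless SSH as root is already set up (which the original text does not cover), pushes the hosts file and runs the common steps in one loop:

## Hypothetical helper, assuming root SSH access to all nodes
for host in 192.168.0.101 192.168.0.102 192.168.0.103 192.168.0.104; do
  scp /etc/hosts root@${host}:/etc/hosts
  ssh root@${host} "systemctl stop firewalld; systemctl disable firewalld; \
                    setenforce 0; swapoff -a; ntpdate time.windows.com"
done
## Hostnames still need to be set individually, e.g.:
## ssh root@192.168.0.103 hostnamectl set-hostname k8s-node1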

2. Etcd cluster installation

(1) Certificate issuance (note that the etcd cluster uses two-way, i.e. mutual TLS, certificates)

# cd TLS/etcd

## Install the cfssl tools:
# ./cfssl.sh
# curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
# curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
# curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
cp -rf cfssl cfssl-certinfo cfssljson /usr/local/bin
chmod +x /usr/local/bin/cfssl*

## The hosts field in the certificate request file must contain the IPs of all etcd nodes:
# vi server-csr.json   (adjust the hosts/domain entries to your environment)
{
  "CN": "etcd",
  "hosts": ["192.168.0.101", "192.168.0.103", "192.168.0.104"],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "BeiJing", "ST": "BeiJing"}]
}

CA root certificate request file (ca-csr.json)

{"CN": "etcd CA", "key": {"algo": "rsa", "size": 2048}, "names": [{"C": "CN", "L": "Beijing", "ST": "Beijing"}]}

CA configuration file used to issue the etcd two-way certificates (ca-config.json)

{"signing": {"default": {"expiry": "876000h"}, "profiles": {"www": {"expiry": "876000h", "usages": ["signing", "key encipherment", "server auth", "client auth"]}

Generate ca root certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Issue etcd certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

Files required for the etcd installation (note the destination paths)

etcd.service                  -> /usr/lib/systemd/system/
{ca,server,server-key}.pem    -> /opt/etcd/ssl/
etcd, etcdctl binaries        -> /opt/etcd/bin/
etcd.conf                     -> /opt/etcd/cfg/
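
A short sketch of laying these files out, assuming the binaries, certificates and config files are in the current directory (directory names are taken from the etcd.service unit below):

mkdir -p /opt/etcd/{bin,cfg,ssl}
cp etcd etcdctl /opt/etcd/bin/
cp ca.pem server.pem server-key.pem /opt/etcd/ssl/
cp etcd.conf /opt/etcd/cfg/
cp etcd.service /usr/lib/systemd/system/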

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
  --name=${ETCD_NAME} \
  --data-dir=${ETCD_DATA_DIR} \
  --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --initial-cluster=${ETCD_INITIAL_CLUSTER} \
  --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster-state=new \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

etcd.conf

#[Member]
ETCD_NAME="etcd-1"                                     ## node name, unique within the cluster
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"             ## storage path
ETCD_LISTEN_PEER_URLS="https://192.168.0.101:2380"     ## peer (cluster-internal) listening URL
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.101:2379"   ## client (external) listening URL, e.g. for the apiserver

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.101:2380"   ## advertised peer URL
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.101:2379"         ## advertised client URL
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.101:2380,etcd-2=https://192.168.0.103:2380,etcd-3=https://192.168.0.104:2380"   ## names, addresses and peer ports of all nodes in the cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"              ## cluster authentication token; any string will do
ETCD_INITIAL_CLUSTER_STATE="new"                       ## "new" for a new cluster, "existing" to join an existing one
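
The same files can be copied to the other two etcd nodes; only ETCD_NAME and the four URL variables change per node. A hedged sketch follows (node IPs are from the cluster plan; the sed edits are only illustrative, editing by hand works just as well):

for host in 192.168.0.103 192.168.0.104; do
  scp -r /opt/etcd root@${host}:/opt/
  scp /usr/lib/systemd/system/etcd.service root@${host}:/usr/lib/systemd/system/
done

## Then on 192.168.0.103, change the node name and its own addresses, e.g.:
sed -i \
  -e 's/^ETCD_NAME=.*/ETCD_NAME="etcd-2"/' \
  -e '/^ETCD_LISTEN_PEER_URLS=/s/192.168.0.101/192.168.0.103/' \
  -e '/^ETCD_LISTEN_CLIENT_URLS=/s/192.168.0.101/192.168.0.103/' \
  -e '/^ETCD_INITIAL_ADVERTISE_PEER_URLS=/s/192.168.0.101/192.168.0.103/' \
  -e '/^ETCD_ADVERTISE_CLIENT_URLS=/s/192.168.0.101/192.168.0.103/' \
  /opt/etcd/cfg/etcd.conf
## Repeat on 192.168.0.104 with etcd-3 and its own IP.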

On all etcd nodes

systemctl daemon-reload
systemctl restart etcd
systemctl enable etcd
## At startup etcd waits for the other nodes to join and does not finish starting until all of them are up.
## If anything goes wrong, check the system boot log /var/log/messages.

## Verify that the cluster members are healthy:
/opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.0.101:2379,https://192.168.0.103:2379,https://192.168.0.104:2379" \
  cluster-health

3. Master node installation

Self-signed apiserver TLS certificates (note: this CA is separate from the etcd CA)

CA root certificate

vim ca-config.json
{
  "signing": {
    "default": {"expiry": "876000h"},
    "profiles": {
      "kubernetes": {
        "expiry": "876000h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}

vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "Beijing", "ST": "Beijing", "O": "K8s", "OU": "System"}]
}

apiserver certificate request

server-csr.json
{
  "CN": "kubernetes",              ## field name required by the official k8s certificate conventions
  "hosts": [
    "10.0.0.1",                    ## first IP of the service cluster IP range (internal cluster communication)
    "127.0.0.1",
    "kubernetes",                  ## names officially required to be in the certificate
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "192.168.0.100",               ## all master apiserver addresses, including the load-balancer address used for access
                                   ## (nodes reaching the apiserver through the LB do not need their own entries)
    "192.168.0.101",
    "192.168.0.102",
    "192.168.0.103",
    "192.168.0.104",
    "192.168.0.105"
  ],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System"}]
}

## worker-node kube-proxy certificate; note the CN field name
kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "K8s", "OU": "System"}]
}

Generate and issue the certificates

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
## This generates the apiserver certificate and the kube-proxy certificate

Install and start

tar zxvf k8s-master.tar.gz
cd kubernetes
cp TLS/k8s/ssl/*.pem ssl
cp -rf kubernetes /opt
cp kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler

## Authorize the kubelet-bootstrap user for kubelet TLS Bootstrapping
cat /opt/kubernetes/cfg/token.csv
## contains the token c47ffb939f5ca36231d9e3121a252940 and the group "system:node-bootstrapper"
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
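
The token in token.csv can be any 32-character hex string. A hedged sketch of generating one and writing the file follows; the column layout shown (token, user, uid, group) is the commonly used static token file format and should be verified against your own token.csv:

## Hypothetical token generation; keep the value in sync with bootstrap.kubeconfig on the nodes
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF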

kube-apiserver.conf

KUBE_APISERVER_OPTS="--logtostderr=false \              ## write logs to files instead of stderr
  --v=2 \                                               ## log level
  --log-dir=/opt/kubernetes/logs \                      ## log directory
  --etcd-servers=https://192.168.31.61:2379,https://192.168.31.62:2379,https://192.168.31.63:2379 \   ## etcd addresses
  --bind-address=192.168.31.61 \                        ## bound IP; a public address can also be used
  --secure-port=6443 \                                  ## listening port
  --advertise-address=192.168.31.61 \                   ## advertised address, usually the local IP; tells nodes which IP to connect to
  --allow-privileged=true \                             ## allow created containers to run privileged
  --service-cluster-ip-range=10.0.0.0/24 \              ## service IP range; services are assigned IPs from this segment
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \   ## admission plug-ins (resource quotas, access control, etc.)
  --authorization-mode=RBAC,Node \                      ## authorization mode, normally RBAC roles
  --enable-bootstrap-token-auth=true \                  ## enable bootstrap so node certificates are issued automatically; permissions are defined in token.csv
  --token-auth-file=/opt/kubernetes/cfg/token.csv \
  --service-node-port-range=30000-32767 \               ## NodePort range exposed by services
  --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \   ## certificate for talking to kubelets
  --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
  --tls-cert-file=/opt/kubernetes/ssl/server.pem \      ## apiserver HTTPS certificate
  --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/etcd/ssl/ca.pem \                  ## etcd certificates
  --etcd-certfile=/opt/etcd/ssl/server.pem \
  --etcd-keyfile=/opt/etcd/ssl/server-key.pem \
  --audit-log-maxage=30 \                               ## audit log configuration
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

kube-controller-manager.conf

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \     ## write logs to files
  --v=2 \                                               ## log level
  --log-dir=/opt/kubernetes/logs \                      ## log directory
  --leader-elect=true \                                 ## cluster election; apiserver HA is handled by the LB, while controller-manager does its own election via etcd, so enable this option
  --master=127.0.0.1:8080 \                             ## apiserver address; 8080 is the local insecure port apiserver opens by default
  --address=127.0.0.1 \                                 ## listen locally only, no external access
  --allocate-node-cidrs=true \                          ## allocate CIDRs so CNI network plug-ins can be installed
  --cluster-cidr=10.244.0.0/16 \                        ## pod address pool
  --service-cluster-ip-range=10.0.0.0/24 \              ## service IP range, must match kube-apiserver
  ## Cluster signing certificates: when a node joins the cluster, controller-manager automatically
  ## issues its kubelet certificate using the CA configured below
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  ## Keys used to sign service-account tokens
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --experimental-cluster-signing-duration=87600h0m0s"   ## node certificates are issued for 10 years

kube-scheduler.conf

KUBE_SCHEDULER_OPTS="--logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/logs \
  --leader-elect \              ## leader election among multiple schedulers
  --master=127.0.0.1:8080 \     ## apiserver address
  --address=127.0.0.1"          ## listen on localhost only

4. Deploy node components

kubelet, kube-proxy, docker

4.1 Install docker

tar zxvf k8s-node.tar.gz
tar zxvf docker-18.09.6.tgz
mv docker/* /usr/bin
mkdir /etc/docker
mv daemon.json /etc/docker
mv docker.service /usr/lib/systemd/system
systemctl start docker
systemctl enable docker
docker info    ## view docker information such as the registry configuration

4.2 Install kubelet and kube-proxy (remember to change the node name and the master IP)

The server field in bootstrap.kubeconfig should point to the master IP.
The server field in kube-proxy.kubeconfig should point to the master IP.
The hostname-override node name in kubelet.conf must be unique per node.
The hostnameOverride node name in kube-proxy-config.yml must be unique per node.
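
A hedged per-node convenience sketch for making those four changes; MASTER_IP and NODE_NAME are example values from this tutorial's plan, and editing the files with vim works just as well:

MASTER_IP=192.168.0.101
NODE_NAME=k8s-node1
cd /opt/kubernetes/cfg
## point both kubeconfig files at the master apiserver
sed -i "s#server: https://.*:6443#server: https://${MASTER_IP}:6443#" bootstrap.kubeconfig kube-proxy.kubeconfig
## set the unique node name in kubelet.conf and kube-proxy-config.yml
sed -i "s|--hostname-override=[^ ]*|--hostname-override=${NODE_NAME}|" kubelet.conf
sed -i "s/^hostnameOverride: .*/hostnameOverride: ${NODE_NAME}/" kube-proxy-config.yml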

Meaning of the configuration file suffixes

.conf         # basic configuration file
.kubeconfig   # configuration for connecting to the apiserver
.yml          # main configuration file (dynamically reloaded)

kubernetes/
├── bin
│   ├── kubelet
│   └── kube-proxy
├── cfg
│   ├── bootstrap.kubeconfig     # certificate request configuration
│   ├── kubelet.conf
│   ├── kubelet-config.yml       # dynamically adjustable kubelet configuration
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml    # dynamically adjustable kube-proxy configuration
│   └── kube-proxy.kubeconfig    # connects kube-proxy to the apiserver
├── logs
└── ssl

vim kubelet.conf
## Contents:
KUBELET_OPTS="--logtostderr=false \                    ## write logs to files
  --v=2 \                                              ## log level
  --log-dir=/opt/kubernetes/logs \                     ## log directory
  --hostname-override=k8s-node1 \                      ## node name, must be unique; change it on every node
  --network-plugin=cni \                               ## enable the CNI network plug-in
  ## configuration file paths
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --config=/opt/kubernetes/cfg/kubelet-config.yml \
  --cert-dir=/opt/kubernetes/ssl \                     ## directory where the certificates issued to the node are stored
  --pod-infra-container-image=lizhenliang/pause-amd64:3.0"   ## pause image that starts each pod and holds its namespaces

bootstrap.kubeconfig
## Contents:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.0.101:6443   # master1 server IP (intranet IP)
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: c47ffb939f5ca36231d9e3121a252940   ## must match the token in /opt/kubernetes/cfg/token.csv
## To avoid issuing kubelet certificates by hand, k8s introduces the bootstrap mechanism, which
## automatically issues a kubelet certificate for every node joining the cluster; every connection
## to the apiserver requires a certificate.

Bootstrap workflow: the kubelet authenticates with the bootstrap token, and kubelet.kubeconfig is generated automatically once the certificate request is approved.
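
If you prefer to generate bootstrap.kubeconfig rather than edit it by hand, a hedged sketch using kubectl config follows; the values are taken from the file shown above, and the KUBE_APISERVER and TOKEN variable names are just illustrative:

KUBE_APISERVER="https://192.168.0.101:6443"
TOKEN="c47ffb939f5ca36231d9e3121a252940"    # must match token.csv on the master

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig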

kubelet-config.yml
## Contents:
kind: KubeletConfiguration                # configuration object type
apiVersion: kubelet.config.k8s.io/v1beta1 # API version
address: 0.0.0.0                          # listening address
port: 10250                               # kubelet port
readOnlyPort: 10255                       # read-only port exposed by the kubelet
cgroupDriver: cgroupfs                    # must match the driver shown by docker info
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local              # cluster domain
failSwapOn: false                         # do not fail if swap is enabled
# access authorization
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
# pod optimization parameters
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110

kube-proxy.kubeconfig
## Contents:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem   # the CA file
    server: https://192.168.0.101:6443                  # master IP address (private network)
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem

kube-proxy-config.yml
## Contents:
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0                      # listening address
metricsBindAddress: 0.0.0.0:10249     # metrics address
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig   # kubeconfig to read
hostnameOverride: k8s-node1           # node name registered in k8s, must be unique
clusterCIDR: 10.0.0.0/24
mode: ipvs                            # proxy mode; ipvs performs better, the default is iptables
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true

Install and start

## On the machine where the certificates were issued, distribute them to the node
## (3 files: ca.pem, kube-proxy.pem, kube-proxy-key.pem)
cd TLS/k8s
scp ca.pem kube-proxy*.pem root@192.168.31.65:/opt/kubernetes/ssl/

## On the node machine
tar zxvf k8s-node.tar.gz
mv kubernetes /opt
cp kubelet.service kube-proxy.service /usr/lib/systemd/system

## Modify the IP address (master address) in the following two files:
grep 192 *
bootstrap.kubeconfig:    server: https://192.168.0.101:6443
kube-proxy.kubeconfig:   server: https://192.168.0.101:6443

## Modify the hostname in the following two files (set it to the previously planned hostname):
grep hostname *
kubelet.conf:            --hostname-override=k8s-node1 \
kube-proxy-config.yml:   hostnameOverride: k8s-node1

## Start and enable at boot
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kubelet
systemctl enable kube-proxy

## After startup kube-proxy may report "Failed to delete stale service IP 10.0.0.2 connections";
## fix it with: yum -y install conntrack

## On the master, after the node has started
kubectl get csr    ## shows the kubelet certificate-signing requests from the nodes, if everything is working
## Approve a node certificate request (the second half is the name shown by kubectl get csr)
kubectl certificate approve node-csr-MYUxbmf_nmPQjmH3LkbZRL2uTO-_FCzDQUoUfTy7YjI

## Check the nodes
kubectl get node   ## nodes show NotReady until the CNI network plug-in is installed; this is expected
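
When several nodes register at once, the pending CSRs can also be approved in one pass; a hedged one-liner using only standard kubectl subcommands (already-approved requests are simply reported again):

kubectl get csr -o name | xargs -r kubectl certificate approve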

4.3 Install the CNI network plug-in and the flannel network

CNI is the container network interface standard of k8s: any network plug-in that wants to integrate with k8s must implement the CNI standard. CNI is deployed mainly so that a third-party network can be plugged in.

What needs to be done:

CNI is installed on every node

flannel is installed from the master node

4.3.1 Install CNI

## Download the CNI plug-in package
wget https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz
## Extract it
mkdir -p /opt/cni/bin      # working directory
mkdir -p /etc/cni/net.d    # configuration directory
tar -zxvf cni-plugins-linux-amd64-v0.8.5.tgz -C /opt/cni/bin
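
For reference, once flannel is running it typically drops a conflist like the following into /etc/cni/net.d/. This is shown only as an illustration of the CNI configuration format; the exact content comes from the flannel manifest, not from this tutorial:

cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}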

4.3.2 Install flannel from the master node

kubectl apply -f kube-flannel.yaml
## flannel only needs to be applied from the master node.
## The official kube-flannel.yml can also be downloaded to the server and applied directly with
## kubectl apply -f kube-flannel.yml, but the images it references are hosted abroad and the pull
## may fail, so that approach is not recommended.
## The network in net-conf.json inside the yaml must be the same as the cluster-cidr value in
## cat /opt/kubernetes/cfg/kube-controller-manager.conf
## The same rule applies to other network components if flannel is not used.
## After installation, each node starts a flannel pod.
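
For reference, the relevant piece of kube-flannel.yml is the net-conf.json entry of its ConfigMap; a hedged excerpt of what it usually looks like, with the Network value that must equal --cluster-cidr (10.244.0.0/16 in this tutorial):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }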

5. Deploy the web UI (dashboard)

Official deployment:

https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

## Install on the master; remember to change the exposed port
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml

Modify the deployment:
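
The change mentioned above is usually just switching the kubernetes-dashboard Service to NodePort in the downloaded manifest (saved here as dashboard.yaml). A hedged excerpt of the modified Service, with 30001 as the exposed port matching the output below:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard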

kubectl apply -f dashboard.yaml
kubectl get pods -n kubernetes-dashboard
## Output:
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76585494d8-sbzjv   1/1     Running   0          2m6s
kubernetes-dashboard-5996555fd8-fc7zf        1/1     Running   2          2m6s

## View the service port
kubectl get pods,svc -n kubernetes-dashboard
## Output:
NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.8     <none>        8000/TCP        16m
service/kubernetes-dashboard        NodePort    10.0.0.88    <none>        443:30001/TCP   16m

## Access https://<node IP>:30001 using any node IP plus the NodePort

We log in with a token: create a service account and bind it to the built-in cluster-admin cluster role.
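
A hedged sketch of what dashboard-adminuser.yaml might contain (a ServiceAccount plus a ClusterRoleBinding to cluster-admin; the original article does not reproduce the file, so treat this as an illustration):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard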

kubectl apply -f dashboard-adminuser.yaml
# Get the token
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Due to certificate problems, some browsers such as Chrome cannot log in, so a self-signed certificate needs to be issued to the dashboard to support more browsers.

mkdir key && cd key
# Generate a certificate
openssl genrsa -out dashboard.key 2048
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=kubernetes-dashboard-certs'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# Delete the original certificate secret
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
# Create a new certificate secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
# View the dashboard pod (for v2.0 the namespace is kubernetes-dashboard)
kubectl get pod -n kube-system
# Restart the dashboard pod (for v2.0 the namespace is kubernetes-dashboard)
kubectl delete pod <dashboard-pod-name> -n kube-system

Deploy DNS (DNS provides name resolution for the services shown by kubectl get svc)

Function: a service can be reached by its name; the service then forwards the traffic to the corresponding pods.

kubectl apply -f coredns.yaml
## Note: the clusterIP in coredns.yaml must match the clusterDNS value in the node's
## /opt/kubernetes/cfg/kubelet-config.yml, otherwise pods will fail to resolve names.

## Check dns
kubectl get pods -n kube-system
## Output:
coredns-6d8cfdd59d-mw47j   1/1   Running   0   5m45s

## Test: install the busybox tool
kubectl apply -f bs.yaml
## View the pod we launched
kubectl get pods
## Enter the container and test
kubectl exec -it busybox sh
ping kubernetes
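
Inside the busybox container, full service names can also be resolved; a hedged example (my-svc and the default namespace are placeholders, not services created in this tutorial):

# inside the busybox container
nslookup kubernetes                           # resolves the kubernetes service in the default namespace
nslookup my-svc.default.svc.cluster.local     # full form: <service>.<namespace>.svc.<cluster domain>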
