
The kubernetes of "Advanced Chapter" docker builds a cluster to add authentication and authorization (part two) (39)


This is an original article; you are welcome to repost it. When reposting, please indicate that it is reproduced from the IT Story Association. Thank you!

Original link: "Advanced" docker kubernetes: building a cluster and adding authentication and authorization (part two) (39)

This post continues from the previous one on building the authenticated version of the k8s cluster.

Kubectl

Prepare the certificate

# the kubectl certificate goes here; since kubectl acts as the cluster administrator, we name it admin
mkdir -p /etc/kubernetes/ca/admin
# prepare the admin certificate configuration - kubectl only needs a client certificate,
# so the hosts field in the certificate request can be left empty
cp ~/kubernetes-starter/target/ca/admin/admin-csr.json /etc/kubernetes/ca/admin/
cd /etc/kubernetes/ca/admin/
# issue the admin certificate using the root certificate (ca.pem)
cfssl gencert \
        -ca=/etc/kubernetes/ca/ca.pem \
        -ca-key=/etc/kubernetes/ca/ca-key.pem \
        -config=/etc/kubernetes/ca/ca-config.json \
        -profile=kubernetes admin-csr.json | cfssljson -bare admin
# what we ultimately want are admin-key.pem and admin.pem
ls
admin.csr  admin-csr.json  admin-key.pem  admin.pem

8.2 Configure kubectl

# point kubectl at the apiserver address and the ca certificate (adjust the ip as needed)
kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/ca/ca.pem \
        --embed-certs=true \
        --server=https://192.168.68.101:6443
# set the client authentication parameters, specifying the admin certificate and key
kubectl config set-credentials admin \
        --client-certificate=/etc/kubernetes/ca/admin/admin.pem \
        --embed-certs=true \
        --client-key=/etc/kubernetes/ca/admin/admin-key.pem
# associate the user with the cluster
kubectl config set-context kubernetes \
        --cluster=kubernetes --user=admin
# set the current context
kubectl config use-context kubernetes
# the result is a configuration file; you can take a look at its contents
cat ~/.kube/config
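
Before moving on, two quick hedged checks: confirm that the admin certificate was really signed by our ca, and look at the kubeconfig kubectl just wrote (embedded certificate data is not printed).

# the subject should mention admin and the issuer should be our ca
openssl x509 -in /etc/kubernetes/ca/admin/admin.pem -noout -subject -issuer -dates
# view the generated kubeconfig
kubectl config view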

Verify the master node

# you can use the newly configured kubectl to check the component status
kubectl get componentstatus
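
If the component status looks healthy, a couple of extra hedged sanity checks can't hurt; both go through the client certificate configured above.

# confirm the secured api-server answers through the new kubeconfig
kubectl cluster-info
kubectl version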

Calico-node (the certificates are generated on the master node and copied to 102 and 103 via scp)

Prepare the certificates

Later, you can see that the calico certificate is used in four places:

The calico/node docker container accesses etcd with the certificate at runtime
The cni configuration file: the cni plugin accesses etcd with the certificate
calicoctl accesses etcd with the certificate when operating on the cluster network
calico/kube-controllers accesses etcd with the certificate when synchronizing cluster network policy

# the calico certificates go here
mkdir -p /etc/kubernetes/ca/calico
# prepare the calico certificate configuration - calico only needs a client certificate,
# so the hosts field in the certificate request can be left empty
cp ~/kubernetes-starter/target/ca/calico/calico-csr.json /etc/kubernetes/ca/calico/
cd /etc/kubernetes/ca/calico/
# issue the calico certificate using the root certificate (ca.pem)
cfssl gencert \
        -ca=/etc/kubernetes/ca/ca.pem \
        -ca-key=/etc/kubernetes/ca/ca-key.pem \
        -config=/etc/kubernetes/ca/ca-config.json \
        -profile=kubernetes calico-csr.json | cfssljson -bare calico
# what we ultimately want are calico-key.pem and calico.pem
ls
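
Of the four uses listed above, the calicoctl case is typically configured through environment variables pointing at the tls-protected etcd. A minimal hedged sketch: the variable names follow calico's etcd datastore convention and may differ between calico versions, and the endpoint shown is an assumption based on the master ip used in this series.

# point calicoctl at etcd over tls (endpoint and paths are assumptions for this setup)
export ETCD_ENDPOINTS=https://192.168.68.101:2379
export ETCD_CA_CERT_FILE=/etc/kubernetes/ca/ca.pem
export ETCD_CERT_FILE=/etc/kubernetes/ca/calico/calico.pem
export ETCD_KEY_FILE=/etc/kubernetes/ca/calico/calico-key.pem
calicoctl node status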

Copy the calico certificates from the master node

Since the calico service needs to run on every node, you need to copy these files to each server.

Copy from the master node to the two machines 102 and 103

# the root password is vagrant on every node
scp -r /etc/kubernetes/ca/ root@192.168.68.102:/etc/kubernetes/ca/
scp -r /etc/kubernetes/ca/ root@192.168.68.103:/etc/kubernetes/ca/

Make sure /etc/kubernetes/ca/ on the master node contains the same files as the directory on 102 and 103.
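
A hedged way to double-check this from the master node (same ips and root password as above); no output means the listings match.

# compare the directory listing on the master with each worker
diff <(ls -R /etc/kubernetes/ca/) <(ssh root@192.168.68.102 "ls -R /etc/kubernetes/ca/")
diff <(ls -R /etc/kubernetes/ca/) <(ssh root@192.168.68.103 "ls -R /etc/kubernetes/ca/")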

Update calico service

cp ~/kubernetes-starter/target/all-node/kube-calico.service /lib/systemd/system/
systemctl daemon-reload
service kube-calico start
# verify calico (you should see the other nodes in the list)
calicoctl node status

Kubelet

Here the kubelet authenticates with a bootstrap token, so its authentication method differs from the previous components: its certificate is not generated by hand. Instead, the worker node requests it from the api-server through TLS bootstrapping, and the controller-manager on the master node signs it automatically.
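
To watch this flow in action once the kubelet is started further below, you can list and inspect the certificate signing requests it creates; a small hedged sketch, where the csr name is a placeholder taken from the list.

# list certificate signing requests created by bootstrapping kubelets
kubectl get csr
# the requestor of each request should be the kubelet-bootstrap user
kubectl describe csr <csr-name>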

Create a role binding (master node)

With the bootstrap token approach, the client has to present its user name and token when it sends a request to the api-server, and that user must hold a specific role: system:node-bootstrapper. So before the kubelet can initiate its certificate request, you need to bind the kubelet-bootstrap user from the bootstrap token file to that role.

Execute the following commands on the master node

# you can query the clusterrole list with the following command
kubectl -n kube-system get clusterrole
# you can review the contents of the token file
cat /etc/kubernetes/ca/kubernetes/token.csv
# create the role binding (bind user kubelet-bootstrap to role system:node-bootstrapper)
kubectl create clusterrolebinding kubelet-bootstrap \
        --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
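
If you want to confirm what this binding actually grants, a hedged pair of checks: the predefined system:node-bootstrapper role should only allow creating and reading certificate signing requests.

# show the binding that was just created
kubectl get clusterrolebinding kubelet-bootstrap -o yaml
# show the rules carried by the predefined role
kubectl get clusterrole system:node-bootstrapper -o yaml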

Create bootstrap.kubeconfig (worker nodes 102 and 103)

This configuration is used for bootstrap token authentication; it stores important authentication information such as the user and the token. The file can be generated with kubectl commands (you can also write the configuration by hand):

The token 0b1bd95b94caa5534d1d4a7318d51b0e is the important part; how it was generated was explained earlier.
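
For reference, a token file passed to the api-server with --token-auth-file is a plain csv with one line per token. A hedged sketch of what the line for this token might look like; the uid and group below are illustrative, not read from this cluster.

# format: token,user,uid,"group1,group2,..."
0b1bd95b94caa5534d1d4a7318d51b0e,kubelet-bootstrap,10001,"system:kubelet-bootstrap"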

# set the cluster parameters (remember to replace the ip)
kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/ca/ca.pem \
        --embed-certs=true \
        --server=https://192.168.68.101:6443 \
        --kubeconfig=bootstrap.kubeconfig
# set the client authentication parameters (remember to replace the token)
kubectl config set-credentials kubelet-bootstrap \
        --token=0b1bd95b94caa5534d1d4a7318d51b0e \
        --kubeconfig=bootstrap.kubeconfig
# set the context
kubectl config set-context default \
        --cluster=kubernetes \
        --user=kubelet-bootstrap \
        --kubeconfig=bootstrap.kubeconfig
# select the context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
mkdir -p /var/lib/kubelet
mkdir -p /etc/kubernetes
mkdir -p /etc/cni/net.d
# move the newly generated file to the right place
mv bootstrap.kubeconfig /etc/kubernetes/

Prepare the cni configuration (worker nodes 102 and 103)

Copy configuration

cp ~/kubernetes-starter/target/worker-node/10-calico.conf /etc/cni/net.d/

Kubelet service

Update service

cp ~/kubernetes-starter/target/worker-node/kubelet.service /lib/systemd/system/
systemctl daemon-reload
service kubelet start

Log in to the 101 master node and run the following commands to view the status

kubectl get csr
# after kubelet starts, go to the master node to let the worker join (approve the worker's tls certificate request)
#---- execute on the master node ----
kubectl get csr | grep 'Pending' | awk '{print $1}' | xargs kubectl certificate approve
#------------------------------------
# check the log
journalctl -f -u kubelet

102 has been added to the master node; in the csr list, 103 is requesting to join while 102 has already joined.
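
A hedged follow-up check on the master node: once a csr has been approved, the corresponding worker should appear in the node list.

# newly approved workers should show up here (it may take a moment to become Ready)
kubectl get nodes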

Kube-proxy (worker nodes 102 and 103)

Prepare the certificates

# the kube-proxy certificates go here
mkdir -p /etc/kubernetes/ca/kube-proxy
# prepare the proxy certificate configuration - the proxy only needs a client certificate,
# so the hosts field in the certificate request can be left empty.
# CN specifies that the certificate's User is system:kube-proxy; the predefined ClusterRoleBinding
# system:node-proxier binds User system:kube-proxy to Role system:node-proxier,
# granting permission to call the proxy-related APIs of kube-api-server
cp ~/kubernetes-starter/target/ca/kube-proxy/kube-proxy-csr.json /etc/kubernetes/ca/kube-proxy/
cd /etc/kubernetes/ca/kube-proxy/
# issue the kube-proxy certificate using the root certificate (ca.pem)
cfssl gencert \
        -ca=/etc/kubernetes/ca/ca.pem \
        -ca-key=/etc/kubernetes/ca/ca-key.pem \
        -config=/etc/kubernetes/ca/ca-config.json \
        -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# what we ultimately want are kube-proxy-key.pem and kube-proxy.pem
ls

Generate the kube-proxy.kubeconfig configuration

# set the cluster parameters (remember to replace the ip)
kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/ca/ca.pem \
        --embed-certs=true \
        --server=https://192.168.68.101:6443 \
        --kubeconfig=kube-proxy.kubeconfig
# set the client authentication parameters
kubectl config set-credentials kube-proxy \
        --client-certificate=/etc/kubernetes/ca/kube-proxy/kube-proxy.pem \
        --client-key=/etc/kubernetes/ca/kube-proxy/kube-proxy-key.pem \
        --embed-certs=true \
        --kubeconfig=kube-proxy.kubeconfig
# set the context parameters
kubectl config set-context default \
        --cluster=kubernetes \
        --user=kube-proxy \
        --kubeconfig=kube-proxy.kubeconfig
# select the context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# move it to the right place
mv kube-proxy.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig

Kube-proxy service

Start the service

mkdir -p /var/lib/kube-proxy
cp ~/kubernetes-starter/target/worker-node/kube-proxy.service /lib/systemd/system/
systemctl daemon-reload
# install dependent software
yum -y install conntrack
# start the service
service kube-proxy start
# view the log
journalctl -f -u kube-proxy
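
A hedged verification, assuming kube-proxy runs in the default iptables mode: the service rules it programs can be listed directly on the worker.

# kube-proxy programs nat rules under the KUBE-SERVICES chain (iptables mode assumed)
iptables -t nat -L KUBE-SERVICES -n | head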

12. Kube-dns

Kube-dns is a bit special because it runs inside the kubernetes cluster itself, as a kubernetes application. So its authentication and authorization differ from the previous components: it uses service account authentication and RBAC authorization.

Service account authentication:

Each service account automatically gets its own secret, which contains the ca, token, and secret information used to authenticate with the api-server.

RBAC authorization:

The permissions, roles, and role bindings are all created automatically by kubernetes. All you need to do is create a ServiceAccount named kube-dns, and that is already included in the official configuration.
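
A hedged way to see the pieces kubernetes pre-creates here: upstream releases ship a system:kube-dns clusterrole and a matching clusterrolebinding (object names assumed from upstream defaults), so the ServiceAccount really is the only part left to you.

# inspect the predefined role and binding used by kube-dns
kubectl get clusterrole system:kube-dns -o yaml
kubectl get clusterrolebinding system:kube-dns -o yaml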

Prepare the configuration file

Add variables on top of the official manifest to generate a configuration that fits our cluster; just copy it over.

cd ~/kubernetes-starter

The new configuration does not set the api-server address. Without access to the api-server, how does it know the cluster ip of each service and the endpoints of the pods? Because kubernetes injects the ip, port, and other information of all services as environment variables when it starts each container.
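
A hedged way to see this injection once a pod is running; the pod name is a placeholder, and you may need -c <container> for multi-container pods.

# every container receives <SERVICE>_SERVICE_HOST / <SERVICE>_SERVICE_PORT variables
kubectl -n kube-system exec <pod-name> -- env | grep SERVICE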

Create kube-dns (master node 101)

kubectl create -f ~/kubernetes-starter/target/services/kube-dns.yaml
# check whether the pods started successfully
kubectl -n kube-system get pods
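
Once the pods are up, two hedged checks: look at the secret generated for the kube-dns ServiceAccount, and resolve a cluster name from a throwaway pod (the secret name is a placeholder and the busybox image tag is only an example).

# the ServiceAccount's auto-generated secret (pick the real name from the list)
kubectl -n kube-system get secret | grep kube-dns
kubectl -n kube-system describe secret <kube-dns-token-secret-name>
# resolve the kubernetes service through the cluster dns
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default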

PS: with that, the deployment of the secure version of the kubernetes cluster is complete. It involves a great many details, which is why it took two blog posts; explaining every configuration item in depth would take a whole book. Treat this as an entry-level look at authentication and authorization.

Next, we will use the new cluster to review the commands learned earlier, and then pick up some new commands, new parameters, and new features.
