Preface
Before building a Kubernetes cluster from binary packages, we need to sort out one of the most exhausting parts: the various certificates. This step is the most error-prone and hardest to troubleshoot of the whole installation, yet it happens to come first. Don't let this initial hurdle put you off.
How many certificates are there?
Official documentation reference: https://kubernetes.io/docs/setup/certificates/
Let's start with Etcd:
1. Etcd provides services externally and needs a set of etcd server certificates.
2. Etcd nodes communicate with each other and need a set of etcd peer certificates.
3. Kube-apiserver needs a set of etcd client certificates to access etcd.
Then count the Kubernetes components:
4. Kube-apiserver provides services externally and needs a set of kube-apiserver server certificates.
5. Kube-scheduler, kube-controller-manager, kube-proxy, kubelet and any other components that access kube-apiserver need a set of kube-apiserver client certificates.
6. To generate service accounts, kube-controller-manager needs a key pair (in practice often the CA certificate and key) for signing service account tokens.
7. Kubelet provides services externally and needs a set of kubelet server certificates.
8. Kube-apiserver needs a set of kubelet client certificates to access kubelet.
That adds up to eight sets, but we need to be clear about what "set" means here.
Certificates in the same set must be signed by the same CA, while certificates in different sets may be signed by the same CA or by different ones. For example, all etcd server certificates must be signed by one CA, and all etcd peer certificates must be signed by one CA, but an etcd server certificate and an etcd peer certificate can be signed by two unrelated CAs: they are two different sets.
Why must the certificates in the same set be signed by the same CA?
The reason lies with the side that verifies these certificates: it can usually be configured with only one root CA. The certificate being verified therefore has to be signed by the private key of that same root CA, otherwise verification fails.
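As a quick illustration (a minimal sketch using the ca.pem and kubernetes.pem files generated later in this post, plus a hypothetical other-ca.pem belonging to an unrelated CA), openssl shows why a verifier that trusts one root CA only accepts certificates signed by that CA:
# Succeeds: kubernetes.pem was signed by the key behind ca.pem
openssl verify -CAfile ca.pem kubernetes.pem
# Fails: an unrelated CA did not sign this certificate
openssl verify -CAfile other-ca.pem kubernetes.pem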
In fact, you can build a working Kubernetes cluster with a single set of certificates (all signed by one CA), and it can run in production. It is still worth understanding how these certificates relate to each other: certificate errors lead to rejected requests or components that refuse to start, and if you do not understand the relationships and rashly replace a certificate while maintaining or troubleshooting the cluster, the whole system can be paralyzed.
TLS bootstrapping simplifies the production of kubelet certificates
Kubernetes 1.4 introduced an API for signing certificates. With this API we no longer have to prepare the kubelet certificates in advance.
Each kubelet's certificate is unique because it has to be bound to the node's own IP address, so a separate certificate must be created for every kubelet. With a large workload there will be many nodes, the number of kubelets grows, and nodes are added and removed frequently, so producing kubelet certificates by hand becomes very tedious. TLS bootstrapping saves a lot of that trouble.
How it works: when a kubelet starts for the first time, it authenticates with a shared bootstrap token. This token has been set up in advance to belong to the user group system:bootstrappers, whose permissions are limited to requesting certificates. After authenticating with the bootstrap token, the kubelet requests its own two sets of certificates (a kubelet server certificate and a kube-apiserver client certificate for the kubelet). Once the request succeeds, the kubelet uses its own certificates for authentication and thereby gains the permissions a kubelet should have. This removes the need to prepare certificates for each kubelet by hand, and the kubelet certificates can also be rotated automatically. A sketch of the bootstrap credential follows the documentation link below.
Official documentation reference: https://kubernetes.io/docs/tasks/tls/certificate-rotation/
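As a rough sketch of how that bootstrap credential is usually wired up (the token value, the bootstrap.kubeconfig file name, and the apiserver address and port are assumptions for illustration; the kubelet configuration itself is not covered in this post):
# The pre-shared token belongs to the group system:bootstrappers
BOOTSTRAP_TOKEN=<your-bootstrap-token>
KUBE_APISERVER=https://192.168.214.88:6443
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
The kubelet then points at this bootstrap.kubeconfig on first start and requests its own certificates through the certificate-signing API.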
Why kubelet certificates are different
This is done partly for auditing and partly for security. Each kubelet is both a server (kube-apiserver needs to access the kubelet) and a client (the kubelet needs to access kube-apiserver), so it needs both a server certificate and a client certificate.
The server certificate has to be bound to the server's address, and every kubelet has a different address; even where domain names are used, each kubelet is bound to a different name. So the server certificates cannot be shared.
The client certificates should not be shared either. Binding each kubelet's client certificate to the IP of the machine it runs on prevents a certificate leaked from one kubelet from being used to forge requests from another machine.
As for security: if the bootstrap token used to request certificates were kept on every node, then anyone who obtained a leaked token could request certificates at will, which is a serious risk. Therefore, once the kubelet has started successfully, the local bootstrap token should be deleted.
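For example (assuming the token was distributed as a bootstrap.kubeconfig under /etc/kubernetes/, which is a common but not universal layout):
# Remove the bootstrap credential once the kubelet has obtained its own certificates
rm -f /etc/kubernetes/bootstrap.kubeconfig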
Generating the certificates
Although you could use multiple sets of certificates, maintaining multiple CAs is too complicated, so here we use a single CA to sign all the certificates.
Certificates that need to be prepared:
admin-key.pem
admin.pem
ca-key.pem
ca.pem
kube-proxy-key.pem
kube-proxy.pem
kubernetes-key.pem
kubernetes.pem
The components that use the certificate are:
Etcd: using ca.pem, kubernetes-key.pem, kubernetes.pem
Kube-apiserver: using ca.pem, kubernetes-key.pem, kubernetes.pem
Kubelet: using ca.pem
Kube-proxy: using ca.pem, kube-proxy-key.pem, kube-proxy.pem
Kubectl: using ca.pem, admin-key.pem, admin.pem
Kube-controller-manager: using ca-key.pem, ca.pem
We use CFSSL to generate the certificates. CFSSL is an open-source PKI toolkit developed by Cloudflare; it is a complete CA service system that can sign and revoke certificates and covers the entire certificate life cycle. Here we only use its command-line tools.
Note: in general, the certificates in Kubernetes only need to be created once. When you add a new node to the cluster, you only need to copy the certificates in the /etc/kubernetes/ssl directory to the new node.
Download and install the cfssl command line tool
[root@dong bin]# mkdir -p /usr/local/bin
[root@dong bin]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@dong bin]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@dong bin]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@dong bin]# chmod +x cfssl*
[root@dong bin]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@dong bin]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@dong bin]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
[root@dong bin]# export PATH=/usr/local/bin:$PATH
Create a CA certificate
Create a directory for storing certificates
[root@dong bin]# mkdir -p /opt/kubernetes/ssl/
[root@dong bin]# cd /opt/kubernetes/ssl/
Create the certificate configuration file
[root@dong ssl]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
Field description:
ca-config.json: multiple profiles can be defined, each specifying its own expiry time, usage scenarios and other parameters; a particular profile is then selected when signing a certificate (see the sketch after this list).
signing: indicates that the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE.
server auth: indicates that a client can use this CA to verify certificates presented by servers.
client auth: indicates that a server can use this CA to verify certificates presented by clients.
expiry: the validity period (87600h is ten years).
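To illustrate the multiple-profile point above, here is a sketch of a ca-config.json with a second, made-up profile (the name "short-lived" and its 8760h expiry are only examples, not part of this setup):
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "87600h"
      },
      "short-lived": {
        "usages": ["signing", "key encipherment", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
A certificate signed with -profile=short-lived would then be a pure client certificate valid for one year, while -profile=kubernetes keeps the ten-year dual-purpose settings used throughout this post.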
Create a CA certificate signing request file
[root@dong ssl]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "K8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
Field description:
"CN": Common Name. kube-apiserver extracts this field from the certificate and uses it as the requesting user name (User Name); browsers use this field to check whether a website is legitimate.
"O": Organization. kube-apiserver extracts this field from the certificate and uses it as the group (Group) the requesting user belongs to.
Generate CA certificate and private key
[root@dong ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@dong ssl]# ls | grep ca
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
Here ca-key.pem is the CA's private key, ca.csr is the signing request, and ca.pem is the CA certificate, which is the root CA that the Kubernetes components will use later.
Create a kubernetes certificate
Create a kubernetes certificate signing request file kubernetes-csr.json
[root@dong ssl]# vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.214.88",
    "192.168.214.89",
    "192.168.214.90",
    "192.168.214.200",
    "192.168.214.201",
    "192.168.214.202",
    "10.254.0.1",
    "192.168.214.210",
    "192.168.214.1/24",
    "kubernetes",
    "kube-api.wangdong.com",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}
Field description:
The hosts field lists the IPs and domain names that are authorized to use the certificate.
Since this certificate will later be used by both the etcd cluster and the Kubernetes masters, include the IPs of the etcd and master nodes as well as the first IP of the service network (usually the first IP of the service-cluster-ip-range segment configured on kube-apiserver, for example 10.254.0.1).
My settings here cover a private image registry, three etcd nodes and three masters; change the physical node IPs above to match your own environment.
Generate kubernetes certificate and private key
[root@dong ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
[root@dong ssl]# ls | grep kubernetes
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
Create an admin certificate
Create an admin certificate signing request file admin-csr.json
[root@dong ssl]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
Description:
Later, kube-apiserver uses RBAC to authorize requests from clients (such as kubelet, kube-proxy, and Pods).
kube-apiserver predefines some RBAC bindings. For example, the ClusterRoleBinding cluster-admin binds the group system:masters to the ClusterRole cluster-admin, and that role grants permission to call every kube-apiserver API.
O sets the certificate's group to system:masters. When this certificate is used to access kube-apiserver, it passes authentication because it is signed by the CA, and because its user group is the pre-authorized system:masters, it is granted access to all APIs.
Note: this admin certificate will later be used to generate the kubeconfig file for administrators. RBAC is now the generally recommended way to control role permissions in Kubernetes; Kubernetes uses the certificate's CN field as the User and the O field as the Group. A sketch of generating such a kubeconfig follows.
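A minimal sketch of turning the admin certificate (generated in the next step) into a kubeconfig; the admin.kubeconfig file name and the apiserver address are assumptions, only the kubectl config subcommands themselves are standard:
KUBE_APISERVER=https://192.168.214.88:6443
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
Because the embedded certificate carries CN=admin and O=system:masters, any kubectl invocation using this kubeconfig is authenticated as user admin in group system:masters.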
Generate admin certificate and private key
[root@dong ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@dong ssl]# ls | grep admin
admin.csr  admin-csr.json  admin-key.pem  admin.pem
Create a kube-proxy certificate
Create a kube-proxy certificate signing request file kube-proxy-csr.json
[root@dong ssl]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "K8s",
      "OU": "System"
    }
  ]
}
Description:
CN sets the certificate's User to system:kube-proxy.
kube-apiserver's predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the ClusterRole system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs. A kubeconfig built from this certificate is sketched below.
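The same pattern, sketched for kube-proxy using the certificate generated in the next step (the kube-proxy.kubeconfig file name and the apiserver address are assumptions):
KUBE_APISERVER=https://192.168.214.88:6443
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig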
Generate kube-proxy certificate and private key
[root@dong ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@dong ssl]# ls | grep kube-proxy
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
After the steps above, the files we will actually use are:
[root@dong ssl]# ls | grep pem
admin-key.pem  admin.pem  ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  kubernetes-key.pem  kubernetes.pem
View certificate information
[root@master1 ssl]# cfssl-certinfo -cert kubernetes.pem
{
  "subject": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "K8s",
    "organizational_unit": "System",
    "locality": "BeiJing",
    "province": "BeiJing",
    "names": ["CN", "BeiJing", "BeiJing", "k8s", "System", "kubernetes"]
  },
  "issuer": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "K8s",
    "organizational_unit": "System",
    "locality": "BeiJing",
    "province": "BeiJing",
    "names": ["CN", "BeiJing", "BeiJing", "k8s", "System", "kubernetes"]
  },
  "serial_number": "321233745860282370502438768971300435157761820875",
  "sans": [
    "192.168.214.1/24",
    "kubernetes",
    "kube-api.wangdong.com",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "127.0.0.1",
    "192.168.214.88",
    "192.168.214.89",
    "192.168.214.90",
    "192.168.214.200",
    "192.168.214.201",
    "192.168.214.202",
    "10.254.0.1",
    "192.168.214.210"
  ],
  "not_before": "2019-03-12T11:26:00Z",
  "not_after": "2029-03-09T11:26:00Z",
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "CB:34:54:33:1F:F4:37:E:E5:94:B7:F5:8A:3D:F4:A4:43:43:E2:7F",
  "subject_key_id": "EC:31:D8:5F:4:E3:6F:C2:7F:DA:A8:F0:BD:A:B9:1F:56:7B:9A:DF",
  "pem": "-----BEGIN CERTIFICATE-----\nMIIExjCCA66gAwIBAgIUOESejeFvtUe1qwPcXOQdC9a6iMsw...(certificate body omitted)...\n-----END CERTIFICATE-----\n"
}
When building the cluster, distribute these files to the other machines in the cluster, for example as sketched below. At this point, the TLS certificates have all been created.
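A sketch of the distribution step (the node list and target directory are assumptions; adjust them to your own hosts):
# Copy the generated certificates to every other node in the cluster
for node in 192.168.214.89 192.168.214.90; do
  ssh root@${node} "mkdir -p /opt/kubernetes/ssl"
  scp /opt/kubernetes/ssl/*.pem root@${node}:/opt/kubernetes/ssl/
done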