SLTechnology News&Howtos > Servers
Shulou (Shulou.com) 05/31 Report, updated 2025-02-14
This article explains in detail how to play with kubernetes locally using kind. It is shared as a reference; I hope you will have a good grasp of the relevant knowledge after reading it.
Kubernetes has entered the public eye, and many people are curious about it and have picked up bits and pieces from various channels, but cannot get hands-on experience because a complete kubernetes environment requires considerable resources. Today we introduce a tool, kind, that lets you build a kubernetes environment locally and play with kubernetes happily.
kind, whose full name is Kubernetes in Docker, runs all the components of the kubernetes control plane inside a docker container and communicates locally through 127.0.0.1. This approach is only suitable for local testing and must not be used in a production environment. It is especially suitable for newcomers who want a local taste of kubernetes, and for debugging kubernetes-related components locally; for example, you can debug against a kind cluster when developing an operator.
With a Go environment

If you have a Go environment, you can install kind with the following command:

GO111MODULE="on" go get sigs.k8s.io/kind@v0.10.0

If the download is slow, you can set a proxy by adding an environment variable:

GOPROXY="https://goproxy.cn"

Without a Go environment
Linux:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.10.0/kind-linux-amd64
chmod +x ./kind
mv ./kind /some-dir-in-your-PATH/kind

Mac (homebrew):

brew install kind

Or:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.10.0/kind-darwin-amd64

Windows:

curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.10.0/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe

Basic commands
Use kind --help to see which commands are supported:
kind --help
kind creates and manages local Kubernetes clusters using Docker container 'nodes'

Usage:
  kind [command]

Available Commands:
  build       Build one of [node-image]
  completion  Output shell completion code for the specified shell (bash, zsh or fish)
  create      Creates one of [cluster]
  delete      Deletes one of [cluster]
  export      Exports one of [kubeconfig, logs]
  get         Gets one of [clusters, nodes, kubeconfig]
  help        Help about any command
  load        Loads images into nodes
  version     Prints the kind CLI version

Flags:
  -h, --help              help for kind
      --loglevel string   DEPRECATED: see -v instead
  -q, --quiet             silence all stderr output
  -v, --verbosity int32   info log verbosity
      --version           version for kind

Use "kind [command] --help" for more information about a command.
You can see that three groups of commands are supported: cluster-related, image-related, and general commands.

Cluster-related: create, delete, etc., used mainly to create and delete kubernetes clusters.

Image-related: build, load, etc., used mainly for local debugging: an image built locally can be loaded directly into the cluster, with no need to push it to an image registry and have the cluster pull it back.

General commands: get, version, etc.
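The build-then-load loop described above can be sketched as follows. The image tag myapp:dev and the presence of a Dockerfile in the current directory are assumptions for illustration, not from this article, and the script skips politely on machines without docker or kind:

```shell
# Hypothetical local-debug loop: build an image and side-load it into kind,
# avoiding the push/pull round-trip through an external registry.
if command -v docker >/dev/null 2>&1 && command -v kind >/dev/null 2>&1 && [ -f Dockerfile ]; then
  docker build -t myapp:dev .           # build locally (myapp:dev is illustrative)
  kind load docker-image myapp:dev      # copy the image into the kind node container(s)
  status=loaded
else
  echo "docker/kind/Dockerfile not available here, skipping"
  status=skipped
fi
```

A pod spec can then reference myapp:dev with imagePullPolicy: IfNotPresent, so the kubelet uses the side-loaded copy instead of trying to pull.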
kind --version:

kind --version
kind version 0.9.0
This article is written against kind 0.9.0. Now for the more interesting part; take a closer look.
Create a kubernetes cluster
Create a kubernetes cluster:
kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.19.1)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind!
A single command has started a cluster. You can list the clusters that have been created with kind get clusters.
kind get clusters
kind
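By default, kind create cluster starts a single-node cluster. kind also accepts a declarative config file; the sketch below describes one control-plane and two worker nodes. The file name kind-config.yaml is arbitrary, and the apiVersion shown is the Cluster config version used by kind releases of this era:

```yaml
# kind-config.yaml: a sketch of a multi-node local cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Create it with kind create cluster --config kind-config.yaml; each node then runs as its own docker container.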
Since it's kubernetes in docker, let's take a look at which containers have been started:
docker ps -a
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS                       NAMES
fdb88a476bb0   kindest/node:v1.19.1   "/usr/local/bin/entr…"   3 minutes ago   Up 2 minutes   127.0.0.1:43111->6443/tcp   kind-control-plane
You can see that a single control-plane container has been started. Let's go into the container and see what's inside.
[root@iZuf685opgs9oyozju9i2bZ]# docker exec -it kind-control-plane bash
root@kind-control-plane:/# ps -ef
UID        PID  PPID  C STIME TTY      TIME CMD
root         1     0  0 02:49 ?        00:00:00 /sbin/init
root       126     1  0 02:49 ?        00:00:00 /lib/systemd/systemd-journald
root       145     1  0 02:49 ?        00:00:06 /usr/local/bin/containerd
root       257     1  0 02:49 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id c1a5e2c868b9a744f4f78a85a8d660950bb76103a38e7
root       271     1  0 02:49 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3549ecade28e2dccbad5ed15a4cd2b6e6a886cd3e10ab
root       297     1  0 02:49 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 379ed27442f35696d488dd5a63cc61dc474bfa9bd08a9
root       335     1  0 02:49 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id e4eae33bf489c617c7133ada7dbd92129f3f817cb74b7
root       343   271  0 02:49 ?        00:00:00 /pause
root       360   257  0 02:49 ?        00:00:00 /pause
root       365   297  0 02:49 ?        00:00:00 /pause
root       377   335  0 02:49 ?        00:00:00 /pause
root       443   335  0 02:49 ?        00:00:01 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/
root       468   297  4 02:49 ?        00:00:17 kube-apiserver --advertise-address=172.18.0.2 --allow-privileged=true --authorization-mode=Node,RBAC --cli
root       496   271  1 02:49 ?        00:00:05 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-
root       540   257  1 02:49 ?        00:00:05 etcd --advertise-client-urls=https://172.18.0.2:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --cli
root       580     1  1 02:49 ?        00:00:06 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernete
root       673     1  0 02:50 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id b0965a6f77f58c46cfe7b30dd84ddf4bc37516ba60e6e
root       695   673  0 02:50 ?        00:00:00 /pause
root       709     1  0 02:50 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id aedf0f1fd02baf1cf2b253ad11e33e396d97cc7c53114
root       738   709  0 02:50 ?        00:00:00 /pause
root       789   673  0 02:50 ?        00:00:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-control-plane
root       798   709  0 02:50 ?        00:00:00 /bin/kindnetd
root      1011     1  0 02:50 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id aa554aa998c3091a70eacbc3e4a2f275a1e680a585d69
root      1024     1  0 02:50 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7373488f811fc5d638c2b3f5f79d953573e30a42ff52f
root      1048     1  0 02:50 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5ab6c3ef1715623e2c28fbfdecd5f4e6e2616fc20a387
root      1079  1011  0 02:50 ?        00:00:00 /pause
root      1088  1024  0 02:50 ?        00:00:00 /pause
root      1095  1048  0 02:50 ?        00:00:00 /pause
root      1152  1011  0 02:50 ?        00:00:00 /coredns -conf /etc/coredns/Corefile
root      1196  1024  0 02:50 ?        00:00:00 /coredns -conf /etc/coredns/Corefile
root      1205  1048  0 02:50 ?        00:00:00 local-path-provisioner --debug start --helper-image k8s.gcr.io/build-image/debian-base:v2.1.0 --config /et
root      1961     0  0 02:56 pts/1    00:00:00 bash
root      1969  1961  0 02:56 pts/1    00:00:00 ps -ef
root@kind-control-plane:/#
You can see that there are many processes in the container. Let's comb through them and see which components are present.

kube-apiserver: the API server component, the entry point for manipulating resources; it provides authentication, authorization, admission control, API registration and service discovery.

kube-scheduler: the scheduler component, responsible for scheduling resources and dispatching pods to appropriate nodes according to the configured scheduling policies.

kube-controller-manager: the controller-manager component, responsible for maintaining the state of the cluster, e.g. failure detection, automatic scaling and rolling updates.

etcd: the etcd component, which stores kubernetes cluster data.

/usr/bin/kubelet: the kubelet component, which manages the lifecycle, volumes and network (CNI) of the containers on the node.

/usr/local/bin/kube-proxy: the kube-proxy component, responsible for service discovery and load balancing for cluster Services.

/coredns: the DNS component, responsible for name resolution inside the cluster.

/usr/local/bin/containerd: the concrete implementation of kubernetes's CRI (container runtime); this is the component that actually creates a pod's containers.

/pause: the root container of a pod, created when the pod is created; the pod's network is configured in this container, and the pod's other containers share its network.

/usr/local/bin/containerd-shim-runc-v2: the per-container shim; each container of a pod is started and supervised by one of these processes.

You can see that this one container holds all of kubernetes's control-plane and data-plane components: an all-in-one cluster.
The detailed configuration of this container can be viewed through docker inspect kind-control-plane.
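For example, docker inspect's Go-template output can extract just the host port that maps to the API server port 6443 inside the container. This is a sketch; it assumes a running kind cluster and no-ops elsewhere:

```shell
# Extract the 127.0.0.1 host port mapped to 6443/tcp in kind-control-plane.
if command -v docker >/dev/null 2>&1 && docker inspect kind-control-plane >/dev/null 2>&1; then
  docker inspect -f '{{ (index (index .NetworkSettings.Ports "6443/tcp") 0).HostPort }}' kind-control-plane
  status=inspected
else
  echo "no docker or no kind-control-plane container here, skipping"
  status=skipped
fi
```

The printed port matches the 127.0.0.1:<port>->6443/tcp mapping shown by docker ps, which is where kubectl's kubeconfig points.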
Using the cluster
There are already plenty of articles about using kubernetes, so I won't dwell on it here and will only demonstrate briefly. You can interact with kubernetes through the API or kubectl; here we demonstrate with kubectl.
If you don't have kubectl installed locally, see the installation documentation: https://kubernetes.io/docs/tasks/tools/install-kubectl/
The basic usage of kubectl can be found in my previous article: kubectl Common commands
Taking deploying logstash as an example, we will create the following resources:
Namespace
Deployment
ConfigMap
HPA
Service
The specific yaml files are as follows:
cat logstash.yaml
---
# setting Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
# setting ConfigMap
kind: ConfigMap
apiVersion: v1
metadata:
  name: logstash-conf
  namespace: logging
data:
  logstash.conf: |
    input {
      http {
        host => "0.0.0.0" # default: 0.0.0.0
        port => 8080 # default: 8080
        response_headers => {
          "Content-Type" => "text/plain"
          "Access-Control-Allow-Origin" => "*"
          "Access-Control-Allow-Methods" => "GET, POST, DELETE, PUT"
          "Access-Control-Allow-Headers" => "authorization, content-type"
          "Access-Control-Allow-Credentials" => true
        }
      }
    }
    output {
      stdout {
        codec => rubydebug
      }
    }
---
# setting Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      volumes:
        - name: config
          configMap:
            name: logstash-conf
      hostname: logstash
      containers:
        - name: logstash
          image: russellgao/logstash:7.2.0
          args: ["-f", "/usr/share/logstash/pipeline/logstash.conf"]
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: config
              mountPath: "/usr/share/logstash/pipeline/logstash.conf"
              readOnly: true
              subPath: logstash.conf
          resources:
            requests:
              cpu: 1
              memory: 2048Mi
            limits:
              cpu: 3
              memory: 3072Mi
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
            timeoutSeconds: 15
      imagePullSecrets:
        - name: harbor
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: logstash-hpa
  namespace: logging
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta2
    kind: Deployment
    name: logstash
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 80
---
apiVersion: v1
kind: Service
metadata:
  name: logstash-custerip
  namespace: logging
spec:
  selector:
    app: logstash
  type: ClusterIP
  ports:
    - name: 'port'
      protocol: TCP
      port: 8080
      targetPort: 8080
Execute kubectl apply -f logstash.yaml:

kubectl apply -f logstash.yaml
namespace/logging created
configmap/logstash-conf created
deployment.apps/logstash created
horizontalpodautoscaler.autoscaling/logstash-hpa created
service/logstash-custerip created
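Before inspecting individual resources, you can block until the Deployment is fully rolled out. This is a hedged sketch; it skips when kubectl or the cluster from this walkthrough is unavailable:

```shell
# Wait (up to 2 minutes) for the logstash Deployment to finish rolling out.
if command -v kubectl >/dev/null 2>&1 && kubectl -n logging get deployment logstash >/dev/null 2>&1; then
  kubectl -n logging rollout status deployment/logstash --timeout=120s
  status=rolled
else
  echo "kubectl or the logging/logstash deployment not available, skipping"
  status=skipped
fi
```

rollout status exits non-zero on timeout, which makes it handy as a readiness gate in scripts.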
You can see that the resources have been created; let's look at each of them.
View ConfigMap:
kubectl -n logging get configmap
NAME            DATA   AGE
logstash-conf   1      4m
View Deployment:
kubectl -n logging get deployment
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
logstash   1/1     1            1           4m
View Pod:
kubectl -n logging get po -owide
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE   READINESS GATES
logstash-64d58c4b98-nqk4v   1/1     Running   0          93s   10.244.0.9   kind-control-plane   <none>           <none>
Note that the node where the Pod runs is kind-control-plane, not the local machine: the kubernetes node is the container. curl 10.244.0.9:8080 from the host is not reachable, since the host is outside the cluster; so enter the container and curl from there:
curl 10.244.0.9:8080 -v
* About to connect() to 10.244.0.9 port 8080 (#0)
*   Trying 10.244.0.9...
^C
[root@iZuf685opgs9oyozju9i2bZ k8s]# docker exec -it kind-control-plane bash
root@kind-control-plane:/# curl 10.244.0.9:8080 -v
*   Trying 10.244.0.9:8080...
* TCP_NODELAY set
* Connected to 10.244.0.9 (10.244.0.9) port 8080 (#0)
> GET / HTTP/1.1
> Host: 10.244.0.9:8080
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET, POST, DELETE, PUT
< Access-Control-Allow-Headers: authorization, content-type
< Access-Control-Allow-Credentials: true
< content-length: 2
< content-type: text/plain
<
* Connection #0 to host 10.244.0.9 left intact
ok

View the Service:

kubectl -n logging get svc
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
logstash-custerip   ClusterIP   10.96.234.144   <none>        8080/TCP   5m

The principle is the same as for the pod: access via the CLUSTER-IP only works inside the cluster. Access it from inside the pod:

[root@iZuf685opgs9oyozju9i2bZ k8s]# kubectl -n logging exec -it logstash-64d58c4b98-nqk4v bash
bash-4.2$ curl 10.96.234.144:8080 -v
* About to connect() to 10.96.234.144 port 8080 (#0)
*   Trying 10.96.234.144...
* Connected to 10.96.234.144 (10.96.234.144) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.96.234.144:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET, POST, DELETE, PUT
< Access-Control-Allow-Headers: authorization, content-type
< Access-Control-Allow-Credentials: true
< content-length: 2
< content-type: text/plain
<
* Connection #0 to host 10.96.234.144 left intact
ok
bash-4.2$ curl logstash-custerip:8080 -v
* About to connect() to logstash-custerip port 8080 (#0)
*   Trying 10.96.234.144...
* Connected to logstash-custerip (10.96.234.144) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: logstash-custerip:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET, POST, DELETE, PUT
< Access-Control-Allow-Headers: authorization, content-type
< Access-Control-Allow-Credentials: true
< content-length: 2
< content-type: text/plain
<
* Connection #0 to host logstash-custerip left intact
ok

View the HPA:

kubectl -n logging get hpa
NAME           REFERENCE             TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
logstash-hpa   Deployment/logstash   <unknown>/80%   1         10        1          5m

That's it for the demonstration; as you can see, it is no different from using a real kubernetes cluster. One question remains: how does the pod we just started actually run? Go back into the control-plane container and look:

root@kind-control-plane:/# ps -ef
UID        PID  PPID  C STIME TTY      TIME CMD
root         1     0  0 02:49 ?        00:00:00 /sbin/init
root       126     1  0 02:49 ?        00:00:00 /lib/systemd/systemd-journald
root       145     1  0 02:49 ?        00:01:12 /usr/local/bin/containerd
root       257     1  0 02:49 ?        00:00:03 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id c1a5e2c868b9a744f4f78a85a8d660950bb76103a38e7
root       271     1  0 02:49 ?        00:00:03 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3549ecade28e2dccbad5ed15a4cd2b6e6a886cd3e10ab
root       297     1  0 02:49 ?        00:00:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 379ed27442f35696d488dd5a63cc61dc474bfa9bd08a9
root       335     1  0 02:49 ?        00:00:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id e4eae33bf489c617c7133ada7dbd92129f3f817cb74b7
root       343   271  0 02:49 ?        00:00:00 /pause
root       360   257  0 02:49 ?        00:00:00 /pause
root       365   297  0 02:49 ?        00:00:00 /pause
root       377   335  0 02:49 ?        00:00:00 /pause
root       443   335  0 02:49 ?        00:00:43 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/
root       468   297  3 02:49 ?        00:09:25 kube-apiserver --advertise-address=172.18.0.2 --allow-privileged=true --authorization-mode=Node,RBAC --cli
root       496   271  0 02:49 ?        00:02:53 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-
root       540   257  1 02:49 ?        00:03:33 etcd --advertise-client-urls=https://172.18.0.2:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --cli
root       580     1  1 02:49 ?        00:05:07 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernete
root       673     1  0 02:50 ?        00:00:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id b0965a6f77f58c46cfe7b30dd84ddf4bc37516ba60e6e
root       695   673  0 02:50 ?        00:00:00 /pause
root       709     1  0 02:50 ?        00:00:03 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id aedf0f1fd02baf1cf2b253ad11e33e396d97cc7c53114
root       738   709  0 02:50 ?        00:00:00 /pause
root       789   673  0 02:50 ?        00:00:01 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-control-plane
root       798   709  0 02:50 ?        00:00:02 /bin/kindnetd
root      1011     1  0 02:50 ?        00:00:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id aa554aa998c3091a70eacbc3e4a2f275a1e680a585d69
root      1024     1  0 02:50 ?        00:00:03 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7373488f811fc5d638c2b3f5f79d953573e30a42ff52f
root      1048     1  0 02:50 ?        00:00:03 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5ab6c3ef1715623e2c28fbfdecd5f4e6e2616fc20a387
root      1079  1011  0 02:50 ?        00:00:00 /pause
root      1088  1024  0 02:50 ?        00:00:00 /pause
root      1095  1048  0 02:50 ?        00:00:00 /pause
root      1152  1011  0 02:50 ?        00:00:35 /coredns -conf /etc/coredns/Corefile
root      1196  1024  0 02:50 ?        00:00:35 /coredns -conf /etc/coredns/Corefile
root      1205  1048  0 02:50 ?        00:00:13 local-path-provisioner --debug start --helper-image k8s.gcr.io/build-image/debian-base:v2.1.0 --config /et
root      1961     0  0 02:56 pts/1    00:00:00 bash
root     34093     1  0 07:27 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 438c08255ede5fb7fa93b37bcbe51807d2fa5e507122b
root     34115 34093  0 07:27 ?        00:00:00 /pause
1000     34151 34093  6 07:27 ?        00:01:05 /bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatin
root     36423     0  0 07:43 pts/2    00:00:00 bash
root     36540 36423  0 07:44 pts/2    00:00:00 ps -ef

The processes with STIME 07:27 are the logstash-related ones that were just started: the logstash process launched via containerd-shim-runc-v2, with /pause as the pod's root container.

Cleaning up the environment

After you have finished experimenting or testing locally, you can delete the cluster to save resources and recreate it the next time you need it.

kind delete cluster
Deleting cluster "kind" ...

[root@iZuf685opgs9oyozju9i2bZ k8s]# docker ps -a
CONTAINER ID   IMAGE                                    COMMAND                  CREATED       STATUS      PORTS                                      NAMES
4ec800c3ec10   russellgao/openresty:1.17.8.2-5-alpine   "/usr/local/openrest…"   8 weeks ago   Up 7 days   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   openresty-app-1

[root@iZuf685opgs9oyozju9i2bZ k8s]# kubectl -n logging get po
The connection to the server localhost:8080 was refused - did you specify the right host or port?
As can be seen from the commands above:

The control-plane container (kind-control-plane) is deleted after kind delete cluster is executed.

When you run kubectl again, it can no longer find the api-server address; if you look at the ~/.kube/config file, you will find that the cluster's configuration has been removed.
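The cleanup can also be confirmed from the client side. This sketch lists the remaining kubeconfig contexts and checks whether the kind-kind context is gone (it skips when kubectl is unavailable):

```shell
# After `kind delete cluster`, the kind-kind context should no longer exist.
if command -v kubectl >/dev/null 2>&1; then
  if kubectl config get-contexts -o name 2>/dev/null | grep -q '^kind-kind$'; then
    echo "kind-kind context still present"
  else
    echo "kind-kind context removed"
  fi
  status=checked
else
  echo "kubectl not available, skipping"
  status=skipped
fi
```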
That's all for how kind lets you play with kubernetes locally. I hope the content above is of some help and lets you learn something new. If you think the article is good, feel free to share it for more people to see.