This article explains the core concepts of Kubernetes. The content is detailed and the logic is straightforward; many people know less about this topic than they think, so I am sharing it for your reference. I hope you get something out of it. Let's take a look.
Container technology is one of the core technologies behind microservices, and it has rapidly become mainstream along with the popularity of microservices. Docker is the pioneer of container technology: after it appeared it quickly took over the market and almost became synonymous with containers. But at first it did not solve the problem of managing container clusters very well. Kubernetes (K8s) seized this opportunity and emerged as a container orchestrator (Container Orchestration) that manages and schedules container clusters, and it has since beaten Docker to become the de facto standard of container technology. Of course, Docker is still used inside K8s, but its scope has been greatly reduced: it is only responsible for the underlying container engine and Docker image management, an indispensable but no longer prominent part of the container stack, while K8s provides most of the externally facing interfaces.
K8s core objects:
Compared with Docker, which is simple and easy to learn, the K8s system is complex: it has many concepts, and there are often several different ways to accomplish the same thing, which can leave you at a loss and makes it much harder to learn. For an ordinary programmer there is no need to build a complete production environment; a local development environment is enough, and for that you only need to understand the core concepts of K8s, which greatly shortens the learning time. Everything in K8s is an Object, and there are only four core ones: Pod, Deployment, Service, and Node. Add the container image (Docker Image), which is the core of the Docker engine, and you have five core concepts; once you have mastered them, you have a basic understanding of container technology and a solid foundation to build on.
Windows installation environment:
Windows 10 Enterprise, Professional, and Education editions support installing K8s directly, but the computer must support Hyper-V; see here for details. Since my Windows is the Home edition, I can only install a virtual machine (VirtualBox) first and then install K8s inside it. The K8s I use is Minikube, a simplified single-node version of K8s. I also installed Vagrant (a tool for managing virtual machines) as the interface for managing VirtualBox.
Container image (Docker Image):
Every program runs in a container, and K8s supports multiple container runtimes, of which Docker is the most popular. A container image (Docker Image) is a runtime environment packaged as files; it is layered internally, and each layer adds new functionality on top of the previous one. A container image is created from a Dockerfile, a file containing a sequence of Docker instructions (similar to Linux commands). When the Dockerfile is built, the instructions are executed in order and the required container image is generated. You then call the Docker command (docker run) to run the container image, which produces a Docker container, and the application is deployed inside that container. This approach guarantees that the environment is identical every time. What makes a container stronger than a virtual machine is that it uses fewer system resources and takes less time to create: creating a container generally takes seconds, while a virtual machine takes minutes. How quickly a container image builds depends on its size; generally, the smaller the image, the shorter the build time. Here is a Dockerfile example for Nginx.
FROM alpine:3.2
EXPOSE 80 443
RUN apk add --update nginx && \
    rm -rf /var/cache/apk/* && \
    mkdir -p /tmp/nginx/client-body
COPY ./nginx.conf /etc/nginx/nginx.conf
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d"]
CMD ["nginx", "-g", "daemon off;"]
The first line of any Dockerfile is always FROM, which selects the base Linux runtime environment. Generally speaking, there are the following options:
FROM ubuntu:18.04: creates an image based on a specific Linux distribution version. Such a Linux runtime environment is relatively complete, and the image is around 100 MB.
FROM alpine:latest: Alpine is a streamlined Linux runtime environment of about 10 MB.
FROM scratch: scratch is the smallest possible base image, which is very fast to build from, but the problem is that you cannot log in to the container through a shell, so I generally don't use it.
When managing a virtual machine with Vagrant, you first start the VM with the vagrant up command, then type vagrant ssh to enter it, and the system displays:
PS E:\app2\kub> vagrant ssh
Last login: Sat Sep 28 06:56:11 2019 from 10.0.2.2
Then type "docker run-- name docker-nginx-p 8001 nginx 80" to run the Nginx mirror. " Docker-nginx "is the name of the container, and"-- name "is the parameter option for the name." "- p" indicates port mapping, mapping the virtual machine's "8001" port to the container's "80" port (the default port for Nginx). " "nginx" is the name of the image. If the "Nginx" image is not found locally, the system will automatically download the Nginx image from the Docker image library to the local, and then run it. This image has a relatively complete Linux system, so the file is relatively large (100m), but it can also be used alive. After the command runs, it displays:
vagrant@ubuntu-xenial:~$ docker run --name docker-nginx -p 8001:80 nginx
At this point Nginx is running, but since there are no requests yet there is no output on the console. If a container named "docker-nginx" has been run before, you need to delete the old one before running the command above. Type "docker ps -a" to list all containers that have been run, then type "docker rm 1ec2e3d63537" to delete one, where "1ec2e3d63537" is the container ID.
Switch to another virtual machine window, type curl localhost:8001, and it displays:
vagrant@ubuntu-xenial:/usr/bin$ curl localhost:8001
...
Welcome to nginx!
...
The Nginx home page appears, indicating that Nginx inside the Docker container is running properly.
Switch back to the original container display window, and when there is a request, the console outputs the Nginx log.
vagrant@ubuntu-xenial:~$ docker run --name docker-nginx -p 8001:80 nginx
172.17.0.1 - - [28/Sep/2019:07:02:25 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-"
Now that you have verified that the Docker image works, press CTRL-C to exit.
Pod:
A Pod is the most basic concept in K8s; you can think of it as a wrapper around containers, used to manage them. A Pod can hold one or more containers, but usually just one. All containers in a Pod share the Pod's resources and network. The Pod is also the smallest unit of replication: when one Pod cannot keep up with user demand, identical Pods can be replicated to handle the requests. Pods support several container runtimes, but Docker is generally used. You can create a Pod with its own configuration file, or you can embed the Pod's configuration in the configuration file of another object (such as a Deployment) and let that object create it, which is more common. Each Pod has a unique IP address, and once a Pod is created it can be reached at that address. In practice, however, Pods are usually accessed indirectly through a Service instead. Below is a configuration file for a Pod; its explanation is given in the Deployment section that follows.
kind: Pod
apiVersion: v1
metadata:
  name: nginx-pod        # hypothetical name, added so the file can be created with kubectl
  labels:
    app: nginx-app
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
  restartPolicy: Never
Pod template (Pod Templates):
A Pod template is Pod configuration embedded in another K8s object (Object), such as a Replication Controller, Job, or DaemonSet. In that case the Pod is not created on its own but by the enclosing object, the most common of which is the Deployment.
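As an illustration, a minimal sketch of a Job that embeds a Pod template might look like the following; the name hello-job, the alpine image, and the echo command are assumptions made only for this example:
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job                 # hypothetical name
spec:
  template:                       # the embedded Pod template starts here
    metadata:
      labels:
        app: hello-job
    spec:
      containers:
      - name: hello
        image: alpine:latest      # hypothetical image for the example
        command: ["echo", "hello from a Job-managed Pod"]
      restartPolicy: Never        # Jobs require Never or OnFailure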
Deployment:
A Deployment is a higher-level object than a Pod. Its main role is to manage a group of Pods; it can contain one or more Pods, each of which is functionally equivalent. Usually multiple Pods are configured in a Deployment to achieve load balancing and fault tolerance. When configuring a Deployment you specify the number of Pod replicas, and the Deployment automatically manages the Pods it owns: when a Pod goes down, the Deployment automatically creates a new Pod to replace it.
Below is the configuration file (nginx-deployment.yaml) for the Deployment. The part starting with template is the Pod configuration embedded in the Deployment; everything above it configures the Deployment itself. When you run this configuration file, it creates the Deployment together with the embedded Pods. This is why we generally do not need a separate Pod configuration file: it is already embedded in the Deployment.
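A minimal sketch of such a file, reconstructed from the surrounding text (the replicas: 2 count, the nginx-deployment name, and the app: nginx-app label match the command output and the Service discussion later; the original file may differ in details):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                      # two Pod replicas, matching the two Pods listed later
  selector:
    matchLabels:
      app: nginx-app
  template:                        # the embedded Pod template starts here
    metadata:
      labels:
        app: nginx-app             # the label the Service selector matches on
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80        # Nginx's default port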
Type "kubectl create-f nginx-deployment.yaml" to run the deployment, which shows:
vagrant@ubuntu-xenial:~/dockerimages/kubernetes/nginx$ kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
The deployment has now succeeded and can be accessed. Each Pod has its own IP address inside the K8s cluster, so for now we can only reach it from inside K8s using that IP address. Type the following command to see the Pod addresses; there are two Nginx Pods because the Deployment is configured with two replicas.
vagrant@ubuntu-xenial:~/nginx$ kubectl get pods -o custom-columns=NAME:.metadata.name,IP:.status.podIP
NAME                                IP
hello-minikube-856979d68c-74c65     172.17.0.3
nginx-deployment-77fff558d7-bhbbt   172.17.0.10
nginx-deployment-77fff558d7-v6zqw   172.17.0.9
Open another window, connect to the virtual machine, and type the command "kubectl exec -ti hello-minikube-856979d68c-74c65 -- /bin/sh" to open a shell inside the cluster, from which you can access Nginx. Here "hello-minikube-856979d68c-74c65" is the name of the "hello-minikube" Pod, and "172.17.0.10" is the internal IP address of one of the Nginx Pods.
vagrant@ubuntu-xenial:~$ kubectl exec -ti hello-minikube-856979d68c-74c65 -- /bin/sh
# curl 172.17.0.10
Service:
Service is a top-level K8s object; you can think of it as what we usually call a microservice. The most important part of a service is service registration and discovery, and in K8s the Service object implements this. Generally speaking, to invoke a service you need to know three things: the IP address, the protocol, and the port, for example "http://10.0.2.1:80". But instead of using an IP we want to address the service by name, which requires DNS; DNS implements name-based addressing inside the K8s cluster. Below is the configuration file (nginx-service.yaml) for the Service. The Service binds to Pods through its "selector": here it forwards requests to Pods whose "app" label is "nginx-app". "nodePort" opens an externally accessible port for the service, so it can be reached directly from the virtual machine without logging in to the K8s cluster.
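A minimal sketch of nginx-service.yaml consistent with the surrounding text; the nodePort value 30163 matches the output shown below, although the original file may have omitted it and let K8s assign the port automatically:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort                   # exposes the service on a port of every Node
  selector:
    app: nginx-app                 # forwards traffic to Pods carrying this label
  ports:
  - port: 80                       # the service's internal port
    targetPort: 80                 # the container port on the Pods
    nodePort: 30163                # the externally accessible port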
Run the command "kubectl create -f nginx-service.yaml" to create the service.
vagrant@ubuntu-xenial:~/$ kubectl create -f nginx-service.yaml
service/nginx-service created
After the service is created, run the following command to list all current services; the "nginx-service" service now appears. "80" is the service's internal port and "30163" is its external port.
vagrant@ubuntu-xenial:~/$ kubectl get services
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP        35d
nginx-service   NodePort    10.109.7.249   <none>        80:30163/TCP   18s
Because the port has been exposed through "NodePort", there is no need to log in to K8s to reach the service; you can access it directly on the virtual machine:
curl localhost:30163
The difference between Service and Deployment:
A Deployment is used to manage a Pod cluster and is bound to its Pods, but you cannot send requests to a Deployment directly. A Service exists so that the Pods can be reached by name rather than by IP address, but only within the cluster: you can log in to the K8s cluster and type the following command to access the service by name, while outside K8s (on the virtual machine) you can only use the IP address, because the cluster DNS only works inside K8s.
# curl nginx-service
Node:
Node is easy to confuse with Pod at first, but a Node is the equivalent of a virtual or physical machine, while a Pod is the equivalent of a container; all Pods (and their containers) are deployed on top of Nodes. Minikube supports only a single Node, but you can deploy multiple Pods on it. The following is a schematic diagram of a Node.
(Figure: schematic diagram of a Node; picture source in the original article.)
The relationships between the K8s core concepts:
The relationship between objects:
The following is a simple structure diagram of K8s:
(Figure: K8s cluster structure diagram; picture source in the original article.)
Each diamond in the figure is a Node; the one in the middle is the Master Node and the other three are Worker Nodes. The Master Node is responsible for managing the Worker Nodes, which are the Nodes that do the real work. The Master Node contains various controllers, and Deployments are managed by these controllers. The Master Node in the middle manages two Deployments, A and B, corresponding to Service A and Service B. Service A is deployed on the bottom Node and has only one Pod. Service B is deployed on the two upper Nodes: the Node on the left has one Pod and the Node on the right has two, making a cluster of three Pods. When a Deployment has multiple Pods, they are usually spread across different Nodes, so that even if one Node goes down the Deployment remains available.
Object binding:
The following figure is a diagram of how objects are bound.
(Figure: how objects are bound through labels; picture source in the original article.)
Each object can carry multiple labels (Label), and here the "app" label is used to identify Pod objects. In the figure there are two sets of Pods whose "app" labels have the values "A" and "B"; the "B" Pods form a cluster while "A" has a single Pod. Both the Service and the Deployment bind to the matching Pods through a label selector (Label Selector), as the sketch below illustrates.
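For illustration only, here is a sketch of this binding using the figure's "A" label rather than the earlier nginx example; the names pod-a and service-a are hypothetical:
# A Pod carrying the label the figure calls "A"
kind: Pod
apiVersion: v1
metadata:
  name: pod-a                # hypothetical name
  labels:
    app: A
spec:
  containers:
  - name: app-a
    image: nginx:latest
---
# Service A binds to every Pod whose "app" label equals "A"
kind: Service
apiVersion: v1
metadata:
  name: service-a            # hypothetical name
spec:
  selector:
    app: A                   # the label selector doing the binding
  ports:
  - port: 80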
That is all of the article "What are the core concepts of Kubernetes". Thank you for reading! I hope you got something useful out of it.