This article walks through the basics of Kubernetes (K8s) in the Docker ecosystem. It is fairly detailed and should be a useful reference; if you are interested, read it to the end!
The first category of Docker orchestration tools:
1) docker compose (Docker native): can orchestrate containers on a single host only, not across multiple hosts (see the compose sketch after this list).
2) docker swarm (Docker native): can orchestrate containers across multiple hosts.
3) docker machine (Docker native): can quickly initialize a host and join it to a docker swarm cluster.
These three are known as the Docker "Three Musketeers".
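To make the single-host case concrete, here is a minimal compose sketch; the service names and images are illustrative assumptions, not from the original article:

```yaml
# docker-compose.yml -- minimal single-host orchestration sketch
# (service names and images are illustrative assumptions)
version: "3"
services:
  web:
    image: nginx:alpine      # a web container
    ports:
      - "8080:80"            # map host port 8080 to container port 80
  cache:
    image: redis:alpine      # a second container on the same host
```

Running `docker compose up -d` starts both containers, but only on the local host, which is exactly the limitation noted above.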
The second category of Docker orchestration tools:
Mesos: not a Docker orchestration tool itself but a resource-allocation tool, so Mesos must be paired with the container orchestration framework Marathon.
The third category of Docker orchestration tools:
Kubernetes (K8s for short): this container orchestration tool holds roughly 80% of the market.
With containers and container orchestration in place, continuous integration (CI), continuous delivery (CD), and continuous deployment (CD) become practical, which is the foundation of the DevOps idea.
Note: DevOps is not a technology but a movement, a culture.
Introduction to K8s
K8s was open-sourced by Google in 2014.
Borg is Google's excellent internal container orchestration tool. K8s was developed on the basis of Borg, so it attracted enormous attention from the day it was born, and to this day it has lived up to expectations.
2017 was the most brilliant year for container technology: cloud vendors such as AWS, Microsoft Azure, and Alibaba Cloud all announced support for K8s.
The code for K8s is hosted on GitHub: https://github.com/kubernetes/kubernetes/releases
Features of K8s:
1) Automatic bin packing: container placement is computed automatically from resource requirements, without sacrificing availability (see the pod sketch after this list).
2) Self-healing: if a container crashes it can be restarted within about a second. With K8s we no longer care about individuals but about the group: if one instance breaks, it is simply replaced with a new one.
3) Automatic horizontal scaling: if one container is not enough, just start another.
4) Automatic service discovery and load balancing: the relationships between microservices are discovered automatically, and multiple instances of a containerized service are load-balanced automatically.
5) Automated rollouts and rollbacks.
6) Secret and configuration management: instead of baking configuration files into the container, each container loads its configuration from a remote server (a configuration center).
7) Storage orchestration.
8) Batch execution of jobs.
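Two of these features show up directly in a pod spec: resource requests feed the bin-packing scheduler, and a liveness probe drives self-healing. A minimal sketch, assuming a hypothetical pod name and a stock nginx image:

```yaml
# pod.yaml -- sketch of the inputs behind bin packing and self-healing
# (pod name and image are illustrative assumptions)
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: nginx:alpine
    resources:
      requests:
        cpu: "100m"          # the scheduler packs pods onto nodes using these
        memory: "64Mi"
    livenessProbe:           # kubelet restarts the container if this check fails
      httpGet:
        path: /
        port: 80
```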
K8s is a cluster with a central-node architecture, consisting of master nodes (at least three, for high availability) and nodes (the machines that actually run containers). Client requests, such as starting a container, are sent to the master first. The master runs a scheduler that analyzes the available resources (CPU, memory) of each node to find the best node on which to start the requested container.
The first component on the master is the Scheduler, which works in two steps: first, pre-selection (predicates), evaluating which nodes meet the container's requirements; second, prioritization, choosing the best node among the candidates to run the container.
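One visible way to influence the pre-selection step is a nodeSelector constraint: only nodes carrying a matching label pass the predicate phase. A sketch with a hypothetical label:

```yaml
# Only nodes labeled disktype=ssd survive pre-selection for this pod;
# the prioritization step then picks the best of the survivors.
# (pod name and label are illustrative assumptions)
apiVersion: v1
kind: Pod
metadata:
  name: picky-pod
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: app
    image: nginx:alpine
```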
If a node goes down, all containers hosted on it are gone. At that point, K8s can create containers on other nodes that are exactly the same as those on the failed node.
In addition, the master has controller components, which run in constant loops to periodically monitor the health of what they manage; there are multiple controllers (and, since there are at least three masters, redundant copies of them).
There is also a component on the master called the Controller Manager (Controller-Manager), which monitors the health of the controllers themselves.
The smallest unit that runs on K8s is not a container but a pod. A pod can be understood as a shell around containers: a pod contains one or more containers, and the containers in a pod share an underlying network namespace and storage volumes, which makes a pod feel more like a virtual machine.
Generally there is only one container in a pod; if multiple containers must share a pod, one is the main container and the others are auxiliary (sidecar) containers, set up mainly to assist some functions of the main container's program (see the pod sketch below).
All containers of a pod run on one and the same node.
A pod is the atomic unit scheduled by K8s, and it is a logical concept.
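A minimal sketch of the main-plus-sidecar pattern just described, with both containers sharing a volume (and, implicitly, the pod's network namespace); all names are illustrative assumptions:

```yaml
# Two containers in one pod: a main web server plus an auxiliary
# (sidecar) container, sharing an emptyDir volume.
# (pod, container, and volume names are illustrative assumptions)
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}               # scratch volume shared by both containers
  containers:
  - name: main                 # the main container
    image: nginx:alpine
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-helper           # the auxiliary container
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```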
End users no longer have to care which node a pod runs on; this is the cloud idea: many nodes pooled into one resource pool under unified management.
Pods should be managed by controllers whenever possible, not by hand.
Pods can be divided into two categories:
A) Autonomous pods: self-managed pods. We create the pod and hand it to the ApiServer; the scheduler then dispatches it to a node, and the kubelet component on that node starts its containers. If the node fails, the pod is simply gone.
B) Controller-managed pods (the recommended way to create pods): such a pod is an object with a lifecycle; the scheduler on the master dispatches it to a node to run or stop.
There are many kinds of pod controllers. The earliest is the Replication Controller: when we start a pod, it can start additional copies, called replicas, and it keeps the replica count exactly at the defined number: once there are too few, one more is added automatically; if there are more than defined, the surplus is stopped. The replica count should be at least two. If the node hosting a replica goes down, a new request is made to the ApiServer, and the ApiServer uses the scheduler to create a new pod on another node.
Rolling update: suppose I have a version 1.0 image and now a version 1.1 image; a controller-managed pod setup launches a new 1.1 pod first and then deletes the 1.0 pod. This is called a rolling update, and K8s likewise supports rollbacks.
Newer versions of K8s add the Replica Set controller, but it is not used directly; instead one uses Deployment, a controller for declarative updates, which is also the most frequently used. A Deployment can only manage stateless applications; stateful applications are managed by the Stateful Set controller.
The Deployment controller also supports a secondary controller called HPA (HorizontalPodAutoscaler), which scales pods horizontally and automatically: when a pod is under heavy pressure, the HPA adds several new pods to spread the load, calculated from the current CPU and memory usage; once traffic drops, the HPA automatically reduces the pod count again.
If we want to run exactly one copy per node, we use a DaemonSet controller. For one-off or scheduled jobs (backups, data cleanup, and so on) there are the Job and CronJob controllers. All of the above are pod controllers used to manage different kinds of pods (see the Deployment sketch below).
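A minimal Deployment sketch matching the rolling-update story above; the name, labels, and image tags are hypothetical stand-ins for the "1.0" and "1.1" versions in the text:

```yaml
# deployment.yaml -- keeps 2 replicas alive and rolls updates out gradually
# (deployment name, labels, and image tags are illustrative assumptions)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 2                # the controller keeps exactly this many pods
  selector:
    matchLabels:
      app: nginx             # must match the pod template's labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myrepo/nginx:1.0   # change to :1.1 to trigger a rolling update
```

Changing the image (for example with `kubectl set image deployment/nginx-deploy nginx=myrepo/nginx:1.1`) triggers the rolling update, and `kubectl rollout undo deployment/nginx-deploy` rolls it back; `kubectl autoscale deployment nginx-deploy --min=2 --max=5 --cpu-percent=80` attaches the HPA described above.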
To group pods, you can attach labels (Label) to them and then group by label.
Label Selector: a mechanism for filtering resources by their labels to find the ones that meet the requirements.
Clients find pods through a service, and the service finds its pods through a label selector. A service was originally just an iptables-based NAT routing rule, but as of K8s version 1.11 it also supports ipvs distribution rules with various scheduling algorithms, which provides load balancing. After installing K8s, you need to create a DNS pod, because service names must be resolved by a DNS server. This pod is part of K8s's infrastructure, also known as a K8s add-on (AddOns in English). The DNS resolves service names, not pod names, and the DNS records are maintained automatically by K8s without human intervention.
In a word, service addresses live in iptables or ipvs rules. A service schedules traffic; it does not start or stop containers.
The starting, stopping, and creation of pods, however, is done by controllers. For example, to run an nginx pod we first create an nginx controller, and the controller automatically creates the nginx pods for us; then we create an nginx service to publish those pods (see the service sketch below).
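A minimal service sketch publishing the pods of the Deployment above via their label; names are illustrative assumptions:

```yaml
# service.yaml -- publishes the nginx pods via their app=nginx label
# (service name and labels are illustrative assumptions)
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: ClusterIP            # internal-only; NodePort would expose it outside
  selector:
    app: nginx               # the label selector that finds the pods
  ports:
  - port: 80                 # the service's virtual port
    targetPort: 80           # the container port traffic is forwarded to
```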
There are two kinds of service: one schedules traffic for use inside the K8s cluster only (ClusterIP), and the other can also accept traffic from outside the cluster (for example NodePort).
To summarize: the service distributes traffic to pods, controllers create, start, and stop pods, and the service uses a label selector to identify its pods by their labels.
Three kinds of networks are needed to run K8s. First, all pods live on one network while services live on another; that is, service addresses and pod addresses come from different networks. A pod's address is configured in the pod's internal network namespace and can be reached by ping, but a service address is virtual, a "fake" address that exists only in iptables or ipvs rules. In addition, the nodes have their own network, making three networks in all. Traffic from outside thus first reaches the node network, then the service network, and finally the pod network.
So how do pods communicate? Multiple containers within the same pod communicate over lo (loopback); pods communicate with each other over an overlay network, which works even across hosts; and pods reach services through the gateway (i.e. the address of the docker0 bridge).
Each node runs a component called kube-proxy, which is responsible for communicating with the ApiServer. As soon as kube-proxy notices that the pod addresses behind a service have changed, it reflects the new pod addresses into iptables or ipvs. So service management is implemented by kube-proxy.
The data on the masters (note there are multiple masters) is not stored locally on the master but in a shared storage DB called etcd. Data in etcd is stored as key-value pairs, and all cluster state lives there, so etcd itself needs redundancy, usually at least three nodes. Etcd is accessed over https. Etcd uses one port for internal cluster communication and another for talking to the ApiServer, so etcd-internal communication needs its own point-to-point certificates, and communication with the ApiServer needs a second set. The ApiServer also serves clients, which requires yet another set of certificates, and likewise CA-signed certificates are needed for communication between the ApiServer and the kubelet and kube-proxy components on the nodes. Deploying K8s therefore means setting up five CAs, which is also the hardest part.
So we can classify K8s machines into three kinds of nodes: master, node (which carries the pods), and etcd (which stores cluster state), all communicating over http or https. As we saw, the networks likewise divide into the pod network, the service network, and the node network. We therefore need to build three kinds of networks, but K8s does not provide them itself; it depends on third-party CNI plug-ins.
K8s connects to networks through the CNI (Container Network Interface) plug-in system; the most common CNI plug-in today is flannel. A network plug-in really provides only two functions: one is assigning IP addresses to pods, services, and so on; the other is network policy, which can isolate communication between different pods.
The flannel plug-in supports only network configuration (assigning IP addresses), not network policy.
The CNI plug-in calico supports both network configuration and network policy, but calico is quite difficult to deploy and use.
As a result there is a third CNI plug-in, Canal, which uses flannel for network configuration and calico for network policy. These plug-ins can run as daemons on the K8s hosts or as containers inside K8s.
With namespaces, different kinds of pods can run in different namespaces; for example, you can split namespaces into a development namespace, a production namespace, and so on. Network behavior between namespaces, and between pods within the same namespace, is then defined by network policies (see the sketch below).
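A minimal network-policy sketch of the kind calico (or Canal) can enforce: only pods carrying a given label may reach the protected pods. All names, labels, and the namespace are illustrative assumptions:

```yaml
# networkpolicy.yaml -- only pods labeled role=frontend may reach
# the pods labeled app=db in the production namespace.
# (names, labels, and namespace are illustrative assumptions)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: db                # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend     # the only pods allowed in
```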
Summary of this section:
1) master/node:
A) Components on the master: API Server, Scheduler, Controller-Manager.
B) Components on a node: kubelet (communicates with the master and starts containers and other workloads on its node), a container engine that actually runs the containers (docker is the most popular, but other engines can be used), and kube-proxy (communicates with the ApiServer; as soon as it notices that the pod addresses behind a service have changed, it reflects them into iptables or ipvs, so service management is implemented by kube-proxy).
2) Pod: Label (a tag in key-value format), Label Selector.
Final architecture diagram of K8s in production (PaaS)
That is all the content of this article on K8s basics in Docker. Thank you for reading, and I hope it helps!