Kubernetes architecture
In this system architecture diagram, the services are divided into those that run on worker nodes and those that make up the cluster-level control plane.
Each Kubernetes node runs the services necessary to host application containers, and these services are managed by the Master.
Docker runs on every node and is responsible for downloading images and running containers.
Kubernetes mainly consists of the following core components:
kubectl: the command-line client of K8s, used to send user operation instructions to the cluster.
Master node
1. API Server [resource operator]: the front-end interface of the K8s cluster. Client tools and the other K8s components manage the cluster's resources through it. It exposes an HTTP/HTTPS RESTful API, i.e. the Kubernetes API.
It is the only entry point for operating on resource objects: all other components must manipulate resource data through the API it provides. Only the API Server communicates with the storage backend; every other module accesses cluster state through the API Server.
First, this guarantees the security of cluster state access.
Second, it decouples cluster state access from the back-end storage implementation: the API Server defines how state is accessed, so that access does not change when the back-end storage technology (etcd) changes.
As the entry point of the Kubernetes system, it encapsulates the create, delete, update, and query operations on core objects and exposes them to external clients and internal components through a RESTful interface. By combining "full query" with "change watching" on resource data, the related business functions are completed in real time.
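As a small illustration (a minimal sketch; kubectl proxy and the /api/v1 paths are standard, but run it against your own cluster), the same RESTful interface can be exercised directly:
[root@master ~]# kubectl proxy --port=8001 &                                # open an authenticated local tunnel to the API Server
[root@master ~]# curl http://localhost:8001/api/v1/namespaces/default/pods  # "full query": list pods via the REST API
[root@master ~]# curl "http://localhost:8001/api/v1/namespaces/default/pods?watch=true"   # "change watching" instead of polling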
2. Scheduler [cluster scheduler]: responsible for deciding which Node a Pod runs on. When scheduling, it takes full account of the cluster topology, the current load on each node, and the requirements for high availability, performance, and data affinity. Its workflow (see the sketch after this list):
1. Scheduler collects and analyzes the resource load (memory, CPU) of all Minion nodes in the current Kubernetes cluster, then distributes newly created Pods to available nodes accordingly.
2. It monitors, in real time, all Pods in the Kubernetes cluster that have and have not yet been scheduled.
3. Scheduler also monitors Minion node information; because Minion nodes are looked up frequently, Scheduler caches a copy of the latest information locally.
4. Finally, after Scheduler assigns a Pod to a Minion node, it writes the Pod's Binding information back to the API Server.
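A minimal sketch of what the scheduler weighs and where its decision shows up (the pod name is hypothetical; the fields are standard):
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical name
spec:
  containers:
  - name: app
    image: httpd
    resources:
      requests:
        cpu: "250m"         # the Scheduler only binds this Pod to a node with
        memory: "128Mi"     # at least this much free CPU and memory
[root@master ~]# kubectl describe pod demo-pod   # the Events section records the "Scheduled" decision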
3. Controller Manager [internal management and control center]: manages the various resources in the cluster and ensures they stay in the expected state. It consists of multiple controllers, including the Replication Controller, Endpoints Controller, Namespace Controller, ServiceAccounts Controller, and so on.
It automates cluster fault detection and recovery and is responsible for running the various controllers, chiefly:
1. endpoint-controller: periodically associates Services and Pods (the association is maintained by Endpoints objects) so that the Service-to-Pod mapping is always up to date.
2. replication-controller: periodically reconciles each ReplicationController with its Pods, ensuring that the number of replicas it defines always matches the number of Pods actually running.
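A minimal sketch of the state that replication-controller reconciles (names are hypothetical; replicas is the desired state it enforces):
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc              # hypothetical name
spec:
  replicas: 3               # the controller keeps exactly 3 matching Pods running
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web            # Pods carrying this label count toward the 3 replicas
    spec:
      containers:
      - name: httpd
        image: httpd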
4. etcd: responsible for saving the K8s cluster's configuration and the status information of its various resources. When data changes, etcd quickly notifies the relevant K8s components. (It is a third-party component with alternative solutions such as Consul and ZooKeeper.)
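For a peek at how cluster state lands in etcd (a sketch assuming the etcd v3 API on a master node; add the TLS flags --cacert/--cert/--key as your deployment requires):
[root@master ~]# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 get /registry/pods --prefix --keys-only   # keys under which Pod state is stored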
5. Pod: the smallest deployable unit in a K8s cluster. A Pod can run one or more containers; in most cases a Pod contains only a single container.
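A minimal sketch of a two-container Pod (names are hypothetical; both containers share the Pod's IP and volumes):
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar    # hypothetical name
spec:
  containers:
  - name: web
    image: httpd            # main container
  - name: log-agent         # sidecar container in the same Pod
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]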
6. Flannel: a K8s cluster network that enables cross-host communication between Pods. It, too, has alternatives (for example, Calico).
[root@master ~]# kubectl get pod --all-namespaces            # view pod information
[root@master ~]# kubectl get pod --all-namespaces -o wide    # also display the node each pod runs on
Node (worker) node
Kubelet [the Pod butler on the node]: the agent of the Node. When the Scheduler decides that a Pod will run on a certain Node, it sends the Pod's concrete configuration to that node's kubelet; the kubelet creates and runs the containers based on this information and reports the running status to the Master.
It is responsible for the full lifecycle management (creation, modification, monitoring, deletion) of Pods on the Node and periodically reports the Node's status information to the API Server. The kubelet is the bridge between the Master's API Server and the Minion: it receives the commands and work assigned to it by the API Server, interacts with the persistent key store (etcd), files, servers, and HTTP endpoints, and reads configuration information from them. Its specific work is as follows:
Set the containers' environment variables, bind Volumes and Ports to containers, run individual containers according to the specified Pod, and create the network container for the specified Pod.
Synchronize Pod status, and obtain container info, pod info, root info, and machine info from cAdvisor.
Run commands in containers, kill containers, and delete all of a Pod's containers.
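To see what the kubelet reports upstream (the node name is hypothetical; systemctl applies to typical systemd-based installs):
[root@master ~]# kubectl describe node node01    # conditions, capacity, and the Pod list reported by the kubelet
[root@node01 ~]# systemctl status kubelet        # the kubelet itself runs as a service on each node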
Kube-proxy [load balancing, routing and forwarding]: forwards the TCP/UDP traffic that accesses a Service to the backend containers. If there is more than one replica, kube-proxy performs load balancing across them.
Proxy runs on every Node and exists to let external networks access the application services that containers provide across the machine cluster. It proxies TCP/UDP sockets. Each time a Service is created, Proxy obtains the configuration of Services and Endpoints, mainly from etcd (it can also read it from file), then starts a proxy process on the Minion according to that configuration and listens on the corresponding service port. When an external request arrives, Proxy distributes it to the correct backend container according to the load balancer. Proxy not only resolves conflicts between identical service ports on the same host, but also provides the ability to forward service ports and expose services externally. The Proxy backend uses random and round-robin load-balancing algorithms.
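A minimal sketch of the Service that kube-proxy balances (the deployment name matches the test-web example later in this article; the service name is hypothetical):
[root@master ~]# kubectl expose deployment test-web --port=80 --name=test-web-svc   # put a Service in front of the replicas
[root@master ~]# kubectl get endpoints test-web-svc                                 # the backend Pod IPs kube-proxy forwards to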
In addition to the core components, there are some recommended add-ons:
Kube-dns provides DNS services for the entire cluster.
Ingress Controller provides public network access for services.
Heapster provides resource monitoring.
Dashboard provides a GUI.
Federation provides clusters across availability zones.
Fluentd-elasticsearch provides cluster log collection, storage, and querying.
1. Hierarchical architecture
The design philosophy and functionality of Kubernetes in fact form a hierarchical architecture similar to Linux's, as shown in the figure below.
Core layer: Kubernetes' core functionality; it exposes APIs on which higher-level applications are built and provides a plug-in application execution environment internally.
Application layer: deployment (stateless applications, stateful applications, batch tasks, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
Management layer: system metrics (such as infrastructure, container, and network metrics), automation (such as automatic scaling and dynamic provisioning), and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
Interface layer: the kubectl command-line tool, client SDKs, and cluster federation
Ecosystem: the large ecosystem of container-cluster management and scheduling above the interface layer, which can be divided into two categories:
External to Kubernetes: logging, monitoring, configuration management, CI, CD, Workflow, FaaS, OTS applications, ChatOps, etc.
Internal to Kubernetes: CRI, CNI, CVI, the image repository, Cloud Provider, and configuration and management of the cluster itself, etc.
2. Run a container application in K8s
Let's take a look at how the K8s components work together by running a container application.
After developing an application, the developer packages it as a Docker image and uploads it to a Docker registry, then writes a YAML deployment description file that describes the application's structure and resource requirements. The developer submits the deployment description to the API server using kubectl (or another client), and the API server saves the deployment requirements in etcd. etcd's role in the K8s management node is like that of a database: the data that other components submit to the API server is stored in etcd. The API server is very lightweight and does not directly create or manage resources such as Pods; in most scenarios it does not even actively call other K8s components to issue instructions. Instead, the other components establish long-lived connections to the API server to watch the objects they care about, observe changes, and carry out the operations they are responsible for.
Continuing our journey of launching the application, as shown in the figure: after a controller in the Controller Manager observes the new deployment description, it creates ReplicaSet, Pod, and other resources according to it. After the Scheduler observes the new Pod resources, it selects one or more worker nodes to run the Pods, taking the cluster's resources into account. When the kubelet on a worker node observes that a Pod has been scheduled onto its own node, it instructs the container runtime, such as Docker, to start the containers. The Docker engine pulls the image from the Docker registry as instructed, then starts and runs the container.
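A minimal sketch of such a deployment description and its submission (the names and file name are hypothetical; the flow mirrors the steps just described):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-web            # hypothetical name
spec:
  replicas: 2               # the controller creates a ReplicaSet that maintains 2 Pods
  selector:
    matchLabels:
      app: test-web
  template:
    metadata:
      labels:
        app: test-web
    spec:
      containers:
      - name: httpd
        image: httpd        # the image the kubelet asks Docker to pull and run
[root@master ~]# kubectl apply -f test-web.yaml    # submit the description to the API server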
3. High-availability deployment of a K8s cluster
Through the preceding introduction we have seen that K8s can launch and manage containers on multiple worker nodes. Now let's look at how to deploy the management nodes for high availability.
The K8s high-availability deployment in the figure above has three management nodes. etcd itself is a distributed data storage system; following its multi-instance deployment scheme, each node only needs to know the IPs and port numbers of the other nodes at startup to form a highly available environment. Like a typical application server, the API Server is stateless, so any number of instances can run without knowing about each other. To connect clients such as kubectl and components such as the kubelet to a healthy API Server, and to reduce the pressure on any single API Server, a load balancer provided by the infrastructure is used as the entry point to the multiple API Server instances. In the deployment shown above, an etcd instance runs on each master node, so each API Server only needs to connect to its local etcd instance and no load balancer is needed in front of etcd.
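One way to realize this topology (a sketch assuming a kubeadm-based install; the endpoint is a placeholder for your infrastructure load balancer):
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "k8s-lb.example.com:6443"   # placeholder: VIP/DNS name of the API Server load balancer
etcd:
  local: {}                                       # run a local etcd member on each control-plane node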
The Controller Manager and Scheduler need to modify the K8s cluster, which can cause concurrency problems: if two ReplicaSet controllers both observed that a Pod needs to be created and both performed the creation at the same time, two Pods would be created. To avoid this, K8s has the set of instances of each such component elect a leader; only the leader is active while the other instances stand by. The Controller Manager and Scheduler can also be deployed independently of the API server, connecting to the multiple API server instances through a load balancer.
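The active leader can be inspected through the election record these components maintain (a sketch; depending on the K8s version the record lives on an Endpoints or a Lease object in kube-system):
[root@master ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml   # the control-plane.alpha.kubernetes.io/leader annotation names the active instance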
The following example illustrates the role of each component and the architecture's workflow:
1) kubectl sends a deployment request to the API server.
2) The API server notifies the Controller Manager to create a Deployment resource.
3) The Scheduler performs the scheduling task and distributes the two Pod replicas to node01 and node02.
4) The kubelets on node01 and node02 create and run the Pods on their respective nodes.
Supplement
1. The application's configuration and current status information are saved in etcd; when kubectl get pod is executed, the API server reads this data from etcd.
2. flannel assigns an IP to each Pod. However, since no Service resource has been created at this point, kube-proxy is not yet involved.
Run an example (create a deployment resource object):
[root@master ~]# kubectl run test-web --image=httpd --replicas=2    # create a deployment resource object
After the operation completes, the Pods start right away if the image is already present on the nodes; if not, you need to wait a while for the nodes to download it from Docker Hub.
Check it:
[root@master ~]# kubectl get deployments      # or: kubectl get deploy
[root@master ~]# kubectl get pod
[root@master ~]# kubectl get pod -o wide      # also display the node each pod runs on
If a node is not running the test-web service, you need to restart it on that node.
If you delete a pod:
[root@master ~]# kubectl delete pod test-web-5b56bdff65-2njqf
and then check again:
[root@master ~]# kubectl get pod -o wide
you will find that a container still exists (under a new name): the controller detects that the actual state no longer matches the previously declared desired state and automatically restores it.
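The same self-healing can be watched live:
[root@master ~]# kubectl get pod -w    # watch: the deleted pod goes Terminating while the controller creates its replacement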