A First Look at Kubernetes (Architecture Overview)


I. Overview of Kubernetes and DevOps

1. Why use kubernetes

Before Docker appeared, installing and deploying applications — say an nginx + php web stack — was done by hand, which was tedious and time-consuming. Then operation-and-maintenance tools such as Ansible appeared. Ansible is really an application orchestration tool: it can install, configure and start software, and with a defined playbook it can even quickly deploy a whole set of applications together with their dependencies, replacing tedious manual work. However, such tools operate on applications deployed directly in the operating system. With the advent of Docker, applications are packaged and run inside containers (containerized). Because the object being operated on changes from the application itself to the application inside a container, the control and management interfaces change as well, so tools like Ansible cannot schedule the running of containers. Tools designed specifically for container orchestration appeared later.

1.1 Three common container orchestration tools

Docker Compose: developed by Docker itself, it can only orchestrate containers on a single Docker host. To make this kind of orchestration work across multiple machines, Docker later introduced the Docker Swarm and Docker Machine components.

Mesos: an open-source distributed resource-management framework under the Apache Foundation. It can schedule and allocate the computing resources of all the hardware in a data center, but the interface it exposes is for resource allocation, not for running containers, so it cannot host containers directly. Container orchestration on top of Mesos therefore relies on an additional framework such as Marathon.

Kubernetes: an open-source container orchestration engine released by Google in 2014 for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient. By now Kubernetes accounts for roughly 80% of the container orchestration market.

2. The DevOps concept

DevOps (a combination of Development and Operations) is a general term for a set of processes, methods, cultures and systems. DevOps focuses on a complete process solution that combines continuous integration, continuous delivery and continuous deployment.

CI: Continuous Integration

CD: Continuous Delivery

CD: Continuous Deployment

2.1 The relationship between DevOps and Docker

The emergence of Docker containers and container orchestration tools makes it easier to implement the DevOps process (continuous integration, continuous delivery, continuous deployment). Previously we had to build an application differently for each target environment and deploy it in different ways. With Docker this is no longer a concern: build once, run anywhere. We build the application once, as an image, and it can then run on any target host that has Docker, without our caring about that host's environment.

Although Docker supports the DevOps culture well, it also brings a drawback. With many microservices we may have to deal with service failures every day, and the dependency and invocation relationships between services are complex, which makes troubleshooting much harder. To handle this well, we need a container orchestration tool.

II. Kubernetes

1. The origin of kubernetes

First released in 2014, Kubernetes was written in the Go language by several Google engineers, drawing on Borg, Google's powerful internal container orchestration system. As container technology became widely adopted, Kubernetes developed rapidly within just a few years and became very popular. Kubernetes 1.0 was released in 2015, and the project has since reached version 1.12. 2017 was a landmark year in the history of container technology: during that year, major cloud providers such as AWS, Microsoft Azure and Alibaba Cloud (Aliyun) announced native support for Kubernetes, offering managed Kubernetes services on which users can deploy applications directly and get a container-level service environment. With the backing of these large cloud vendors, Kubernetes gained broad recognition and support in the industry. In October 2017, Docker itself announced native support for both Swarm and Kubernetes in its enterprise edition.

2. Characteristics of kubernetes

Automatic bin packing: based on resource requirements and other constraints, containers are placed onto nodes automatically without sacrificing availability.

Self-healing: when a container crashes, Kubernetes can quickly start a replacement — thanks to the lightweight nature of containers, often in about a second.

Automatic horizontal scaling: Kubernetes can keep adding containers as long as the underlying physical resources can support them.

Service discovery and load balancing: when many applications run on K8s, a service can automatically discover the services it depends on, and when a service is backed by multiple containers, requests are load-balanced across them automatically.

Automated rollouts and rollbacks: applications can be updated gradually and rolled back to a previous version if something goes wrong.

Secret and configuration management: K8s stores the configuration of applications centrally; when a container starts, it loads the corresponding configuration from this configuration store (a minimal sketch follows this list).

Storage orchestration: storage volumes are created automatically according to the needs of the containers.

Batch execution: one-off tasks and batch jobs can be run and managed alongside long-running services.
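
As a concrete illustration of the configuration-management feature above, the sketch below defines a ConfigMap and a Pod that loads a value from it as an environment variable when the container starts. All names here (app-config, demo-app, APP_MODE) are illustrative assumptions, not taken from the article.

```yaml
# Minimal sketch of centralised configuration: the ConfigMap holds the setting,
# and the Pod reads it at container start-up. Names are illustrative assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: "production"        # the value kept in the configuration store
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: nginx:1.25
    env:
    - name: APP_MODE            # injected into the container at start-up
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: APP_MODE
```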

3. Kubernetes architecture

From a traditional operations point of view, Kubernetes is a cluster: it combines the resources of multiple hosts into one large resource pool and offers compute, storage and other capabilities. The K8s components are installed on every host and cooperate so that the many hosts can be used as if they were a single machine. Within a K8s cluster, however, hosts have roles: K8s is a cluster system with a central-node architecture (the master/nodes model). A cluster typically has three master nodes for high availability and N worker nodes that provide the compute and storage capacity to run containers. The master accepts requests from clients, analyses the available resources on each node, and dispatches each request to the node best able to run the requested container. The chosen node first checks whether the image exists locally (pulling it from the registry if not) and finally starts the container inside a Pod, the smallest scheduling unit on a node. That is the functional model of Kubernetes.

3.1 Core components of the master node

API Server: responsible for receiving, parsing and processing all requests in the cluster, and for recording cluster state information.

Scheduler: responsible for watching the total CPU, memory and storage resources available on each node and, based on the resources requested for the container the user wants to create, choosing a suitable node among the many candidates. Kubernetes uses a two-step scheduling process: the first step is pre-selection, evaluating every node and keeping all the nodes that meet the requirements; the second step applies the priority (scoring) algorithms to the pre-selected nodes and picks the best one.
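
To make the two-step scheduling concrete, the sketch below shows the resource requests the scheduler evaluates: nodes without enough free CPU or memory are filtered out in pre-selection, and the remaining nodes are scored. The names and numbers are assumptions for illustration only.

```yaml
# A Pod whose resource requests drive scheduling: nodes that cannot satisfy the
# requests are filtered out in pre-selection, the rest are scored to pick the
# best one. Names and values are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: sched-demo
spec:
  containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:            # used by the scheduler for filtering and scoring
        cpu: "500m"
        memory: "256Mi"
      limits:              # enforced at runtime, not by the scheduler
        cpu: "1"
        memory: "512Mi"
```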

Controller Manager: responsible for monitoring the health of each controller; if a controller becomes unhealthy, the Controller Manager recreates one to take over. If the Controller Manager itself goes down, a Controller Manager on another master node takes over.

3.2 etcd

etcd is a key/value database, similar to Redis, but with a cluster leader-election capability that Redis lacks. It stores the cluster state information handled by the API Server (persisted to shared storage), so that if the primary master node goes down, the other master nodes can still read the previous cluster state. etcd exposes a RESTful interface and communicates over http or https. In a K8s cluster, etcd itself must be highly available, so that the failure of a single etcd member does not prevent leader election or cause the cluster state to be lost.

3.3 Core components of a node

Kubelet: responsible for communicating with the master and for accepting and executing the tasks the master dispatches, which may include creating Pods, managing Pod health, creating storage volumes, starting containers, and so on.

Container engine: typically Docker; as the container engine it is responsible for actually running the containers inside a Pod.

Service: in a K8s cluster, Pods fail and are recreated, so their hostnames and IP addresses change frequently, and clients would lose the ability to reach the recreated Pods. K8s therefore puts an intermediate layer, the Service, between clients and each group of Pods that provide the same function. The Service proxies client requests to the many Pods behind it, so it also acts as a scheduler. After a Pod is recreated, the Service uses a label selector to associate the new Pod object and automatically discovers its new IP address, port, hostname and other information. A Service is not a process or component of K8s: it is a set of DNAT rules in iptables, and since version 1.11 it can be implemented with IPVS rules instead.
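
For illustration, a minimal Service of this kind might look like the sketch below: it selects its backend Pods by label and proxies client traffic to them. The names, labels and ports are assumptions made up for the example.

```yaml
# A Service that uses a label selector (app: myapp) to find its backend Pods
# and load-balances client requests across them.
# Names, labels and ports are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  selector:
    app: myapp          # any Pod carrying this label becomes a backend
  ports:
  - port: 80            # the Service (cluster) port clients connect to
    targetPort: 8080    # the container port traffic is forwarded to
```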

Kube-Proxy: whenever a Pod on a node changes, the result is stored in the API Server, which then emits a notification event. Kube-Proxy watches these events from the API Server, and as soon as the Pod information behind a Service changes (IP, port, etc.), it converts the change into IPVS or iptables rules on each node. Creating a Service likewise depends on Kube-Proxy to generate the corresponding IPVS or iptables rules on every node.

Namespace: the K8s namespace is used to separate different groups of Pods within the cluster. It provides an administrative boundary, not a real network boundary: deleting a namespace deletes all the Pods inside it, but without network policy restrictions a Pod can still reach Pods in other namespaces.
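
A namespace is itself just an API object; the hypothetical sketch below creates one and places a Pod inside it, illustrating the administrative (not network) boundary. The names are assumptions.

```yaml
# Creating a namespace and putting a Pod into it. Deleting the namespace later
# would delete this Pod as well. Names are illustrative assumptions.
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: dev        # administrative boundary only, not a network boundary
spec:
  containers:
  - name: app
    image: nginx:1.25
```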

3.4 What is a Pod?

The Pod is the smallest logical unit of operation and scheduling in K8s. The scheduler does not schedule containers directly; what it schedules are Pods. A Pod is the abstraction K8s wraps around containers — think of it as a shell whose purpose is to hold containers. Usually a Pod holds a single container, but it can hold several. The containers in a Pod share the network (Net), UTS and IPC namespaces, and on top of those they can also share storage volumes. This makes it possible to build finer-grained architectures for communication between containers.

K8s simplifies the management of Pods and other resources by attaching metadata to them. Labels are one kind of metadata, taking the form key=value; through a label selector, K8s can easily identify and group Pods when managing them.
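
The sketch below ties these ideas together: a Pod with two containers that share the Pod's namespaces and a common storage volume, plus a label that a label selector can later match. All names are illustrative assumptions.

```yaml
# Two containers in one Pod: they share the Pod's Net/UTS/IPC namespaces
# (so they can reach each other over localhost) and a common emptyDir volume.
# The label app=web is what a label selector would match. Names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
  labels:
    app: web
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.25
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-refresher
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 10; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```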

Types of Pod:

Autonomous Pod: a self-managed Pod. After it is created it is submitted to the API Server, which accepts it and, with the help of the scheduler, dispatches it to a node, where the node starts it. If a container inside the Pod fails, the kubelet restarts that container; but if the node itself fails, the Pod simply disappears — it is not rescheduled elsewhere.

Controller-managed Pod: the introduction of Pod controllers is what gives Pods a managed lifecycle in the design of a K8s cluster; the scheduler places the Pod on a node in the cluster, and the controller keeps it running there.

3.5 Pod controllers

ReplicationController: manages Pods automatically. When there are fewer or more Pods than the specified number, it creates or deletes Pods according to policy until the desired count is reached. It can also perform rolling updates, moving the container image running inside the Pods from one version to another, and roll the image version back if necessary. Early versions of K8s supported only this one Pod controller.

ReplicaSet: added in newer versions of K8s. It is usually not used directly; instead it is driven by Deployment, a controller that updates it declaratively.

Deployment: manages stateless applications only. It also works with a secondary controller, the HPA (Horizontal Pod Autoscaler), which monitors whether the existing Pods are sufficient and creates new Pods when they are not (a minimal sketch follows this list).

StatefulSet: manages the Pods of stateful applications.

DaemonSet: used when we want to run exactly one copy on each (or each selected) node in the cluster, rather than a number of copies placed on arbitrary nodes.

Job: manages Pods that run a task to completion and then exit, such as executing a script; the run time is not fixed.

CronJob: manages Pods for tasks that need to run periodically.
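
As referenced above, a minimal Deployment sketch for a stateless application might look like this; the name, image and replica count are assumptions. The Deployment keeps three replicas of the Pod template running, recreates failed Pods, and rolls out image changes gradually.

```yaml
# A Deployment managing a stateless application through a ReplicaSet:
# it keeps 3 replicas of the labelled Pod template running and performs
# rolling updates when the image changes. Names and values are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp          # the label selector tying the Deployment to its Pods
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

An HPA object could additionally be pointed at such a Deployment to adjust the replica count automatically based on load.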

IV. The network model of Kubernetes

1. Kubernetes requires three kinds of networks in the whole cluster:

Every Pod runs in the Pod network. A Pod's network is configured in the network namespace inside the Pod, and its address plays the same role for the Pod that a host IP address plays for a host. This is called the Pod network.

Every Service lives in a second network. It is virtual: the address cannot be pinged and exists only in IPVS or iptables rules. This is called the cluster network.

Every node has its own address on a third network, called the node network.

Traffic coming from outside the cluster must first reach the node network, then pass through the cluster network, and finally arrive at the Pod network.

2. Three types of communication

Local communication: containers within the same Pod communicate with each other over the loopback (lo) interface.

Pod-to-Pod communication: Pods on different Docker hosts could communicate through iptables NAT rules, but the multiple layers of forwarding hurt efficiency, while physical bridging can produce broadcast storms. K8s therefore uses an overlay network, tunnelling layer-2 frames between hosts so that, even across hosts, the Pods appear to be on the same layer-2 network.

Pod-to-Service communication: a Service address exists only in the iptables or IPVS rules on the host, so a container only needs its gateway to point to the docker0 bridge address. Changes to Services, and newly created Services, are handled by the Kube-Proxy component on each node.

3. Network implementation of kubernetes.

K8s does not itself implement the three networks or the network policy function inside the cluster; it relies on third-party plug-ins through the CNI (Container Network Interface) plug-in architecture. CNI expects a network solution to provide network functions (address assignment and connectivity) and network policy functions (for security): isolating or restricting access between Pods in different namespaces, or between Pods in the same namespace. Any network provider that implements the CNI specification can serve as the network solution for K8s. These network solutions can be hosted on the cluster as add-ons (run as Pods) — special Pods that share the node's network namespace, which makes it possible to run system management commands from within a container. Many CNI plug-ins are available; the common ones are listed below (a minimal NetworkPolicy sketch follows the list):

Flannel: supports network configuration only, with no network policy.

Calico: supports both network configuration and network policy, and uses the BGP protocol for direct routed communication, but it is comparatively difficult to deploy and use.

Canal: a combination of the two plug-ins above (Flannel for networking, Calico for policy).
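
As mentioned above, network policy is what enables Pod-level isolation. A minimal hypothetical sketch that only lets Pods labelled app=frontend reach Pods labelled app=backend in the same namespace could look like this; the labels and port are assumptions, and it only takes effect if the CNI plug-in supports network policy (e.g. Calico).

```yaml
# A NetworkPolicy restricting ingress to backend Pods: only Pods labelled
# app=frontend in the same namespace may connect, and only on TCP port 8080.
# Labels and ports are illustrative assumptions; enforcement requires a
# policy-capable CNI plug-in such as Calico.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend          # the Pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only these Pods may connect
    ports:
    - protocol: TCP
      port: 8080
```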
