How to understand the core concepts and components of Kubernetes


This article explains how to understand the core concepts and components of Kubernetes. The material is simple, quick and practical, so interested readers are encouraged to follow along.

Kubernetes is an open source container orchestration engine for automating the deployment, scaling, and management of containerized applications. However, not every project needs microservices, and not every project needs Kubernetes: admin back-ends, scheduled-task services, non-distributed databases and the like do not require containerized deployment. Kubernetes is better suited to deploying distributed microservice applications.

Kubernetes architecture

The Kubernetes system adopts a client/server (C/S) architecture. The system is divided into two parts, Master and Node: the Master is the server (master node) and the Node is the client (worker node).

The master node acts as the brain of the cluster. It is responsible for managing all nodes, scheduling which node each Pod runs on, and controlling all state during cluster operation; a node here is typically a (cloud) virtual server.

Worker nodes manage containers and monitor and report the health status of all Pods running on them.

The components running on the master node are kube-apiserver, kube-controller-manager and kube-scheduler.

kube-apiserver is responsible for exposing Kubernetes "resource groups/resource versions/resources" in a RESTful style and serving them. All components in the cluster manipulate resource objects through the kube-apiserver component, which is also the only core component in the cluster that interacts with the etcd cluster.

kube-controller-manager manages nodes, Pod replicas, Services, Endpoints, namespaces, service accounts and so on in a Kubernetes cluster, and is responsible for making the actual state of the system converge to the desired state. By default it ships a set of controllers, such as the Deployment controller, the StatefulSet controller, the Namespace controller and the PersistentVolume controller. Each controller monitors the current state of its resource objects across the whole cluster in real time through the interface provided by the kube-apiserver component; when a failure occurs and the system state drifts, the controller tries to repair the system back to the desired state.
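To make "desired state" concrete, here is a minimal Deployment sketch (the name echo-server and the image are assumptions for illustration): the Deployment controller keeps the actual number of running Pods converged to replicas: 3, so if a Pod is deleted or its node fails, a replacement is created automatically.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server              # hypothetical name
spec:
  replicas: 3                    # desired state: three Pod replicas
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
      - name: echo-server
        image: echo-server:latest   # hypothetical image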

The kube-scheduler component is responsible for finding a suitable node in the Kubernetes cluster for each Pod resource object and running the Pod on that node. The scheduler schedules only one Pod resource object at a time, and finding a suitable node for a Pod constitutes one scheduling cycle. The scheduler watches the Pod and Node resource objects of the whole cluster, and when a new, unscheduled Pod resource object is observed it selects the optimal node for it through its scheduling algorithms.

Components running on Node worker nodes include kubelet, kube-proxy, and the container runtime.

kubelet is responsible for receiving, processing and reporting on tasks sent by the kube-apiserver component. When the kubelet process starts, it registers its node with the kube-apiserver. It is mainly responsible for creating, modifying, monitoring, deleting and evicting the Pod resource objects on its node and managing their lifecycle. The kubelet component implements three open interfaces: CRI (Container Runtime Interface), CNI (Container Network Interface) and CSI (Container Storage Interface).

kube-proxy acts as a network proxy and runs on every Kubernetes node. It watches Service and Endpoint resource changes through the kube-apiserver and configures load balancing via iptables/ipvs, etc., providing unified TCP/UDP traffic forwarding and load balancing for a group of Pods; it only forwards requests to Kubernetes Services and their backend Pods.

Resource concepts

In Kubernetes, resources are the core concept, and the entire ecosystem operates around resources. Kubernetes is essentially a resource control system responsible for registering, managing, scheduling, and maintaining the state of resources.

Kubernetes groups and versions resources:

Group: Resource Group

Version: Resource Version

Resource: Resources

Kind: Resource type (category)

Resource objects and resource manipulation methods:

Resource Object: A resource object contains fields such as resource group, resource version, and resource type.

Resource operation methods (Verbs): Each resource has operation methods that implement CRUD operations against etcd. Kubernetes supports 8 resource operation methods: create, delete, deletecollection, get, list, patch, update, watch.
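As a rough illustration of how everyday kubectl commands map onto these verbs (mypod is a placeholder name; the exact verbs used can vary with flags and kubectl version):

kubectl get pod mypod        # get
kubectl get pods             # list
kubectl get pods --watch     # list + watch
kubectl delete pod mypod     # delete
kubectl patch pod mypod -p '{"metadata":{"labels":{"env":"test"}}}'   # patch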

Kubernetes supports two types of resource groups: resource groups with group names and resource groups without group names:

Resource groups with group names: in the form <group>/<version>/<resource>, for example apps/v1/deployments;

Resource groups without group names: the core resource group, expressed as <version>/<resource>, for example v1/pods.

The RESTful API provided by Kubernetes uses GVR (resource group/resource version/resource) to generate paths, as shown in the following table:

PATH                  Resource     Resource operation methods (Verbs)
/api/v1/configmaps    ConfigMap    create, delete, deletecollection, get, list, patch, update, watch
/api/v1/pods          Pod          create, delete, deletecollection, get, list, patch, update, watch
/api/v1/services      Service      create, delete, deletecollection, get, list, patch, update, watch

Resource groups with group names have paths prefixed with /apis, and the core resource group (without a group name) has paths prefixed with /api. Taking /api/v1/configmaps as an example, v1 is the resource version number and configmaps is the resource name.

Resources can also have subresources; for example, Pods have the log subresource. kubectl logs [pod] is used to query a Pod's logs, and the corresponding API path is /api/v1/namespaces/{namespace}/pods/{name}/log.

Kubernetes supports 8 resource operation methods, but not every resource needs to support all 8. For example, the pods/log subresource only supports the get operation, because logs only need to be viewed.
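Assuming a working kubectl context, these GVR paths can be exercised directly against the API server; <pod-name> below is a placeholder:

kubectl get --raw /api/v1/namespaces/default/pods                   # core group, list Pods
kubectl get --raw /apis/apps/v1/namespaces/default/deployments      # named group "apps"
kubectl get --raw /api/v1/namespaces/default/pods/<pod-name>/log    # pods/log subresource, get only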

The Kubernetes system supports namespaces: each namespace is equivalent to a "virtual cluster", and different namespaces are isolated from each other. Namespaces are often used to separate environments, such as production, test and development, and can also be used to separate unrelated projects, such as project A and project B.
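A namespace is itself an ordinary resource object; a minimal sketch looks like the following (the name project-a-test is an assumption for illustration), and namespaced resources then refer to it through their metadata.namespace field:

apiVersion: v1
kind: Namespace
metadata:
  name: project-a-test   # hypothetical namespace, e.g. the test environment of project A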

Resource object description file definition

Kubernetes resources can be divided into built-in resources and custom resources, both of which are defined through resource object description files. A resource object is described by five parts: Group/Version, Kind, MetaData, Spec, and Status.

Taking the Service resource description file as an example, the configuration is as follows:

apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  ....

apiVersion: i.e. Group/Version; Service belongs to the core resource group, so there is no resource group name, and v1 is the resource version;

Kind: resource type;

MetaData: defines metadata information such as resource names and namespaces;

Spec: describes the desired state of the Service;

Status: describes the actual state of the resource object; it is not configured by the user but provided and updated by the Kubernetes system.
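For illustration, here is one possible way the elided spec above could be filled in; the selector label app: test-server and the port numbers are assumptions, not values from the original file:

apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  selector:
    app: test-server      # forwards to Pods carrying this label (assumed label)
  ports:
  - name: http
    protocol: TCP
    port: 80              # port exposed by the Service
    targetPort: 8080      # container port the traffic is forwarded to

Applying such a file with kubectl apply -f creates the object; the Status part is then maintained by Kubernetes itself, as described above.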

Pod scheduling

Pod resource objects support a priority and preemption mechanism. When kube-scheduler runs, it schedules according to Pod priority: high-priority Pod resource objects are placed at the front of the scheduling queue and are assigned suitable nodes first, after which nodes are selected for the low-priority Pod resource objects.

When a high-priority Pod resource object cannot find a suitable node, the scheduler tries to preempt the node of a low-priority Pod resource object: the low-priority Pod is evicted from its node so that the high-priority Pod can run there. The evicted low-priority Pod re-enters the scheduling queue and waits for a suitable node to be selected again.

By default, if no priority is configured, existing Pod resource objects have priority 0. The steps to configure a priority for a Pod resource are as follows:

1. Create a PriorityClass resource object through the PriorityClass resource object description file. The configuration file is as follows:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: main-resource-high-priority
value: 10000
globalDefault: false
description: "highest priority"

value: indicates the priority; the higher the value, the higher the priority;

globalDefault: whether this is the global default; if true, this priority is used when a Pod does not specify one.

2. Modify the Pod resource object description file to assign the priority to the Pod.

When the Pod resource is configured through a Deployment, you only need to add a priorityClassName entry to the Pod template's spec in the Deployment description file, as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-server
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-server    # label selector linking the Deployment to its Pods
  # configure pod
  template:
    metadata:
      labels:
        app: test-server
    spec:
      containers:
      - name: test-server-pod
        image: test-server:latest
        imagePullPolicy: IfNotPresent
        ports:
        - name: http-port
          containerPort: 8080
        envFrom:
        - configMapRef:
            name: common-config
      serviceAccountName: admin-sa
      priorityClassName: main-resource-high-priority

Affinity scheduling

Related to scheduling is affinity scheduling. By default, kube-scheduler selects a globally or locally optimal node for each Pod resource object (a node with sufficient hardware resources, a sufficiently low load, and so on). In a production environment, it is often desirable to intervene more in Pod scheduling, for example assigning Pods that do not need GPUs to nodes without GPU hardware and Pods that do need GPUs to nodes with GPU hardware. Developers only need to label the nodes accordingly, and the scheduler can then place Pod resource objects by label. This scheduling strategy is called affinity and anti-affinity scheduling.

Affinity: used to deploy related services close to each other; for example, it allows the Pod resource objects of two services (such as an advertisement-click service and an IP-lookup service) to be scheduled onto the same node as much as possible to reduce network overhead;

Anti-affinity: allows multiple replicas of a service's Pod resource object to be scheduled onto different nodes for high availability; for example, an order service that wants three replicas deploys them on three different nodes.

Pod resource objects currently support two kinds of affinity and one kind of anti-affinity, combined in the sketch shown after this list:

NodeAffinity: node affinity; schedules a Pod resource object onto particular nodes, for example scheduling a Pod that requires a GPU onto a node that has a GPU;

PodAffinity: Pod affinity; schedules a Pod resource object close to another Pod resource object, for example onto the same host, the same hardware cluster or the same machine room, in order to shorten network transmission latency;

PodAntiAffinity: Pod anti-affinity; schedules multiple replicas of a Pod resource object onto different nodes, different hardware clusters, and so on, which reduces risk and improves the availability of the Pod resource object.
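A minimal sketch combining node affinity and Pod anti-affinity, assuming GPU nodes carry a label gpu=true and the Pod is labeled app: order-service (the label choices, the Pod name and the image are assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: order-service
  labels:
    app: order-service
spec:
  affinity:
    nodeAffinity:            # node affinity: only schedule onto nodes labeled gpu=true
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values: ["true"]
    podAntiAffinity:         # anti-affinity: keep replicas of app=order-service on different nodes
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: order-service
        topologyKey: kubernetes.io/hostname
  containers:
  - name: order-service
    image: order-service:latest

In practice such an affinity block usually sits in the Pod template of a Deployment, so that every replica is subject to the same rules.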

Built-in scheduling algorithm

kube-scheduler provides two kinds of scheduling algorithms by default: pre-selection (filtering) algorithms and optimization (scoring) algorithms.

Pre-selection (filtering) algorithms: check whether a node meets the conditions for running the Pod resource object to be scheduled, and if so add it to the list of available nodes;

Optimization (scoring) algorithms: a final score is calculated for each available node, and kube-scheduler selects the node with the highest score as the best node on which to run the Pod resource object to be scheduled.

At this point, I believe everyone has a deeper understanding of the core concepts and components of Kubernetes, so go ahead and try them out in practice!
