Introduction and use of CRD in Kubernetes

2025-04-01 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/03 Report --

I. Sources of Demand

First, let's look at where the need for the API programming paradigm comes from.

In Kubernetes, the API programming paradigm is embodied by the Custom Resource Definition (CRD). When we talk about CRD, we actually mean user-defined resources.

Why is there a need for user-defined resources?

As Kubernetes is used more and more widely, users have more and more requirements that call for resources of their own. The ability to aggregate various sub-resources that Kubernetes provides out of the box can no longer meet this growing demand: users want a single user-defined resource that aggregates all of its sub-resources. Since extending Kubernetes native resources directly is complex, the user-defined resource feature was born.

II. Interpreting CRD by Example

Let's start with a specific introduction to what CRD is.

The CRD feature was introduced in Kubernetes 1.7; users can add custom Kubernetes object resources according to their own needs. It is worth noting that these user-added object resources are first-class citizens, on the same footing as the native Pod and Deployment resources: in the eyes of Kubernetes's API Server, they are all resources stored in etcd.

At the same time, like native built-in resources, custom resources can be created and viewed using kubectl, and they also enjoy RBAC and security features. Users can develop custom controllers to perceive or manipulate changes in custom resources.

Let's look at a simple example of CRD: the definition of a CRD, walked through field by field below.

First, apiVersion at the top declares the API version of the CRD object itself, that is, the Schema under which this CRD definition is submitted.

kind is CustomResourceDefinition, which identifies this object as a CRD. metadata.name is the name of the user-defined resource; generally we recommend the format "<plural name>.<API group>", for example foos.samplecontroller.k8s.io here.

spec is used to specify the group and version of the CRD. For example, when creating a Pod or Deployment, its apiVersion may be apps/v1 or apps/v1beta1, and so on; here we likewise need to define the group of the CRD.

The group in this example is samplecontroller.k8s.io, and the version is v1alpha1.

names defines the resource's kind: just as a Deployment's kind is Deployment and a Pod's kind is Pod, the kind here is defined as Foo.

The plural field is the plural form of the resource name. When a resource name is long, you can also define short names to abbreviate it and simplify typing.

The scope field indicates whether the custom resource is namespaced. ClusterRoleBinding, for example, is cluster-scoped, while Pod and Deployment can be created in different namespaces, so their scope is Namespaced. The CRD here is Namespaced.
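Putting the fields above together, the described definition matches the well-known Kubernetes sample-controller example. A sketch in the v1beta1 CRD format of that era (the original figure is not reproduced here; current clusters use apiextensions.k8s.io/v1, where version is expressed as a versions list):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # must match the fields below: <plural>.<group>
  name: foos.samplecontroller.k8s.io
spec:
  group: samplecontroller.k8s.io
  version: v1alpha1
  names:
    kind: Foo
    plural: foos
  scope: Namespaced        # each Foo lives inside a namespace
```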

Next, let's look at an example object of the CRD defined above.

Its apiVersion is the samplecontroller.k8s.io/v1alpha1 that we just defined.

Kind is Foo.

Metadata's name is the name of our example.

In this example, the spec field is not constrained by the Schema of the CRD, so we can write whatever we need into spec in key: value format, such as deploymentName: example-foo and replicas: 1. Of course, we can also add validation, or a status subresource, to define what spec may contain.
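Assembled from the values above, the example object would look roughly like this (a sketch):

```yaml
apiVersion: samplecontroller.k8s.io/v1alpha1
kind: Foo
metadata:
  name: example-foo
spec:
  # free-form key: value pairs, since the CRD above declares no schema for spec
  deploymentName: example-foo
  replicas: 1
```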

CRD with validation

Let's look at a CRD definition that contains validation:

You can see that this definition is more complex, so we will skip the fields already covered and look only at the validation section.

First of all, validation is defined as an openAPIV3Schema, whose spec describes the allowed fields. Take replicas as an example: replicas is defined as an integer with a minimum value of 1 and a maximum value of 10. So when we later use this CRD, if we give a replicas that is not an int, or write a value of -1, or a value greater than 10, the object will not be accepted: the API Server will directly report an error, telling you that the defined constraints are not met.
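Based on this description, the validation stanza inside the CRD's spec would look roughly like the following sketch (v1beta1 style; in apiextensions.k8s.io/v1 the schema moves under each entry of versions):

```yaml
spec:
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            replicas:
              type: integer   # must be an int ...
              minimum: 1      # ... at least 1 ...
              maximum: 10     # ... and at most 10
```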

CRD with status field

Take another look at the CRD definition with a status field.

When we use a Deployment or Pod, after the deployment is complete, we may want to check the current state of the deployment, whether it has been updated, and so on. All of this is achieved by adding a status field. Note that Kubernetes did not support a status field for CRDs before version 1.12.

status is actually a subresource of the custom resource, and its advantage is that updates to this field do not trigger a redeployment. We know that for Deployment and Pod, modifying the spec causes new Pods to be rolled out. Updating the status subresource does not: it is simply used to reflect the current state of the object. The status declaration in this CRD is very simple, a single key: value entry whose "{}" value means its content is user-defined.
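In the v1beta1 CRD format, the declaration described above is just an empty map under subresources (a sketch, placed inside the CRD's spec):

```yaml
spec:
  subresources:
    # enables the /status subresource; its content is user-defined,
    # and updates to /status do not trigger a rollout of the object
    status: {}
```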

Take the status field of a Deployment as an example, which contains information such as availableReplicas, current status (such as which version was updated, when was the last version), and so on. When a user customizes a CRD, you can also do some complex operations to tell other users what its current state is.

III. Operation Demonstration

Let's demonstrate CRD in detail.

We have two resources here: crd.yaml and example-foo.yaml.

First create the Schema of the CRD so that our Kubernetes API Server knows what the CRD really looks like. The way to create it is very simple: "kubectl create -f crd.yaml".

Through "kubectl get crd", you can see that the CRD has been created successfully.

At this point, we can create the corresponding resource: "kubectl create -f example-foo.yaml".

Let's take a look at what is in it: "kubectl get foo example-foo -o yaml".

You can see that it is a Foo resource; its spec is what we just defined, and its metadata section looks like that of almost any Kubernetes resource. Creating this resource is therefore not much different from creating an ordinary Pod, except that this resource is not a Pod, nor any built-in Kubernetes resource: it is a resource we created ourselves. In usage and experience, it is almost identical to Kubernetes's built-in resources.

IV. Architectural Design: Controller Overview

Defining a CRD by itself does not really do anything: the API Server simply records it in etcd. Performing complex operations based on the resources and Schema defined by a CRD is the job of the Controller. A controller is a pluggable mechanism provided by Kubernetes for extending or controlling declarative Kubernetes resources. It is the brain of Kubernetes and is responsible for controlling the operation of most resources; Deployment, for example, is managed by kube-controller-manager.

For example, if you declare a Deployment with replicas: 2, then kube-controller-manager, which is watching etcd, will create the two corresponding Pods when it receives the request, and it will observe the state of these Pods in real time. If a Pod changes, is rolled back, fails, or restarts, it will take the corresponding action.

So Controller is the brain that controls the final state of the entire Kubernetes resource.

After the user finishes declaring the CRD, a controller needs to be created to achieve the corresponding goal. For the Foo above, which asks for a Deployment with replicas: 1, we need to write a controller that creates the corresponding Deployment in order to really implement the function of the CRD.

Overview of Controller Workflow

Take kube-controller-manager as an example.

In this architecture, on one side is an Informer, which watches kube-apiserver, while kube-apiserver in turn watches the creation, update, and deletion of all resources in etcd. The Informer has two main methods: one is ListFunc and the other is WatchFunc.

ListFunc is an operation like "kubectl get pods": it lists all the current resources.

WatchFunc establishes a long-lived connection with the apiserver; once a new object is submitted, the apiserver pushes it to the Informer, telling it that a new object has been created, updated, and so on.

After the Informer receives an event for an object, it calls the corresponding callback (such as AddFunc, UpdateFunc, or DeleteFunc) and puts the object into a queue in the form of a key. The naming rule for the key is "namespace/name", where name is the name of the resource. For example, a resource of type Foo created in the default namespace gets the key "default/example-foo". When the Controller takes an object key from the queue, it performs the corresponding operation.

The workflow of the controller is as follows.

First, kube-apiserver pushes events such as Added, Updated, and Deleted; these enter the Controller's ListAndWatch() loop. ListAndWatch maintains a first-in, first-out queue, from which items are Pop()ed during operation and routed to the corresponding Handler; the Handler hands them to the matching functions (such as Add(), Update(), Delete()).

A handler usually runs more than one Worker. Having multiple Workers means that if, say, several objects arrive at the same time, the Controller may launch five or ten Workers to execute in parallel, each Worker handling a different object instance.

After the work is done, that is, after the corresponding object has been processed, the key is dropped from the queue, which means processing is complete. If any problem occurs along the way, the controller reports an error, emits an event, and puts the key back into the queue, so that the next Worker can pick it up and retry the same processing.
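The queue-and-worker flow described above can be sketched in plain Python. This is an illustrative model, not client-go: names such as Controller, on_event, and sync are hypothetical, and the real informer machinery (caches, resync, rate limiting) is omitted.

```python
import queue
import threading


def make_key(namespace: str, name: str) -> str:
    """Keys follow the "namespace/name" rule described above."""
    return f"{namespace}/{name}"


class Controller:
    def __init__(self, num_workers: int = 2):
        # FIFO work queue holding object keys, not whole objects.
        self.work_queue: "queue.Queue[str]" = queue.Queue()
        self.num_workers = num_workers
        self.processed: list[str] = []

    # Informer callbacks (AddFunc / UpdateFunc / DeleteFunc) all reduce to
    # "enqueue the object's key"; the worker later re-reads the latest state.
    def on_event(self, namespace: str, name: str) -> None:
        self.work_queue.put(make_key(namespace, name))

    def sync(self, key: str) -> None:
        # A real controller reconciles desired vs. actual state here,
        # e.g. creating a Deployment for a Foo object.
        self.processed.append(key)

    def worker(self) -> None:
        while True:
            key = self.work_queue.get()
            if key is None:                    # shutdown signal
                self.work_queue.task_done()
                return
            try:
                self.sync(key)
            except Exception:
                self.work_queue.put(key)       # re-queue on failure and retry
            finally:
                self.work_queue.task_done()

    def run(self) -> None:
        # Several workers drain the queue in parallel, each taking
        # a different key, mirroring the multi-Worker description above.
        threads = [threading.Thread(target=self.worker)
                   for _ in range(self.num_workers)]
        for t in threads:
            t.start()
        self.work_queue.join()                 # wait until all keys are handled
        for _ in threads:
            self.work_queue.put(None)          # stop the workers
        for t in threads:
            t.join()


if __name__ == "__main__":
    c = Controller()
    c.on_event("default", "example-foo")
    c.on_event("kube-system", "another-foo")
    c.run()
    print(sorted(c.processed))
```

The key detail the sketch preserves is that the queue carries only "namespace/name" keys, so duplicate events for the same object are cheap and a failed key can simply be re-queued.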
