2025-04-08 Update From: SLTechnology News&Howtos
Author | Yi Li, Senior Technical Expert at Alibaba Cloud
containerd is an industry-standard open source container runtime that emphasizes simplicity, robustness, and portability, and supports both Linux and Windows.
On December 14, 2016, Docker announced that it would donate containerd, the core runtime component of Docker Engine, to a new open source community to be developed and operated independently. Alibaba Cloud, AWS, Google, IBM, and Microsoft joined as founding members of the containerd community.
In March 2017, Docker donated containerd to the CNCF (Cloud Native Computing Foundation), where it has seen rapid development and broad adoption.
Docker Engine already uses containerd as the basis for container lifecycle management, and in May 2018 Kubernetes officially adopted containerd as a supported container runtime.
In February 2019, CNCF announced that containerd had graduated, making it a production-ready project.
Since version 1.1, containerd has shipped with built-in Container Runtime Interface (CRI) support, further simplifying its use with Kubernetes. The architecture is as follows:
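With the built-in CRI plugin, the kubelet talks to containerd directly over its gRPC socket; no separate CRI shim process is needed. As an illustrative configuration sketch (the flag names match Kubernetes 1.16-era kubelets, and the socket path is containerd's default; adjust both for your environment):

```shell
# Point the kubelet at containerd's CRI endpoint (illustrative flags,
# as used by ~Kubernetes 1.16; the socket path is containerd's default).
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```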
(Figure: containerd CRI architecture)
In Kubernetes scenarios, containerd consumes fewer resources and starts up faster than the full Docker Engine.
Photo Source: containerd
CRI-O, a container runtime management project led by Red Hat, competes with containerd. Compared with CRI-O, containerd has advantages in performance and community support.
Photo Source: ebay sharing
More importantly, containerd provides a flexible extension mechanism that supports a variety of OCI (Open Container Initiative)-compliant container runtime implementations, such as runc containers (the containers Docker uses) and secure sandboxed containers like Kata Containers, gVisor, and Firecracker.
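As a sketch of this extension mechanism: in containerd's CRI plugin configuration (`/etc/containerd/config.toml`), additional OCI runtimes can be registered as named handlers. The handler name `gvisor` and the runsc shim below are illustrative; the exact table layout varies by containerd version (this one follows the 1.2.x-era CRI plugin):

```toml
# /etc/containerd/config.toml (fragment) -- registers OCI runtimes as
# named handlers with the CRI plugin; names here are examples.
[plugins.cri.containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v1"

# a "gvisor" handler backed by gVisor's runsc shim
[plugins.cri.containerd.runtimes.gvisor]
  runtime_type = "io.containerd.runsc.v1"
```

A Pod then selects one of these handlers through its RuntimeClass, as shown later in this article.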
In a Kubernetes environment, different APIs and command-line tools can be used to manage concepts such as containers, Pods, and images. For ease of understanding, the following figure illustrates how container lifecycle management works at different levels of API and CLI.
kubectl: the cluster-level command-line tool, which works with core Kubernetes concepts
crictl: a command-line tool for the CRI endpoint on a node
ctr: a command-line tool for containerd itself
Experience
Minikube is the easiest way to try containerd as the Kubernetes container runtime. Below we use it to run Kubernetes on containerd with two different runtime implementations, runc and gVisor.
In the past, network restrictions prevented many users from experimenting with the official Minikube directly. Minikube 1.5 provides a complete configuration mechanism that lets you pull the required images from Alibaba Cloud mirror addresses, and it supports different container runtimes such as Docker and containerd. Let's create a Minikube virtual machine environment. Note that we must specify the --container-runtime=containerd parameter to select containerd as the container runtime, and that registry-mirror should be replaced with your own Alibaba Cloud image-accelerator address.
```shell
$ minikube start --image-mirror-country cn \
    --iso-url=https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.5.0.iso \
    --registry-mirror=https://XXX.mirror.aliyuncs.com \
    --container-runtime=containerd
minikube v1.5.0 on Darwin 10.14.6
Automatically selected the 'hyperkit' driver (alternates: [virtualbox])
None of the known repositories in your location are accessible. Using registry.cn-hangzhou.aliyuncs.com/google_containers as fallback.
Creating hyperkit VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
VM is unable to connect to the selected image repository: command failed: curl -sS https://k8s.gcr.io/
stderr: curl: (7) Failed to connect to k8s.gcr.io port 443: Connection timed out
Process exited with status 7
Preparing Kubernetes v1.16.2 on containerd 1.2.8 ...
Pulling images ...
Launching Kubernetes ...
Waiting for: apiserver etcd scheduler controller
Done! kubectl is now configured to use "minikube"

$ minikube dashboard
Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...
Opening http://127.0.0.1:54438/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
```

Deploy a test application
We deploy an nginx application as a Pod:
```shell
$ cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx

$ kubectl apply -f nginx.yaml
pod/nginx created

$ kubectl exec nginx -- uname -a
Linux nginx 4.19.76 #1 SMP Fri Oct 25 16:07:41 PDT 2019 x86_64 GNU/Linux
```
Next, we enable Minikube's gVisor support:
```shell
$ minikube addons enable gvisor
gvisor was successfully enabled

$ kubectl get pod,runtimeclass gvisor -n kube-system
NAME         READY   STATUS    RESTARTS   AGE
pod/gvisor   1/1     Running   0          60m

NAME                              CREATED AT
runtimeclass.node.k8s.io/gvisor   2019-10-27T01:40:45Z

$ kubectl get runtimeClass
NAME     CREATED AT
gvisor   2019-10-27T01:40:45Z
```
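For reference, the RuntimeClass object that the addon registers can be sketched as follows (the node.k8s.io/v1beta1 API version matches the Kubernetes 1.16 era shown above; the handler name must correspond to a runtime handler configured in containerd):

```yaml
# Sketch of a RuntimeClass mapping the name "gvisor" to a containerd
# runtime handler; in this walkthrough it is created by the minikube addon.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: gvisor   # must match a runtime handler in containerd's config
```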
Once the gvisor pod reaches the Running state, we can deploy a test application that uses gVisor.
We can see that a RuntimeClass named gvisor is now registered in the Kubernetes cluster. Developers can select a container runtime implementation via runtimeClassName in a Pod spec. For example, let's create an nginx application that runs in a gVisor sandboxed container.
```shell
$ cat nginx-untrusted.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-untrusted
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx

$ kubectl apply -f nginx-untrusted.yaml
pod/nginx-untrusted created

$ kubectl exec nginx-untrusted -- uname -a
Linux nginx-untrusted 4.4 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 GNU/Linux
```
We can clearly see the difference: because a runc-based container shares the operating system kernel with the host, the kernel version reported inside the runc container is the same as the Minikube host's, while the gVisor (runsc) container uses an independent kernel whose version differs from the host's.
Precisely because each sandboxed container has an independent kernel, the attack surface is reduced and security isolation is stronger, making sandboxed containers well suited to isolating untrusted applications and to multi-tenant scenarios. Note: in Minikube, gVisor intercepts system calls via ptrace, which carries a significant performance cost, and gVisor's compatibility still needs improvement.
Using the ctr and crictl tools
We can now enter the Minikube virtual machine:
```shell
$ minikube ssh
```
containerd supports isolating container resources through namespaces. To view the existing containerd namespaces:
```shell
$ sudo ctr namespaces ls
NAME     LABELS
k8s.io

# list all container images
$ sudo ctr --namespace=k8s.io images ls
...

# list all containers
$ sudo ctr --namespace=k8s.io containers ls
```
In a Kubernetes environment, a simpler approach is to use crictl to operate on Pods.
```shell
# list pods
$ sudo crictl pods
POD ID          CREATED       STATE   NAME              NAMESPACE   ATTEMPT
78bd560a70327   3 hours ago   Ready   nginx-untrusted   default     0
94817393744fd   3 hours ago   Ready   nginx             default     0
...

# show details of pods whose names contain "nginx"
$ sudo crictl pods --name nginx -v
ID: 78bd560a70327f14077c441aa40da7e7ad52835100795a0fa9e5668f41760288
Name: nginx-untrusted
UID: dda218b1-d72e-4028-909d-55674fd99ea0
Namespace: default
Status: Ready
Created: 2019-10-27 02:40:02.660884453 +0000 UTC
Labels:
	io.kubernetes.pod.name -> nginx-untrusted
	io.kubernetes.pod.namespace -> default
	io.kubernetes.pod.uid -> dda218b1-d72e-4028-909d-55674fd99ea0
Annotations:
	kubectl.kubernetes.io/last-applied-configuration -> {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"nginx-untrusted","namespace":"default"},"spec":{"containers":[{"image":"nginx","name":"nginx"}],"runtimeClassName":"gvisor"}}
	kubernetes.io/config.seen -> 2019-10-27T02:40:00.675588392Z
	kubernetes.io/config.source -> api
ID: 94817393744fd18b72212a00132a61c6cc08e031afe7b5295edafd3518032f9f
Name: nginx
UID: bfcf51de-c921-4a9a-a60a-09faab1906c4
Namespace: default
Status: Ready
Created: 2019-10-27 02:38:19.724289298 +0000 UTC
Labels:
	io.kubernetes.pod.name -> nginx
	io.kubernetes.pod.namespace -> default
	io.kubernetes.pod.uid -> bfcf51de-c921-4a9a-a60a-09faab1906c4
Annotations:
	kubectl.kubernetes.io/last-applied-configuration -> {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx","name":"nginx"}]}}
	kubernetes.io/config.seen -> 2019-10-27T02:38:18.206096389Z
	kubernetes.io/config.source -> api
```

The relationship between containerd and Docker
Many readers wonder about the relationship between containerd and Docker, and whether containerd can replace Docker.
containerd has become a mainstream container runtime implementation and enjoys strong support from both the Docker and Kubernetes communities. Docker Engine's own low-level container lifecycle management is built on containerd.
However, Docker Engine includes a richer developer toolchain, such as image builds, as well as Docker's own logging, storage, networking, and Swarm orchestration capabilities. In addition, the vast majority of container ecosystem vendors, in areas such as security, monitoring, and development tooling, have mature support for Docker Engine, while support for containerd is still gradually catching up.
So in a Kubernetes runtime environment, users who care more about security, efficiency, and customization can choose containerd as the container runtime; for most developers, continuing to use Docker Engine as the container runtime is also a good choice.
Alibaba Cloud Container Service supports containerd
In Alibaba Cloud Container Service for Kubernetes (ACK), we have adopted containerd for container runtime management, supporting the mixed deployment of secure sandboxed containers and runc containers. In existing products, together with the Alibaba Cloud operating system team and Ant Financial, we support runV sandboxed containers based on lightweight virtualization. In Q4, together with the operating system and security teams, we will also release a trusted encrypted sandboxed container based on Intel SGX.
For specific product information, please refer to this document.
In Serverless Kubernetes (ASK), we also use containerd's flexible plugin mechanism to customize and trim the container runtime implementation for the nodeless environment.