How to use and configure Stream services in containerd

This article focuses on how to use and configure Stream services in containerd. Interested readers may wish to take a look: the method introduced here is simple, fast, and practical. Let the editor take you through how to use and configure Stream services in containerd.
What should TKE clusters do?
TKE has supported containerd as a container runtime option since as early as May 2019. If you create a new cluster, containerd is the recommended container runtime.
Existing clusters can continue to use docker as the container runtime until they upgrade to K8s 1.23 (assuming 1.23 is the first TKE-supported K8s version without dockershim; it may also turn out to be 1.24).
When an existing cluster is upgraded to 1.23 through the TKE cluster upgrade feature, TKE provides the option to switch the runtime to containerd. In this case there is no way to guarantee that Pods are unaffected, so the upgrade can only be done by reinstalling nodes.
Existing clusters can also switch their runtime to containerd; new nodes will use containerd while existing nodes keep using docker. (Note: this results in docker nodes and containerd nodes coexisting in the same cluster. If any services use Docker in Docker, or otherwise depend on the node's docker daemon and docker.sock, take measures in advance to avoid problems: for example, use node labels and scheduling to ensure such services land on docker nodes, as sketched after this list, or run Docker in Docker on the containerd cluster as described earlier.)
Of course, docker may also implement CRI internally or add a standalone dockershim process in the future; if docker adapts accordingly, TKE will support it as well.
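For the mixed-runtime case above, a minimal scheduling sketch (the label key/value and the workload name are our own hypothetical conventions, not TKE defaults):

    # Label the nodes that still run docker:
    kubectl label node 10.0.0.11 runtime=docker

    # Pin a workload that depends on docker.sock to those nodes via nodeSelector:
    kubectl patch deployment dind-builder \
      -p '{"spec":{"template":{"spec":{"nodeSelector":{"runtime":"docker"}}}}}'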
Interpretation of K8s abandoning dockershim
Docker support in the kubelet is now deprecated and will be removed in a future release. The kubelet uses a module called "dockershim" which implements CRI support for Docker and it has seen maintenance issues in the Kubernetes community. We encourage you to evaluate moving to a container runtime that is a full-fledged implementation of CRI (v1alpha1 or v1 compliant) as they become available. (#94624, @dims) [SIG Node]
The K8s 1.20 changelog states that K8s will phase out support for Docker starting with version 1.20. The official K8s blog also covers the announcement and an FAQ:
Don't Panic: Kubernetes and Docker
Dockershim FAQ
The blog mentions that K8s 1.20 adds a deprecation notice for docker, and that dockershim will be removed from kubelet as early as version 1.23. After that, users will no longer be able to use docker as the runtime of a K8s cluster, but images built with docker will still run in K8s clusters without docker.
Dockershim: the "parasite" in kubelet
The main content of this change is the removal of dockershim from kubelet, which is in line with expectations. In the early days, when rkt and docker competed for dominance, kubelet had to maintain two separate code paths to adapt to docker and rkt. Every new kubelet feature had to consider runtime adaptation, which seriously slowed down new releases. Moreover, virtualization-based runtimes were becoming a common requirement; for each new type of runtime, the SIG Node team would have needed to add yet another adapter to kubelet. This was not sustainable, so in 2016 SIG Node proposed the Container Runtime Interface (CRI), a set of abstractions over container operations: as long as a container runtime implements this interface, kubelet can drive it without runtime-specific code.

However, Docker did not (and did not intend to) implement CRI at the time, so kubelet had to maintain an internal component called "dockershim", which acts as a CRI adapter for docker. When creating a container, kubelet calls dockershim through the CRI interface, and dockershim relays the request to docker over HTTP.

With a runtime that implements CRI natively, kubelet's ContainerManager talks to the container runtime directly through CRI, requiring only a single gRPC request. With docker, the CRI request first goes to dockershim over unix:///var/run/dockershim.sock, and dockershim then forwards it to docker (why a containerd sits behind docker is explained later). Implementing a docker adapter inside kubelet was never an elegant design: it lengthens and destabilizes the call chain and adds maintenance burden to kubelet, so its removal was only a matter of time.
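Since CRI is just a gRPC API served over a unix socket, it can be exercised directly with crictl. A small sketch, assuming crictl is installed on the node (socket paths as named above):

    # Talk CRI to dockershim, the adapter living inside kubelet:
    crictl --runtime-endpoint unix:///var/run/dockershim.sock ps

    # Talk CRI to containerd's built-in cri plugin:
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps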
What will be the difference after abandoning Docker?
If you're an end-user of Kubernetes, not a whole lot will be changing for you. This doesn't mean the death of Docker, and it doesn't mean you can't, or shouldn't, use Docker as a development tool anymore. Docker is still a useful tool for building containers, and the images that result from running docker build can still run in your Kubernetes cluster.
As soon as the news came out, the question on everyone's mind was: what actually happens after docker is abandoned? The official answer is: Don't Panic! Let's analyze the aspects the official announcement addresses one by one:
Ordinary K8s users will not be affected.
Yes. For clusters on a recent enough version, production environments only need to switch the runtime from docker to another runtime such as containerd. Containerd is an underlying component of docker, mainly responsible for maintaining the container lifecycle, and it has been battle-tested alongside docker for a long time. It graduated from CNCF in early 2019 and can be used as a standalone container runtime in a cluster. TKE offered containerd as a runtime option as early as 2019, so converting the runtime from docker to containerd is a largely painless process. CRI-O is another frequently mentioned runtime component, maintained primarily by Red Hat. It is more lightweight than containerd but differs more from docker, so migration may involve more differences.
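To make the switch concrete, here is a hedged sketch of the kubelet flags used for a remote runtime in the K8s versions this article covers (exact flags vary by version and by how your distribution launches kubelet):

    # Point kubelet at containerd instead of the built-in docker integration:
    kubelet --container-runtime=remote \
            --container-runtime-endpoint=unix:///run/containerd/containerd.sock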
Images built through docker build in the development environment can still be used in clusters.
Images have always been a major strength of the container ecosystem. Although people habitually say "docker images", images have long been standardized; for the specification, see image-spec. An image built anywhere in compliance with the Image Spec can run on any other Image Spec-compliant container runtime.
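A minimal illustration of this portability (the registry name is a placeholder):

    # Build and push with docker on a development machine...
    docker build -t registry.example.com/demo:v1 .
    docker push registry.example.com/demo:v1

    # ...then run the same image in a cluster whose runtime is containerd:
    kubectl run demo --image=registry.example.com/demo:v1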
Users who use DinD (Docker in Docker) in Pods will be affected
Some users mount docker's socket (/run/docker.sock) into Pods and call docker's API inside the Pod to build images or create build containers. The official recommendation here is to use Kaniko, Img, or Buildah. Alternatively, we can keep the DinD scheme on any runtime by running a docker daemon as a DaemonSet, or by adding a docker daemon sidecar to the Pod that needs docker (see the sketch below). TKE also provides a dedicated solution for using DinD in containerd clusters; see "using DinD in containerd" for details.
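A hedged sketch of the sidecar variant (names are illustrative; the official docker:dind image needs privileged mode, and an empty DOCKER_TLS_CERTDIR makes it listen on plain TCP port 2375):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dind-example
    spec:
      containers:
      - name: build                     # container that runs docker CLI commands
        image: docker:latest
        command: ["sleep", "infinity"]
        env:
        - name: DOCKER_HOST             # point the CLI at the sidecar daemon
          value: tcp://localhost:2375
      - name: dind-daemon               # sidecar running a private docker daemon
        image: docker:dind
        securityContext:
          privileged: true
        env:
        - name: DOCKER_TLS_CERTDIR      # empty value disables TLS (port 2375)
          value: ""
    EOF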
The past and present of containerd
So what exactly is containerd, and what does it have to do with docker? Some readers may be asking this after reading the blog posts above, so let's go over where containerd and docker come from.
Docker and containerd
In 2016, docker split out the modules responsible for the container lifecycle and donated them to the community; this is what we now know as containerd. After the split, when we run a docker command to create a container, docker daemon downloads the image through its Image module, stores it via the Graph Driver module, and then calls containerd through a client to create and run the container. When creating containers with docker we may use --volume to add persistent storage, or connect containers with --network; these capabilities are provided by docker's Storage and Networking modules. (docker also kept some orchestration code of its own.)

K8s, however, provides stronger volume-mounting and cluster-level networking capabilities of its own. In a cluster, kubelet uses only the image download and container management features of docker, not its orchestration, network, or storage functions.

After containerd was donated to the CNCF community, the community added an image management module and a CRI module to it, so containerd can not only manage container lifecycles but also serve directly as a K8s runtime. containerd graduated from CNCF in February 2019 and officially entered production use. Running with containerd as the container runtime gives kubelet everything it needs to create Pods, with purer functional modules and a shorter call chain.

From this comparison we can see that since being donated to the community, containerd's goal has always been to be a simple, stable, and reliable container runtime, while docker aims to be a complete product. This is also mentioned in the official documentation: to give users a better experience and more features, docker provides many capabilities developers need, plus network and volume functions as the foundation for swarm. K8s needs none of these; containerd, by contrast, provides only the basic functions kubelet needs to create Pods, and in return offers higher robustness and better performance. To some extent, even if docker provides a CRI interface after kubelet 1.23, containerd is still the better choice.
Using containerd in Kubernetes clusters
Of course, there are many CRI implementations, mainly containerd and CRI-O. CRI-O is a CRI runtime developed primarily by Red Hat employees and has nothing to do with docker, so migrating to it from docker can be harder. There is no doubt that containerd is the best CRI runtime candidate after docker is abandoned. For developers, the whole migration should be imperceptible, but operators may care more about differences in deployment and day-to-day details. Next, we focus on several differences between using containerd and docker in K8s.
Container log comparison

Storage path:
- Docker: when docker is the K8s container runtime, docker itself stores the container logs in a directory like /var/lib/docker/containers/$CONTAINERID. Kubelet creates soft links under /var/log/pods and /var/log/containers pointing to the log files in that directory.
- Containerd: when containerd is the K8s container runtime, kubelet itself writes the container logs to the /var/log/pods/$CONTAINER_NAME directory and creates soft links in /var/log/containers pointing to them.

Configuration parameters:
- Docker (in the docker configuration file):
    "log-driver": "json-file",
    "log-opts": {"max-size": "100m", "max-file": "5"}
- Containerd, method 1 (kubelet parameters):
    --container-log-max-files=5 --container-log-max-size="100Mi"
- Containerd, method 2 (KubeletConfiguration):
    "containerLogMaxSize": "100Mi",
    "containerLogMaxFiles": 5

Saving container logs to a data disk:
- Docker: mount the data disk to "data-root" (default /var/lib/docker).
- Containerd: create a soft link /var/log/pods pointing to a directory under the data disk mount point. Selecting "Store containers and images on a data disk" in TKE creates the /var/log/pods soft link automatically.
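Whichever runtime is used, log collectors that follow the /var/log/containers soft links keep working. A quick way to check on a node:

    # On a containerd node, kubelet writes the log files itself:
    ls -l /var/log/pods/

    # On both runtimes, /var/log/containers holds soft links to the real files:
    ls -l /var/log/containers/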
Differences in CNI configuration: when using docker, the dockershim in kubelet is responsible for calling the CNI plugin; with containerd, the cri plugin built into containerd calls CNI instead. The CNI configuration therefore needs to be referenced in containerd's configuration file (/etc/containerd/config.toml):
[plugins.cri.cni]
  bin_dir = "/opt/cni/bin"
  conf_dir = "/etc/cni/net.d"
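The files under conf_dir are ordinary CNI configuration. As an illustration only (real clusters normally get this file from their network plugin, and the name and subnet here are made up), a minimal bridge network could look like this:

    cat > /etc/cni/net.d/10-example.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16",
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        }
      ]
    }
    EOF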
Differences between stream services
Description:
Commands such as kubectl exec/logs need to establish a stream channel between the apiserver and the container runtime.
The docker API itself provides stream services, and the dockershim inside kubelet streams and forwards through the docker API. Containerd's stream service needs to be configured separately:
[plugins.cri]
  stream_server_address = "127.0.0.1"
  stream_server_port = "0"
  enable_tls_streaming = false
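After changing the configuration, restart containerd and verify with any streaming command (the pod name is a placeholder):

    systemctl restart containerd

    # exec, attach and logs -f all travel through the stream server:
    kubectl exec -it some-pod -- sh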
What is the difference in configuration before and after K8s 1.11?
Containerd's stream service needs to be configured differently depending on the K8s version.
Before K8s 1.11: kubelet does not proxy the stream, it only redirects. That is, kubelet sends the stream server address exposed by containerd to the apiserver, and the apiserver accesses containerd's stream service directly. In this case, the stream service endpoint needs authentication for security protection.
K8s 1.11 and later: K8s 1.11 introduced the kubelet stream proxy, so containerd's stream service only needs to listen on the local address.
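To make the pre-1.11 case concrete, a hedged sketch (the address, port, and TLS choice below are illustrative, not defaults): without the kubelet proxy, the stream server must listen on an address the apiserver can reach rather than 127.0.0.1 as shown above.

    # In /etc/containerd/config.toml, for clusters OLDER than K8s 1.11:
    #   [plugins.cri]
    #     stream_server_address = "10.0.0.8"  # this node's IP, reachable by apiserver
    #     stream_server_port = "10010"
    #     enable_tls_streaming = true         # protect the now-exposed endpoint
    systemctl restart containerd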
Using containerd in TKE clusters
TKE has supported containerd as a container runtime option since May 2019. As TKE gradually added log collection and GPU support to containerd clusters, containerd shed its Beta label in TKE in September 2020 and can now be used formally in production environments. Over long-term use we also found some containerd problems and fixed them promptly, for example:
Pods stuck in Terminating due to a handling bug
Image file loss caused by a kernel-version issue
There are three ways to use containerd as a runtime in a TKE cluster:
When creating a cluster, select K8s version 1.12.4 or above and choose containerd as the runtime component.
In an existing docker cluster, add containerd nodes by creating a node pool whose runtime component is containerd (New node pool > More settings > Runtime components).
In an existing docker cluster, change the runtime component attribute of the cluster or of a node pool to "containerd".
Note: the latter two methods cause docker nodes and containerd nodes to coexist in the same cluster. If any services use Docker in Docker, or otherwise depend on the node's docker daemon and docker.sock, take measures in advance to avoid problems: for example, use node labels and scheduling to ensure such services land on docker nodes, or run Docker in Docker on the containerd cluster as described earlier.
At this point, I believe you have a deeper understanding of how to use and configure Stream services in containerd. You might as well try it out in practice.