
Why doesn't Kubernetes use libnetwork?


Today I will talk about why Kubernetes does not use libnetwork, a topic many people may not know much about. To help you understand it better, I have summarized the following content; I hope you get something useful out of this article.

Kubernetes has had a basic network plugin since before version 1.0, introduced around the same time that Docker introduced libnetwork and the Container Network Model (CNM). Docker's libnetwork plugin support has since been released and is officially supported, while the Kubernetes plugin system is still in the alpha stage. So the obvious question is why Kubernetes has not adopted libnetwork yet. After all, most vendors will certainly support the Docker plugin, so it would seem natural for Kubernetes to adopt it as well.

Before we start the discussion, the first thing to know is that Kubernetes supports a variety of container runtimes, and Docker is just one of them. Configuring the network is equally important for every runtime. So when people ask "does Kubernetes support CNM?", they are really asking "will Kubernetes support CNM in the Docker runtime?" We would certainly like to use the same network plugin across all runtimes, but that is not an absolute goal.

As it stands, Kubernetes does not adopt CNM/libnetwork for its Docker runtime. Instead, we have recently been exploring the Container Network Interface (CNI), the network model proposed as part of CoreOS's App Container (appc) specification, as an alternative. Why? There are a variety of technical and non-technical reasons.

First of all, Docker's network driver design makes several basic assumptions that are incompatible with Kubernetes, which creates a lot of difficulties for us.

For example, Docker has the concept of "local" and "global" drivers. Local drivers (such as "bridge") are scoped to a single machine and cannot coordinate across machines. Global drivers (such as "overlay") rely on the libkv library for cross-machine coordination. libkv defines an interface for key-value storage, and that interface is very low-level. For Docker's global drivers to run on a Kubernetes cluster, cluster administrators would either have to run separate instances of etcd, ZooKeeper, or Consul (see Docker's multi-host networking documentation), or we would have to provide another libkv implementation inside Kubernetes.
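To make "very low-level" concrete, here is a rough Go sketch of the kind of flat key-value contract that libkv defines. The names and method set are simplified and illustrative rather than the library's exact API:

```go
package kvsketch

// Illustrative sketch of a libkv-style key-value store interface.
// The real library (github.com/docker/libkv) differs in detail; the
// point is that drivers coordinate through raw keys and byte values,
// with no higher-level schema.

// KVPair is a single key with its raw value and a revision counter
// used for compare-and-swap style updates.
type KVPair struct {
	Key       string
	Value     []byte
	LastIndex uint64
}

// Store is the flat, byte-oriented contract a "global" driver expects
// some backend (etcd, ZooKeeper, Consul) to fulfil.
type Store interface {
	Put(key string, value []byte) error
	Get(key string) (*KVPair, error)
	Delete(key string) error
	Exists(key string) (bool, error)
	List(prefix string) ([]*KVPair, error)
	Watch(key string, stopCh <-chan struct{}) (<-chan *KVPair, error)
	AtomicPut(key string, value []byte, previous *KVPair) (*KVPair, error)
}
```

A driver built against this kind of interface expects someone to stand up and operate the backing store, which is exactly the extra operational burden described above.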

The second option, a libkv implementation backed by Kubernetes, looked more flexible and attractive, so we tried to implement it. But the libkv interface is very low-level, and its schema is designed around Docker's own runtime. We would have to either directly expose our underlying key-value store or provide key-value semantics on top of it (that is, an API that implements structured storage over a key-value system). Neither choice is attractive for us in terms of performance, scalability, or security. Going down this path would make the whole system noticeably more complex, which conflicts with the very goal of using Docker's network model to simplify things.

For users who are willing and able to configure Docker and run its global drivers, Docker networking should work well. For Kubernetes, we don't want to get in the way of or dictate Docker's configuration steps, and that should not change however the Kubernetes project evolves; if anything, we will try to stay compatible with more options for users. But the conclusion we have drawn from practice is that Docker's global drivers impose an extra burden on users and administrators, so we will not use them as the default network option, which removes much of the value of using Docker plugins in the first place.

At the same time, Docker's network model makes many assumptions that are incompatible with Kubernetes. For example, Docker versions 1.8 and 1.9 shipped a fundamentally flawed implementation of service discovery that arbitrarily rewrites, and can even corrupt, the /etc/hosts file inside containers (docker #17190), and this behavior cannot easily be turned off. In version 1.10, Docker also plans to bundle a new DNS server, and it is not clear whether that feature can be turned off either. For Kubernetes, binding naming and addressing at the container level is not the right design: we have already defined our own concepts and rules for Service naming, addressing, and binding, and we have our own DNS architecture and service (built on the very mature SkyDNS). A bundled DNS server therefore does not meet our needs, and it may not even be possible to disable it.
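As a small illustration of the addressing model Kubernetes already provides, assuming the conventional cluster DNS naming scheme and a hypothetical Service named "backend" in a namespace "demo", a client simply resolves the Service name through the cluster DNS service; no per-container /etc/hosts rewriting is involved:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Inside a cluster with the usual DNS add-on, a Service named
	// "backend" in namespace "demo" is reachable under this name.
	// "backend" and "demo" are hypothetical names for illustration.
	host := "backend.demo.svc.cluster.local"

	// Resolution goes through the cluster DNS service; nothing is
	// written into the container's /etc/hosts by the runtime.
	addrs, err := net.LookupHost(host)
	if err != nil {
		fmt.Println("lookup failed (expected outside a cluster):", err)
		return
	}
	fmt.Println("Service IPs:", addrs)
}
```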

In addition to the distinction between "local" and "global" drivers, Docker also distinguishes between "in-process" and "out-of-process" ("remote") plugins. We looked at whether we could bypass libnetwork itself (and thereby avoid the problems described above) and drive "remote" plugins directly. Unfortunately, that would mean losing the ability to use Docker's "in-process" plugins, notably "bridge" and "overlay", which in turn removes much of the point of using libnetwork at all.

On the other hand, CNI and Kubernetes are very consistent in design philosophy. CNI is much simpler than CNM, does not require daemons, and is at least plausibly cross-platform (CoreOS's rkt container runtime supports CNI). Being cross-platform means the same network configuration can be used across multiple runtimes (such as Docker, rkt, and Hyper). It also fits the Unix design philosophy: do one thing well.
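The CNI contract itself underlines this simplicity: per the CNI specification, a plugin is just an executable that receives the operation through environment variables and the network configuration as JSON on stdin. Below is a minimal, no-op sketch of that calling convention; a real plugin would configure interfaces in the container's network namespace and print a CNI result on stdout:

```go
// Minimal sketch of a CNI plugin: a plain executable driven by
// environment variables and JSON on stdin. This no-op version only
// demonstrates the calling convention.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// netConf holds the few fields every CNI network config carries.
type netConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

func main() {
	// The runtime (kubelet, rkt, ...) tells the plugin what to do.
	cmd := os.Getenv("CNI_COMMAND")   // e.g. "ADD" or "DEL"
	netns := os.Getenv("CNI_NETNS")   // path to the container's netns
	ifname := os.Getenv("CNI_IFNAME") // interface name to create, e.g. "eth0"

	// The network configuration arrives as JSON on stdin.
	raw, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading config:", err)
		os.Exit(1)
	}
	var conf netConf
	if err := json.Unmarshal(raw, &conf); err != nil {
		fmt.Fprintln(os.Stderr, "parsing config:", err)
		os.Exit(1)
	}

	// A real plugin would now create or delete the interface in netns,
	// assign addresses, and print a CNI result JSON on stdout.
	fmt.Fprintf(os.Stderr, "%s network %q: netns=%s ifname=%s\n",
		cmd, conf.Name, netns, ifname)
}
```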

In addition, wrapping a CNI plugin to implement a more customized one is very simple; it can be done with a simple shell script. CNM, by contrast, is much more complex in this respect. We therefore think CNI is better suited to rapid development and iteration. Early prototypes have shown that we can replace almost all of the hard-coded network logic in kubelet with CNI plugins.
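For example, a wrapper plugin can simply delegate to an existing plugin and adjust the configuration or result on the way through. The sketch below expresses the same idea in Go rather than a shell script; the path to the wrapped "bridge" plugin is illustrative:

```go
// Sketch of "wrapping" one CNI plugin with another: the wrapper
// re-execs a delegate plugin, forwarding stdin and the CNI_*
// environment untouched. A real wrapper could rewrite the config
// before the call or the result after it.
package main

import (
	"os"
	"os/exec"
)

func main() {
	delegate := "/opt/cni/bin/bridge" // illustrative path to the wrapped plugin

	cmd := exec.Command(delegate)
	cmd.Stdin = os.Stdin   // pass the network config JSON through
	cmd.Stdout = os.Stdout // pass the CNI result back unchanged
	cmd.Stderr = os.Stderr
	cmd.Env = os.Environ() // CNI_COMMAND, CNI_NETNS, ... flow through

	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```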

We also tried to build a "bridge" CNM driver for Docker that ran CNI drivers. It turned out to complicate matters. First, the CNM and CNI models are so different that no "method" could make them fit together well. Second, we still had the "global" versus "local" and key-value problems described above. Even assuming the driver is local, we would still need to obtain the corresponding logical network information from Kubernetes.

Unfortunately, Docker's drivers are very awkward to drive from a management platform like Kubernetes. In particular, the drivers refer to the network a container belongs to by an ID allocated inside Docker rather than by a common network name. Such a design makes it hard for a driver to understand a network defined in an external system such as Kubernetes.

We and other network vendors have raised these and other related issues with the Docker developers. Although these issues cause a lot of trouble for non-Docker third-party systems, they are usually closed on the grounds that "this is how it is designed" (libnetwork #139, libnetwork #486, libnetwork #514, libnetwork #865, docker #18864). From these interactions we get the clear impression that Docker is not open to suggestions that might distract from their main direction or reduce their control over the project. This worries us, because Kubernetes has always supported Docker and added many capabilities around it, yet Kubernetes is also a project independent of Docker.

For all of these reasons, we have chosen CNI as the network model for Kubernetes. This will have some unfortunate side effects. Most of them are minor; for example, docker inspect will not show the container's network address. Some are more significant: containers started directly by Docker may not be able to communicate with containers started by Kubernetes, and network integrators must provide a CNI driver to fully integrate their network with Kubernetes. What matters, though, is that Kubernetes will become simpler, more flexible, and free of up-front configuration (such as configuring Docker to use our bridge).

Having read the above, do you have a better understanding of why Kubernetes does not use libnetwork? Thank you for your support.
