This article walks through a sample analysis of the OpenStack container networking project Kuryr. It is meant as a practical reference; follow along for a closer look.
Containers have become very popular in recent years, and many projects have looked at combining containers with SDN. Kuryr is one of them. The Kuryr project lives under the OpenStack big tent and aims to connect container networking with OpenStack Neutron. At first glance, Kuryr looks like yet another SDN project under the Neutron framework, controlled through Neutron's unified northbound interface. In fact, it is the opposite: Kuryr uses Neutron as its southbound interface to reach the container network. Kuryr's northbound side is the container network interface, and its southbound side is OpenStack Neutron.
Background of Kuryr
Before the formal introduction, a word about the name. Kuryr is the Czech word kurýr, corresponding to the English courier, meaning messenger. As the name suggests, Kuryr does not produce networking information itself; it is a porter in the network world, which the project's icon also reflects. And since both words derive from Latin, it is a fair guess that Kuryr is pronounced much like courier.
When Kuryr was first created, it was designed to connect Docker with Neutron, bringing Neutron's network services to Docker. As containers evolved, container networking split into two camps: one is Docker's native CNM (Container Network Model), and the other is the more broadly adopted CNI (Container Network Interface). Kuryr correspondingly has two branches: kuryr-libnetwork (CNM) and kuryr-kubernetes (CNI).
How Kuryr works in Libnetwork
Kuryr-libnetwork is a plugin running under the Libnetwork framework. To understand how kuryr-libnetwork works, first look at Libnetwork. Libnetwork is an independent project that factors the network logic out of Docker Engine and libcontainer into a module, replacing Docker Engine's original network subsystem. Libnetwork defines a flexible model in which local or remote drivers provide network services to containers. Kuryr-libnetwork is a remote driver implementation for Libnetwork, and it has become one of the remote drivers recommended on Docker's official website.
A Libnetwork driver can be seen as a Docker plugin, sharing the same plugin management framework as Docker's other plugins. That is, a Libnetwork remote driver is activated in the same way as any other plugin in Docker Engine, and it speaks the same protocol. The interfaces that a Libnetwork remote driver needs to implement are described in detail in Libnetwork's Git repository.
All kuryr-libnetwork has to do is implement those interfaces, as its code shows. Libnetwork calls the remote driver's Plugin.Activate interface to discover what the driver implements. From the kuryr-libnetwork code, we can see that it implements two driver types: NetworkDriver and IpamDriver.
@app.route('/Plugin.Activate', methods=['POST'])
def plugin_activate():
    """Returns the list of the implemented drivers.

    This function returns the list of the implemented drivers, which
    defaults to ``[NetworkDriver, IpamDriver]``, in the handshake of the
    remote driver. The handshake happens right before the first request
    against Kuryr. See the following link for more details about the
    spec: docker/libnetwork  # noqa
    """
    app.logger.debug("Received /Plugin.Activate")
    return flask.jsonify(const.SCHEMA['PLUGIN_ACTIVATE'])
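You can trigger this handshake by hand to see it in action. A minimal sketch, assuming kuryr-libnetwork is listening on its default address 127.0.0.1:23750 (discussed below):

import requests

# Ask the remote driver what it implements, exactly as Docker would
# during plugin activation.
resp = requests.post('http://127.0.0.1:23750/Plugin.Activate')
print(resp.json())
# Expected shape: {"Implements": ["NetworkDriver", "IpamDriver"]}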
How does Kuryr register itself with Libnetwork as a remote driver? Better asked: how does Libnetwork discover Kuryr? This relies on Docker's plugin discovery mechanism. When a user or a container needs a Docker plugin, it only specifies the plugin's name. Docker then looks in well-known directories for a file with the same name as the plugin, and that file defines how to connect to the plugin.
If kuryr-libnetwork is installed with devstack, the devstack script creates such a file under /usr/lib/docker/plugins/kuryr. Its content is very simple; by default it is just http://127.0.0.1:23750. In other words, kuryr-libnetwork actually runs an HTTP server, and this HTTP server exposes all of the interfaces Libnetwork needs. When Docker finds the file, it uses its contents to communicate with Kuryr.
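For illustration, assuming the discovery file is named kuryr.spec under that directory (the filename is an assumption; the URL is the default just mentioned), its entire content would be this single line:

http://127.0.0.1:23750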
So the interaction between Libnetwork and Kuryr is as follows:
Libnetwork: someone wants to use a plugin called Kuryr. Let me have a look. Oh, hello, Kuryr, what can you do?
Kuryr: I implement NetworkDriver and IpamDriver. Satisfied?
How Kuryr connects to Neutron
The previous section described how Kuryr connects to Docker Libnetwork; now let's see how Kuryr docks with OpenStack Neutron. Since both are projects in the OpenStack camp and both are written in Python, there is no suspense: Kuryr uses neutronclient to talk to Neutron. Overall, then, Kuryr sits between Libnetwork on the north side and Neutron on the south side.
Since Neutron sits between Kuryr and the actual L2 implementation underneath, Kuryr does not depend much on any particular L2 implementation. Gal Sagie has illustrated a number of the Neutron L2 implementations supported by Kuryr. Beyond those, I have tried integrating kuryr-libnetwork with Dragonflow; there was nothing special to note, and I may write about it separately.
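Since Kuryr talks to Neutron through python-neutronclient, a minimal sketch of that kind of usage looks like the following (placeholder credentials and a hypothetical network name; in Kuryr these come from its configuration):

from neutronclient.v2_0 import client

# Placeholder credentials for a legacy Keystone v2 endpoint.
neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# Create a Neutron network, roughly what the courier does when
# Libnetwork asks for a new Docker network.
net = neutron.create_network({'network': {'name': 'kuryr-demo'}})
print(net['network']['id'])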
Now let's look at how kuryr-libnetwork plays courier between Neutron and Docker. Since its northbound side is Libnetwork and its southbound side is Neutron, it is easy to guess what kuryr-libnetwork does: it receives Libnetwork's resource model and transforms it into Neutron's resource model. First, look at Libnetwork's resource model, CNM, one of the two container networking camps mentioned earlier. CNM consists of three data models:
Network Sandbox: defines the network configuration of the container
Endpoint: the NIC through which the container accesses the network; it lives in a Sandbox. One Sandbox can contain multiple Endpoints.
Network: equivalent to a switch; Endpoints attach to a Network, and different Networks are isolated from one another.
Mapped to Neutron, an Endpoint is a Neutron Port, and a Libnetwork Network is a Neutron Subnet. Why doesn't Network map to Neutron's Network? Probably because Libnetwork and Neutron define "network" differently; at least when a Neutron Network contains only one Subnet, the two correspond in name.
In addition, Kuryr relies on another OpenStack Neutron feature: the subnetpool. A subnetpool is a purely logical concept in Neutron that guarantees the IP address ranges of all subnets allocated from the pool do not overlap. With this feature, Kuryr ensures that the IP addresses it hands out across Docker Networks are unique.
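Continuing the neutronclient sketch above, creating a subnetpool and carving a subnet out of it might look like this (names and prefixes are illustrative):

# Create a pool of prefixes; Neutron guarantees that subnets
# allocated from it do not overlap.
pool = neutron.create_subnetpool({'subnetpool': {
    'name': 'kuryr-demo-pool',
    'prefixes': ['10.10.0.0/16'],
    'default_prefixlen': 24}})

# Allocate a subnet for the demo network from the pool instead of
# specifying a CIDR by hand.
subnet = neutron.create_subnet({'subnet': {
    'network_id': net['network']['id'],
    'ip_version': 4,
    'subnetpool_id': pool['subnetpool']['id']}})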
In short, Kuryr translates each request from Libnetwork into the corresponding Neutron request and sends it to Neutron.
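As a heavily simplified, hypothetical sketch of that translation (reusing the app, flask, and neutron names from the snippets above; the real kuryr-libnetwork handler also deals with IPAM, tags, and error handling), a Libnetwork CreateNetwork request might be handled like this:

@app.route('/NetworkDriver.CreateNetwork', methods=['POST'])
def create_network():
    # Libnetwork POSTs a JSON body describing the Docker network.
    req = flask.request.get_json(force=True)
    docker_net_id = req['NetworkID']
    # Translate it into the corresponding Neutron request.
    neutron.create_network({'network': {'name': 'kuryr-' + docker_net_id}})
    # Libnetwork expects a JSON reply; an empty object means success.
    return flask.jsonify({})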
How Kuryr connects container and virtual machine networks
For the actual wiring, however, Neutron's API cannot be told how to plug in a container. Neutron neither knows how to connect a container network nor provides an API for it. Kuryr has to do this part itself, and this is where Kuryr's magic lies (otherwise it would be no different from a simple proxy). Let's look at this part last.
When Docker creates a container and needs to create an Endpoint, the request is sent to Kuryr in its role as the Libnetwork remote driver. Upon receiving the request, Kuryr first creates a Neutron port:
neutron_port, subnets = _create_or_update_port(
    neutron_network_id, endpoint_id, interface_cidrv4,
    interface_cidrv6, interface_mac)
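Under the hood this boils down to neutronclient port calls. A rough, hypothetical simplification of what such a helper does (fixed IPs and updates omitted for brevity):

# Create a Neutron port on the network backing the Docker network,
# keyed to the Libnetwork endpoint.
port = neutron.create_port({'port': {
    'name': 'kuryr-' + endpoint_id,
    'network_id': neutron_network_id,
    'mac_address': interface_mac,
    'admin_state_up': True}})['port']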
After that, the corresponding driver is called according to the configuration file. Two drivers are currently supported: veth, which connects container networks on the host, and nested, which connects container networks inside virtual machines. "Host" and "virtual machine" here are relative to OpenStack; strictly speaking, an OpenStack host can itself be a virtual machine, as in my development environment. Take the veth driver as an example, and look at the code first:
try:
    with ip.create(ifname=host_ifname, kind=KIND,
                   reuse=True, peer=container_ifname) as host_veth:
        if not utils.is_up(host_veth):
            host_veth.up()
    with ip.interfaces[container_ifname] as container_veth:
        utils._configure_container_iface(
            container_veth, subnets,
            fixed_ips=port.get(utils.FIXED_IP_KEY),
            mtu=mtu, hwaddr=port[utils.MAC_ADDRESS_KEY].lower())
except pyroute2.CreateException:
    raise exceptions.VethCreationFailure(
        'Virtual device creation failed.')
except pyroute2.CommitException:
    raise exceptions.VethCreationFailure(
        'Could not configure the container virtual device networking.')

try:
    stdout, stderr = _configure_host_iface(
        host_ifname, endpoint_id, port_id,
        port['network_id'],
        port.get('project_id') or port['tenant_id'],
        port[utils.MAC_ADDRESS_KEY],
        kind=port.get(constants.VIF_TYPE_KEY),
        details=port.get(constants.VIF_DETAILS_KEY))
except Exception:
    with excutils.save_and_reraise_exception():
        utils.remove_device(host_ifname)
Much like Docker's bridge network mode, the driver first creates a veth pair, two connected virtual interfaces. One end, the container interface, is placed into the container's network namespace and configured by calling _configure_container_iface; the other end, the host interface, is attached to Neutron's L2 topology by calling _configure_host_iface.
The handling of the host interface is tailored to OpenStack Neutron. Note that different SDNs build different underlying L2 topologies: Open vSwitch, Linux Bridge, MidoNet, and so on. How does Kuryr support these different L2 implementations? Look at an OpenStack Neutron port and you will see an attribute named binding:vif_type, which indicates what kind of L2 the port lives in. Kuryr implements a set of shell scripts, one per L2 type, that plug a given network interface into Neutron's L2 topology. These scripts live in the /usr/libexec/kuryr directory and correspond one-to-one with the possible values of binding:vif_type. So all Kuryr has to do is read the Neutron port information, find the corresponding shell script, and invoke it to attach the host interface of the veth pair to OpenStack Neutron's L2 topology. Once attached, the container genuinely shares an L2 network with the virtual machines, so it can naturally communicate with them, and it can also use the various services Neutron provides: security groups, QoS, and so on. A rough sketch of this dispatch follows the list of supported types below.
Currently, the L2 network types supported by Kuryr are: bridge, iovisor, midonet, ovs, tap, and unbound.
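As promised, here is a rough sketch of the dispatch described above. The script directory matches the text; the argument list passed to the script is an assumption for illustration only:

import os
import subprocess

BINDING_SCRIPT_DIR = '/usr/libexec/kuryr'

def bind_host_iface(port, host_ifname):
    """Plug host_ifname into Neutron's L2 topology by running the
    script matching the port's binding:vif_type (the arguments
    passed to the script are hypothetical)."""
    vif_type = port['binding:vif_type']  # e.g. 'ovs', 'bridge', ...
    script = os.path.join(BINDING_SCRIPT_DIR, vif_type)
    subprocess.check_call([script, 'bind', host_ifname,
                           port['mac_address'], port['network_id']])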
Wait, isn't this very similar to how OpenStack Nova uses Neutron? Nova calls the Neutron API to create a port, then Nova itself creates the actual network interface and binds it to the virtual machine.
Thank you for reading! That concludes this sample analysis of the OpenStack container network project Kuryr. I hope it has been helpful; if you found the article worthwhile, please share it so that more people can see it.