01
Introduction to the Zun Service
Zun is the container service (Containers as a Service) of OpenStack. It is similar to AWS's ECS, but implemented differently: ECS launches containers on EC2 virtual machine instances, while Zun runs containers directly on compute nodes.
Zun also differs from Magnum, the other container-related OpenStack project. Magnum provides container orchestration: it delivers container infrastructure such as Kubernetes, Swarm and Mesos clusters and manages those clusters. Zun, by contrast, provides a native container service that supports different runtimes such as Docker and Clear Containers, and its unit of management is the container itself.
The architecture of the Zun service is shown in the figure below.
Zun is very similar to Nova in both function and structure; the difference is that Zun delivers containers while Nova delivers virtual machines, both mainstream models for delivering compute. The functional similarities include:
Both provide networking through Neutron.
Both provide persistent storage through Cinder.
Both support storing images in Glance.
Both implement other common features such as quotas and security groups.
The structural similarity shows up in their components:
Both consist of API, scheduling and compute modules. Nova has three core components (nova-api, nova-scheduler and nova-compute), while Zun has two (zun-api and zun-compute); there is no separate zun-scheduler because the scheduler is integrated into zun-api.
nova-compute creates virtual machines through a compute driver such as Libvirt; zun-compute creates containers through a container driver such as Docker.
Nova exposes virtual consoles such as VNC (nova-novncproxy) and SPICE (nova-spiceproxy) through a set of proxies; Zun likewise implements remote attach to containers by proxying the container's websocket.
02
Zun service deployment
Deploying Zun follows the same pattern as Nova and Cinder: on the controller node you create the database, create the service and register its endpoints in Keystone, and finally install the relevant packages and initialize the configuration. A compute node needs the zun-compute service plus the container runtime it will use, such as Docker. For the detailed installation steps, refer to the official documentation. If you only want to run a PoC, you can quickly bring up an all-in-one environment with DevStack: a local.conf that enables the Zun, Kuryr and Docker plugins is enough, and DevStack then installs all of these components automatically.
03
Getting started with the Zun service
3.1 Dashboard
After the Zun service is installed, containers can be created and managed through the zun command line as well as Dashboard.
A nice feature is that once Zun is installed, the Dashboard can offer a Cloud Shell, letting users run the OpenStack command line interactively right in the Dashboard.
Under the hood, it simply starts a gbraad/openstack-client:alpine container through Zun.
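For illustration, here is roughly what launching such a client container amounts to, written directly against the Docker SDK for Python rather than the Zun API; the command and container name are assumptions, only the image name comes from the text above.

```python
# Illustrative only: start an interactive OpenStack client container with the Docker SDK.
import docker

client = docker.from_env()
shell = client.containers.run(
    "gbraad/openstack-client:alpine",  # image used by the Cloud Shell feature
    command="/bin/sh",                 # assumed entry command
    tty=True,
    stdin_open=True,
    detach=True,
    name="cloud-shell-demo",           # hypothetical container name
)
print(shell.short_id)
```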
Creating a container through the Dashboard is very similar to creating a virtual machine: in the panel you select an image, a spec, a volume (existing or new), a network/port, a security group, and optionally a scheduler hint, as shown in the figure below.
The Miscellaneous panel holds container-specific settings, such as environment variables and the working directory.
3.2 Command line operation
Creating a container from the command line is just as easy; anyone who has used the nova and docker command lines will have no trouble. Take creating a mysql container as an example.
The volume size is given via the --mount parameter; because no volume_id is specified, Zun creates a new volume. Note that a volume created this way by Zun is automatically removed when the container is deleted. If you need the data to persist, first create a volume through Cinder and then pass its volume_id via the source option of --mount; an existing volume is not deleted when the container is deleted.
Another difference from virtual machines: instead of picking a flavor, a container's cpu, memory and disk are specified directly.
If the --image-driver parameter is not specified, the image is pulled from Docker Hub by default; if glance is specified, the image is fetched from Glance instead.
Also note that mysql requires its data directory to be empty at initialization, but formatting a newly created volume leaves a lost+found directory behind; it has to be removed manually, otherwise mysql initialization fails.
After the container is created, you can list it with the zun list command.
The fixed IP of the mysql container is 192.168.233.80. As with virtual machines, a tenant IP is not reachable from outside by default, so a floating IP needs to be bound.
At the moment the zun command line cannot show floating IPs; they can only be looked up with the neutron command. Once the floating IP is known and the security group allows access to port 3306, you can connect to the mysql service remotely.
Virtual machines in the same tenant can, of course, reach the mysql service directly via the fixed IP, for example as sketched below.
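For example, from a client VM in the same tenant (or via the floating IP from outside), a minimal PyMySQL check looks like this; the password is a placeholder:

```python
# Minimal check that the containerized MySQL is reachable (placeholder credentials).
import pymysql

conn = pymysql.connect(
    host="192.168.233.80",  # fixed IP from the listing; use the floating IP from outside
    port=3306,
    user="root",
    password="changeme",    # hypothetical password set when the container was created
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())
conn.close()
```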
From the user's point of view there is no difference between running mysql in a container and running it in a virtual machine. In the same environment, virtual machines and containers coexist and can talk to each other: the application layer uses them transparently, while the underlying choice between a VM and a container can be made per use case.
3.3 About capsules
Besides managing individual containers, Zun also introduces the concept of a capsule. A capsule is similar to a Kubernetes pod: it can hold multiple containers, and those containers share the network, IPC and PID namespaces.
To start a mysql service as a capsule, you declare it in a YAML file (much like a pod spec) and then create the mysql capsule from that file.
Looking at the result, the capsule's init container uses the Kubernetes pause image, which holds the shared namespaces that the other containers join; a sketch of this pattern follows.
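The pause-container pattern itself is easy to reproduce with plain Docker: start a minimal sandbox container that owns the shared namespaces, then have the application container join them. The sketch below uses the Docker SDK for Python and is purely illustrative of the mechanism, not of Zun's capsule code; image tags and credentials are assumptions.

```python
# Sketch of the pod/capsule sandbox pattern with plain Docker (illustrative only).
import docker

client = docker.from_env()

# The sandbox ("pause") container just sleeps and holds the shared namespaces.
sandbox = client.containers.run("k8s.gcr.io/pause:3.1", detach=True, name="demo-sandbox")

# The application container joins the sandbox's network, IPC and PID namespaces.
app = client.containers.run(
    "mysql:5.7",
    detach=True,
    environment={"MYSQL_ROOT_PASSWORD": "changeme"},   # hypothetical credentials
    network_mode="container:" + sandbox.id,
    ipc_mode="container:" + sandbox.id,
    pid_mode="container:" + sandbox.id,
)
print(app.name, "sharing namespaces with", sandbox.name)
```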
3.4 Summary
OpenStack's container service originally lived inside Nova as an implementation of the Nova ComputeDriver interface, so Zun's other features, such as container lifecycle management, image management, service management and action management, closely mirror those of Nova virtual machines; see the official documentation for details, they are not covered here.
04
Zun implementation principles
4.1 Container lifecycle management through the container driver
As mentioned earlier, Zun consists mainly of the zun-api and zun-compute services. zun-api receives user requests, validates parameters and prepares resources, while zun-compute does the actual container management. Where Nova selects its backend via compute_driver, Zun selects it via container_driver; currently only DockerDriver is implemented. So when you ask Zun to create a container, it is ultimately zun-compute that calls Docker to create it.
Let's take the creation of a container as an example to outline its process.
4.1.1 zun-api
The request first hits zun-api; the main code lives in zun/api/controllers/v1/containers.py and zun/compute/api.py. The entry point for creating a container is the post() method, and the call flow is as follows:
zun/api/controllers/v1/containers.py
Policy enforce: check the policy to verify that the user is allowed to call the create-container API.
Check security group: verify that the security group exists and resolve the passed name to its ID.
Check container quotas: check the tenant's quota.
Build requested networks: validate the network configuration, e.g. that the given port exists and the network ID is valid, and build an internal network model dictionary. Note that this step only validates; the port is not created here.
Create container object: build the container object model from the passed parameters.
Build requested volumes: validate the volume configuration; if a volume ID is passed, check that the volume exists, and if only a size is given, call the Cinder API to create a new volume (a rough sketch of this step is shown right below).
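The volume handling in that last step can be sketched directly with python-cinderclient. This is not the Zun source, just an illustration of the decision; the Keystone session setup is omitted and the size is a placeholder.

```python
# Illustrative "reuse an existing volume or create a new one" helper (not Zun's code).
from cinderclient import client as cinder_client

def build_requested_volume(session, volume_id=None, size_gb=1):
    cinder = cinder_client.Client("3", session=session)
    if volume_id:
        # A volume ID was passed in: just verify that the volume exists.
        return cinder.volumes.get(volume_id)
    # Only a size was given: ask Cinder to create a brand-new volume.
    return cinder.volumes.create(size=size_gb)
```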
zun/compute/api.py
Schedule container: use the FilterScheduler to pick a host and return its host object. This works much like nova-scheduler, except that in Zun the scheduler is integrated into zun-api. Supported filters include CPUFilter, RamFilter, LabelFilter, ComputeFilter, RuntimeFilter and so on (a simplified sketch of this filtering is shown after this list).
Image validation: check that the image exists by remotely calling zun-compute's image_search method, which in turn runs docker search. The point is to fail fast rather than discover an invalid image only after the request reaches the compute node.
Record action: like Nova's record action, log the operation performed on the container.
RPC cast container_create: zun-compute's container_create() method is called asynchronously over RPC, and zun-api's part of the work ends here.
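To make the filter-scheduler idea concrete, here is a deliberately tiny sketch of host filtering. It is illustrative only: Zun's real FilterScheduler lives in the Zun source tree and also weighs the surviving hosts.

```python
# Toy filter scheduler: keep hosts that pass every filter, pick the first survivor.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpus: float
    free_ram_mb: int

def cpu_filter(host, request):
    return host.free_cpus >= request["cpu"]

def ram_filter(host, request):
    return host.free_ram_mb >= request["memory_mb"]

def schedule(hosts, request, filters=(cpu_filter, ram_filter)):
    survivors = [h for h in hosts if all(f(h, request) for f in filters)]
    if not survivors:
        raise RuntimeError("No valid host found")
    return survivors[0]

hosts = [Host("node1", 1.5, 2048), Host("node2", 8.0, 16384)]
print(schedule(hosts, {"cpu": 4, "memory_mb": 4096}).name)  # -> node2
```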
4.1.2 zun-compute
zun-compute is responsible for actually creating the container; the code lives in zun/compute/manager.py. The process is as follows:
Wait for volumes available: wait for volume creation to complete and the volume status to become available.
Attach volumes: mount the volumes; the mount process is described in detail later.
Check support disk quota: if a local disk is used, check the local disk quota.
Pull or load image: call Docker to pull or load the image.
Create docker network / create neutron port: this step is described in more detail below.
Create container: call Docker to create the container.
Container start: call Docker to start the container.
The code that calls Docker to pull the image, create the container and start it lives in zun/container/docker/driver.py; this module is essentially a wrapper around the community Docker SDK for Python.
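As a rough illustration of what that wrapper boils down to, the pull/create/start sequence can be expressed directly against the Docker SDK's low-level client. This is a sketch, not the actual DockerDriver code; the container name and credentials are placeholders.

```python
# Pull an image, create a container and start it with the low-level Docker API client.
import docker

client = docker.APIClient(base_url="unix://var/run/docker.sock")

client.pull("mysql", tag="5.7")                       # "pull or load image"
container = client.create_container(                  # "create container"
    image="mysql:5.7",
    name="zun-demo",                                  # hypothetical container name
    environment={"MYSQL_ROOT_PASSWORD": "changeme"},  # hypothetical credentials
)
client.start(container=container["Id"])               # "container start"
```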
Other operations of Zun, such as start, stop, kill, and so on, are implemented in a similar way, so I won't repeat them here.
4.2 Remote container access through websocket
Virtual machines can be accessed remotely through VNC, physical servers through SOL (IPMI Serial Over LAN), and containers through a websocket interface.
Docker natively supports attaching over a websocket (see the API reference "Attach to a container via a websocket"); the endpoint is /containers/{id}/attach/ws, but it is only reachable from the compute node.
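From the compute node itself, the raw attach looks roughly like the sketch below, written with the websocket-client library. It assumes the Docker API is reachable over TCP on port 2375, which is not the default setup (normally only the local unix socket is exposed), and the container ID is a placeholder.

```python
# Direct websocket attach to a container on the compute node (illustrative sketch).
import websocket  # pip install websocket-client

container_id = "CONTAINER_ID"  # placeholder
url = ("ws://127.0.0.1:2375/containers/%s/attach/ws"
       "?logs=0&stream=1&stdin=1&stdout=1&stderr=1" % container_id)

ws = websocket.create_connection(url)
ws.send("echo hello from attach\n")  # written to the container's stdin
print(ws.recv())                     # raw output coming back from the container
ws.close()
```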
So how is this exposed through the public API? Just as Nova and Ironic do, Zun uses a proxy: the process responsible for forwarding container websocket traffic is zun-wsproxy.
When zun-compute's container_attach() method is called, it saves the container's websocket_url and websocket_token to the database.
zun-wsproxy then reads the container's websocket_url and uses it as the forwarding destination.
The container's shell can thus be accessed remotely through the Dashboard.
Of course, you can also attach to a container from the command line with zun attach.
4.3 Container persistent storage with Cinder
Earlier we saw Zun use Cinder for container persistent storage. My previous article on Docker with OpenStack Cinder covered the docker-cinder-driver developed by John Griffith and the OpenStack Fuxi project, both of which mount Cinder volumes into Docker containers. In addition, the cinderclient extension python-brick-cinderclient-ext implements local attach of Cinder volumes, i.e. mounting a Cinder volume on the physical host.
Zun does not reuse those modules; it reimplements volume attach itself, though the underlying principle is exactly the same and consists mainly of the following steps:
Connect volume: attach the volume to the host where the container lives. The connection protocol comes from the initialize_connection information: for an LVM backend this is usually iSCSI, while for Ceph RBD the device is mapped directly with rbd map.
Ensure mountpoint tree: check whether the mount point path exists and create it with mkdir if it does not.
Make filesystem: a brand-new volume carries no filesystem, so mounting it would fail; create a filesystem in that case.
Do mount: with everything in place, call the OS mount to attach the volume at the chosen directory.
The Cinder driver code lives in the Cinder class of zun/volume/driver.py.
There, cinder.attach_volume() implements step 1 above, and _mount_device() implements steps 2-4.
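A much simplified sketch of what such a mount helper does for steps 2-4 is shown below, assuming the connect step has already produced a block device path. It is illustrative only, not the Zun implementation, and it needs root privileges.

```python
# Simplified "ensure mountpoint, mkfs if needed, mount" helper (illustrative only).
import os
import subprocess

def mount_device(device_path, mountpoint, fs_type="ext4"):
    # Step 2: make sure the mount point directory exists.
    os.makedirs(mountpoint, exist_ok=True)

    # Step 3: create a filesystem if the device does not carry one yet.
    probe = subprocess.run(["blkid", device_path], capture_output=True)
    if probe.returncode != 0:  # no filesystem signature found
        subprocess.run(["mkfs", "-t", fs_type, device_path], check=True)

    # Step 4: mount the device onto the mount point.
    subprocess.run(["mount", device_path, mountpoint], check=True)

# Example with placeholder paths:
# mount_device("/dev/sdb", "/var/lib/zun/mnt/demo")
```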
4.4 Integrating the Neutron network for multi-tenant container networking
4.4.1 About container networking
The container we created earlier through Zun uses a Neutron network, which means containers and virtual machines share exactly the same Neutron networking services: everything the virtual machine network offers, such as multi-tenant isolation, floating IPs, security groups and firewalls, is available to containers too.
So how does Docker integrate with Neutron? According to the official Docker network plugin API, plugins are discovered in the following directories:
/run/docker/plugins
/etc/docker/plugins
/usr/lib/docker/plugins
Inspecting those directories shows that Docker is using the kuryr network plugin.
Kuryr is a relatively young OpenStack project whose stated goal is to "Bridge between container framework networking and storage models to OpenStack networking and storage abstractions", i.e. to integrate container networking and storage with OpenStack; so far only the networking part has been implemented.
There are two main models for implementing container networking:
CNM: proposed by Docker Inc. and used natively by Docker, invoked via HTTP requests (see The Container Network Model Design). A network plugin implements two drivers: an IPAM driver for IP address management and a remote network driver for the network-related configuration.
CNI: proposed by CoreOS and adopted by Kubernetes, invoked through a local binary or command-line call.
Kuryr is therefore split into two sub-projects: kuryr-libnetwork implements the CNM interface, mainly to support native Docker, while kuryr-kubernetes implements the CNI interface to support Kubernetes and also integrates Kubernetes services with Neutron LBaaS. That project deserves an article of its own another time.
Since Zun uses native Docker, it relies on kuryr-libnetwork, which implements the CNM API. It registers with Docker libnetwork as a remote driver, so Docker automatically sends HTTP requests for network operations to the socket address declared by the plugin; in our environment that is http://127.0.0.1:23750, the address kuryr-libnetwork.service listens on. The remote API itself is documented in Docker Remote Drivers.
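To make the remote-driver protocol concrete, here is a toy plugin endpoint written with Flask. It only shows the shape of the HTTP handshake and two of the calls discussed next; the response fields follow the libnetwork remote/IPAM API documents, the address values are placeholders, and this is in no way kuryr-libnetwork itself.

```python
# Toy libnetwork remote plugin: just enough to show the HTTP protocol shape.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/Plugin.Activate", methods=["POST"])
def activate():
    # Tell libnetwork which plugin interfaces we implement.
    return jsonify({"Implements": ["NetworkDriver", "IpamDriver"]})

@app.route("/IpamDriver.RequestAddress", methods=["POST"])
def request_address():
    # In kuryr this returns the IP of the Neutron port that Zun already created.
    return jsonify({"Address": "192.168.233.80/24"})  # placeholder CIDR

@app.route("/NetworkDriver.CreateEndpoint", methods=["POST"])
def create_endpoint():
    # In kuryr this is where the port binding (veth plumbing) happens.
    return jsonify({"Interface": {}})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=23750)  # the address kuryr-libnetwork listens on
```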
4.4.2 Kuryr implementation principles
Section 4.1 described how zun-compute calls the Docker driver's create() method to create a container. That method does far more than call the Python Docker SDK's create_container(): it also handles a lot of other work, including the network configuration.
First it checks whether the corresponding Docker network exists and creates it if not; the Docker network's name is the Neutron network's UUID.
Then Neutron is called to create the port, so note that the container's port is created by Zun itself, not by Docker libnetwork or Kuryr.
Back to the remote driver: Docker libnetwork first POSTs to kuryr's /IpamDriver.RequestAddress API to ask for an IP allocation, but since Zun has already created the port and allocated the IP, this call is little more than a formality. Only when the docker command is used directly to create a container on the kuryr network does this method actually create a Neutron port.
Next, libnetwork POSTs to kuryr's /NetworkDriver.CreateEndpoint method. The most important step here is the binding, i.e. attaching the port to the host; the binding logic is split out into the kuryr.lib library. In our case the veth driver is used, so the work is done by the port_bind() method of kuryr/lib/binding/drivers/veth.py. It creates a veth pair: one end, named tap-xxxx (where xxxx is a prefix of the port ID), stays in the host namespace, while the other end, t_cxxxx, is moved into the container's namespace and configured with the IP. A shell script (under /usr/local/libexec/kuryr/) then adds the tap device to the OVS br-int bridge. If HYBRID_PLUG is used, i.e. security groups are implemented with Linux bridge rather than OVS, a qbr-xxx bridge is created along with another veth pair connecting it to br-int.
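The plumbing described above can be sketched with plain ip and ovs-vsctl commands driven from Python. This is bare-bones and illustrative only: the real logic lives in kuryr.lib's veth binding driver and its shell helpers, and the interface naming, namespace and IDs here are placeholders.

```python
# Bare-bones veth binding sketch: host end on br-int, container end in a netns.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

def bind_port(port_id, netns, ip_cidr, mac):
    host_if = "tap" + port_id[:11]  # host-side veth, named after the port ID
    cont_if = "t_c" + port_id[:11]  # container-side veth

    # Create the veth pair and move one end into the container's network namespace.
    run("ip", "link", "add", host_if, "type", "veth", "peer", "name", cont_if)
    run("ip", "link", "set", cont_if, "netns", netns)
    run("ip", "netns", "exec", netns, "ip", "addr", "add", ip_cidr, "dev", cont_if)
    run("ip", "netns", "exec", netns, "ip", "link", "set", cont_if, "up")
    run("ip", "link", "set", host_if, "up")

    # Plug the host end into br-int and tag it so the Neutron OVS agent recognizes it.
    run("ovs-vsctl", "add-port", "br-int", host_if,
        "--", "set", "Interface", host_if,
        "external-ids:iface-id=" + port_id,
        "external-ids:iface-status=active",
        "external-ids:attached-mac=" + mac)
```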
As you can see, binding a Neutron port to a container is essentially no different from binding it to a virtual machine.
The only difference is that for a virtual machine the tap device is wired directly to the VM's virtual NIC, whereas for a container the other end of the veth pair is moved into the container's namespace.
You might ask: who updates the br-int flow tables? Exactly the same machinery as for virtual machines: when the port is updated, the Neutron server sends an RPC to the L2 agent (e.g. neutron-openvswitch-agent), which updates the corresponding tap device and flow tables according to the port's state.
So kuryr really does just one thing: it binds the port that Zun requested to the container.
05
Summary
The OpenStack Zun project integrates containers cleanly with Neutron and Cinder; together with the Ironic bare metal service, OpenStack lets containers, virtual machines and bare metal share the same networking and storage. I expect data centers to mix bare metal, virtual machines and containers for a long time to come. OpenStack puts containers on an equal footing with virtual machines and bare metal, sharing resources and aligning features, so applications can choose containers, VMs or bare metal according to their needs with no difference in how they are consumed; users only need to care about performance requirements and any special hardware access, and the choice is otherwise transparent to the workload.
References
Docker SDK for Python: https://docker-py.readthedocs.io/en/stable/
Zun documentation: https://docs.openstack.org/zun/latest/
https://docs.docker.com/engine/api/v1.39/#operation/ContainerAttachWebsocket
Principle Analysis and practice of http://int32bit.me/2017/10/04/Docker using OpenStack-Cinder to persist volume
https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/use-cinder-without-nova.html
https://docs.docker.com/engine/extend/plugin_api/
https://github.com/docker/libnetwork/blob/master/docs/design.md
https://github.com/docker/libnetwork/blob/master/docs/ipam.md
https://github.com/docker/libnetwork/blob/master/docs/remote.md
https://docs.openstack.org/kuryr-libnetwork/latest/
https://docs.openstack.org/magnum/latest/user/
https://github.com/docker/libnetwork
https://www.nuagenetworks.net/blog/container-networking-standards/
http://blog.kubernetes.io/2016/01/why-Kubernetes-doesnt-use-libnetwork.html