The Metadata Acquisition Mechanism of OpenStack

This article introduces OpenStack's metadata acquisition mechanism and walks through how the services involved cooperate to deliver metadata to virtual machines.
Metadata is not an unfamiliar concept in cloud computing. A metadata service can inject additional information into a virtual machine, so that the virtual machine can apply customized configuration after it is created. In OpenStack, the metadata service can provide a virtual machine with its hostname, ssh public keys, custom data passed in by the user, and so on. These data fall into two categories: metadata and user data. Metadata covers data about the virtual machine itself, such as the hostname, ssh keys, and network configuration, while user data consists of user-supplied scripts, commands, and the like. Regardless of the category, OpenStack delivers the data to virtual machines in the same way.
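To make the two categories concrete, the following is a minimal, illustrative user-data payload in cloud-config format; cloud-init in the guest reads it and applies the settings on first boot. The hostname, key, and command values are placeholders, not values from this article.

```yaml
#cloud-config
# Illustrative only: set the hostname, authorize an ssh key,
# and run a command on first boot.
hostname: myinstance
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example
runcmd:
  - echo "first boot" > /tmp/first-boot
```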
The Acquisition Mechanism of Metadata
In OpenStack, there are two ways for a virtual machine to obtain metadata: the config drive mechanism and the metadata RESTful service. We introduce and analyze these two mechanisms in turn below.
Config drive
The config drive mechanism means that OpenStack writes the metadata to a special configuration device of the virtual machine; when the virtual machine starts, the device is mounted and read automatically, achieving the goal of delivering the metadata. In the guest operating system, the device that stores the metadata must carry an ISO 9660 or VFAT file system. The specific implementation varies with the hypervisor and configuration. Take libvirt as an example: OpenStack writes the metadata to a libvirt virtual disk file and instructs libvirt to present it to the guest as a cdrom device, as shown in figures 1 and 2. When the virtual machine starts, cloud-init in the guest operating system mounts and reads the device, then configures the virtual machine according to what it reads.
Figure 1. XML definition file of the virtual machine
Figure 2. Virtual disk files that store metadata
Of course, to achieve the above, both the host and the virtual machine image need to cooperate, and they must meet some conditions:
Host (compute node of OpenStack)
The hypervisors that support the config drive mechanism are libvirt, Hyper-V, and VMware. When using libvirt or VMware as the hypervisor, make sure that the genisoimage program is installed on the host and that the mkisofs_cmd flag is set to the location of genisoimage. When using Hyper-V as the hypervisor, set the mkisofs_cmd flag to the full path of mkisofs.exe, and also set qemu_img_cmd to the path of the qemu-img command in the Hyper-V configuration file.
Virtual machine image
The virtual machine image needs to have cloud-init installed. If cloud-init is not installed, you need to write your own script that, during startup, mounts the configuration disk, reads and parses the data, and performs the corresponding actions according to its content.
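The reading-and-parsing step such a script must perform can be sketched in Python. This is a minimal sketch, not cloud-init's implementation; it assumes the standard OpenStack config drive layout with meta_data.json and user_data under openstack/latest/, and the mount point is whatever your startup script chose.

```python
import json
import os

def read_config_drive(mount_point):
    """Parse metadata and user data from an already-mounted config drive.

    Assumes the standard OpenStack layout under openstack/latest/;
    the keys inside meta_data.json vary by deployment.
    """
    latest = os.path.join(mount_point, "openstack", "latest")
    meta_path = os.path.join(latest, "meta_data.json")
    user_path = os.path.join(latest, "user_data")

    # meta_data.json is always present on a config drive.
    with open(meta_path) as f:
        meta = json.load(f)

    # user_data exists only if the user supplied any at boot time.
    user_data = None
    if os.path.exists(user_path):
        with open(user_path) as f:
            user_data = f.read()

    return meta, user_data
```

A custom startup script would call `read_config_drive("/mnt/config")` after mounting the device, then act on fields such as `meta["hostname"]`.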
OpenStack provides a command line option, --config-drive, to control whether the config drive mechanism is used when creating a virtual machine. For example:
Listing 1
# nova boot --config-drive=true --image image-name --key-name mykey --flavor 1 --user-data ./my-user-data.txt myinstance --file /etc/network/interfaces=/home/myuser/instance-interfaces
Alternatively, it can be configured in /etc/nova/nova.conf, as shown in listing 2, so that the OpenStack compute service uses the config drive mechanism by default when creating virtual machines.
Listing 2
force_config_drive=true
Users can view the written metadata inside the virtual machine. If the guest operating system supports accessing disks by label, you can use the following commands to view it:
Listing 3
# mkdir -p /mnt/config
# mount /dev/disk/by-label/config-2 /mnt/config
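After mounting, the drive typically contains a layout like the following (illustrative; the exact files depend on the OpenStack release and on what was supplied at boot):

```
/mnt/config/openstack/latest/meta_data.json   # metadata: hostname, uuid, keys, ...
/mnt/config/openstack/latest/user_data        # user data, present only if supplied
/mnt/config/ec2/latest/meta-data.json         # EC2-compatible view of the metadata
```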
Metadata RESTful service
OpenStack provides a RESTful interface through which virtual machines can obtain metadata via a REST API. The component that provides this service is nova-api-metadata. Of course, nova-api-metadata alone is not enough to carry a request from the virtual machine to the network node and back; the other services that cooperate to accomplish this are neutron-metadata-agent and neutron-ns-metadata-proxy. Below we examine how they work together to provide the metadata service to virtual machines.
Nova-api-metadata
Nova-api-metadata runs the RESTful service that handles REST API requests from virtual machines. It takes the relevant information from the HTTP headers of a request to obtain the ID of the virtual machine, then reads that virtual machine's metadata from the database, and finally returns the result.
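Conceptually, the handler does little more than a header lookup followed by a database lookup. The following is a hedged sketch of that idea, not Nova's actual code: the header names match those used by the real services, but the in-memory "database" and the handler function are invented for illustration.

```python
import json

# Hypothetical in-memory stand-in for the Nova database,
# keyed by (instance id, tenant id).
FAKE_DB = {
    ("instance-1", "tenant-a"): {"hostname": "myinstance", "keys": ["mykey"]},
}

def handle_metadata_request(headers):
    """Resolve the instance from the request headers and return its metadata."""
    instance_id = headers.get("X-Instance-ID")
    tenant_id = headers.get("X-Tenant-ID")
    record = FAKE_DB.get((instance_id, tenant_id))
    if record is None:
        # Unknown instance: nothing to serve.
        return 404, ""
    return 200, json.dumps(record)
```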
Neutron-metadata-agent
Neutron-metadata-agent runs on the network node and is responsible for forwarding received metadata requests to nova-api-metadata. It determines the IDs of the virtual machine and the tenant and adds them to the HTTP headers of the request; nova-api-metadata retrieves the metadata based on this information.
Neutron-ns-metadata-proxy
Neutron-ns-metadata-proxy also runs on the network node. To avoid collisions between the network segments of network nodes and the virtual network segments of tenants, OpenStack introduces network namespaces; the routers and DHCP servers in Neutron live in their own namespaces. Because a virtual machine's metadata requests all leave the virtual network via a router or a DHCP server, neutron-ns-metadata-proxy is needed to bridge the different network namespaces and forward requests between them. It does so by speaking HTTP over a unix domain socket, forwarding HTTP requests across namespaces, and it adds the X-Neutron-Router-ID and X-Neutron-Network-ID headers to each request so that neutron-metadata-agent can identify which virtual machine sent the request and obtain its ID.
Figure 3. Metadata request sending process
As shown in figure 3, the general process by which a virtual machine obtains metadata is as follows: first, the request reaches neutron-ns-metadata-proxy, which adds the router-id and network-id to the request; the request is then forwarded over a unix domain socket to neutron-metadata-agent, which uses the router-id, network-id, and source IP of the request to look up the port information and thereby obtain the instance-id and tenant-id, which it adds to the request; finally, the request is forwarded to nova-api-metadata, which uses the instance-id and tenant-id to fetch the virtual machine's metadata and returns the response.
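The chain in figure 3 can be modeled as three functions, each enriching the request's headers before handing it on. This is an illustrative simulation under stated assumptions: the header names follow the real services, but the port table, metadata store, and function names are invented for this sketch.

```python
# Hypothetical lookup tables standing in for Neutron's port database
# and Nova's metadata database.
PORTS = {("router-1", "10.0.0.5"): ("instance-1", "tenant-a")}
METADATA = {("instance-1", "tenant-a"): {"hostname": "myinstance"}}

def ns_metadata_proxy(request, router_id, network_id):
    # neutron-ns-metadata-proxy: tag the request with where it came from,
    # then forward it (over a unix domain socket, in reality).
    request["headers"]["X-Neutron-Router-ID"] = router_id
    request["headers"]["X-Neutron-Network-ID"] = network_id
    return metadata_agent(request)

def metadata_agent(request):
    # neutron-metadata-agent: resolve the source IP within the tagged
    # router/network to a port, yielding the instance and tenant IDs.
    key = (request["headers"]["X-Neutron-Router-ID"], request["source_ip"])
    instance_id, tenant_id = PORTS[key]
    request["headers"]["X-Instance-ID"] = instance_id
    request["headers"]["X-Tenant-ID"] = tenant_id
    return api_metadata(request)

def api_metadata(request):
    # nova-api-metadata: return the stored metadata for that instance.
    key = (request["headers"]["X-Instance-ID"],
           request["headers"]["X-Tenant-ID"])
    return METADATA[key]
```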
Above we analyzed how requests are forwarded between the services, so only one question remains before the whole metadata path is clear: how does the virtual machine get its request to neutron-ns-metadata-proxy in the first place?
Let's first look at the request the virtual machine sends. The metadata service was first proposed by Amazon, which stipulated that its address be 169.254.169.254, so the virtual machine sends its metadata request to 169.254.169.254:80. How, then, does this request get out of the virtual machine? Currently, Neutron solves this in two ways: sending the request through the router, or sending it through the DHCP server.
Send a request through router
If the subnet where the virtual machine resides is connected to a router, the packets sent to 169.254.169.254 are delivered to that router.
As shown in figure 4, Neutron adds iptables rules in the network namespace where the router resides to redirect these packets to port 9697, on which neutron-ns-metadata-proxy listens; the proxy therefore receives the packets, and the processing and forwarding flow described above begins.
Figure 4. iptables rules in the network namespace where the router resides
Figure 5. The neutron-ns-metadata-proxy service listening on port 9697
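The redirect rule in figure 4 is of the following general form (a sketch; the namespace name and exact rule Neutron generates vary by deployment and release):

```shell
# Run inside the router's namespace: redirect metadata traffic bound for
# 169.254.169.254:80 to local port 9697, where neutron-ns-metadata-proxy
# listens. "qrouter-<router-id>" is a placeholder namespace name.
ip netns exec qrouter-<router-id> \
  iptables -t nat -A PREROUTING -d 169.254.169.254/32 \
  -p tcp --dport 80 -j REDIRECT --to-ports 9697
```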
Send a request through DHCP
If the subnet where the virtual machine resides is not connected to any router, the request cannot be forwarded through a router. In that case, Neutron forwards the metadata request through the DHCP server, which sets a static route in the virtual machine via option 121 of the DHCP protocol. As shown in figure 6, 10.0.0.3 is the IP address of the DHCP server; looking at the virtual machine's static routing table, we can see that packets sent to 169.254.169.254 are routed to 10.0.0.3, the DHCP server.
Figure 6. Static routing table in the virtual machine
In addition, looking at the DHCP server's IP configuration, we find that it is configured with two IP addresses, one of which is 169.254.169.254. As in the router case, Neutron starts a neutron-ns-metadata-proxy service listening on port 80 in the DHCP network namespace, and the request then enters the same processing and forwarding flow.
Figure 7. IP configuration of the DHCP server
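Since Neutron's DHCP agent is backed by dnsmasq, the option 121 route of figure 6 corresponds to a configuration line of roughly the following shape (illustrative; the exact per-network options file Neutron writes differs in format across releases):

```
# DHCP option 121 (classless static route): tell guests to send
# 169.254.169.254/32 traffic via the DHCP server at 10.0.0.3.
dhcp-option=121,169.254.169.254/32,10.0.0.3
```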