This article describes how to implement PCI passthrough in OpenStack Nova. The editor finds it very practical, so it is shared here as a reference; follow along to have a look.
1. PCI passthrough technology
1.1 The relationship between PCI passthrough and Linux virtualization
Device (or PCI) passthrough is a technology that improves the performance of PCI devices by using hardware support from Intel (VT-d) or AMD (IOMMU).
Platform virtualization is the sharing of a platform between two or more operating systems to make more efficient use of resources. But a platform means more than just a processor: it also includes the other important elements that make up the platform, such as memory, networking, and other hardware resources. Some hardware resources can easily be virtualized, such as the processor and memory, while others, such as video adapters and serial ports, cannot. When sharing is impossible or of no use, Peripheral Component Interconnect (PCI) passthrough provides a way to use these resources effectively.
Before exploring passthrough, let's look at how device emulation works today in two hypervisor architectures. The first architecture integrates device emulation into the hypervisor, while the second pushes device emulation out to an application outside the hypervisor.
Device emulation within the hypervisor is a common method implemented in the VMware Workstation product (an operating-system-hosted hypervisor). In this model, the hypervisor includes emulations of common devices that the various guest operating systems can share, such as virtual disks, virtual network adapters, and other necessary platform elements. This particular model is shown in figure 1.
The second architecture is called user-space device emulation (see figure 2). As the name implies, the device emulation is implemented in user space rather than embedded in the hypervisor. QEMU (which provides device emulation in addition to being a hypervisor itself) provides device emulation for a large number of independent hypervisors such as the Kernel-based Virtual Machine (KVM) and VirtualBox. This model is advantageous because the device emulation is independent of the hypervisor and can be shared among multiple hypervisors. It also allows arbitrary device emulation without burdening the hypervisor (which runs in a privileged state) with that function.
There are some obvious advantages to pushing device emulation from the hypervisor into user space, the biggest of which involves what is called the trusted computing base (TCB). The TCB of a system is the set of all components that are critical to its security. One thing is obvious: if the system is minimized, bugs are less likely to occur, so the system is more secure. The same principle applies to the hypervisor. The security of the hypervisor is important because it isolates multiple independent guest operating systems from one another. The less code in the hypervisor (pushing device emulation out into less-privileged user space), the less likely it is to leak privileges to untrusted users.
Another variant of hypervisor-based device emulation is paravirtualized drivers. In this model, the hypervisor contains the physical drivers, and each guest operating system contains a hypervisor-aware driver that works in concert with the hypervisor's drivers (called a paravirtualized, or PV, driver).
Whether the device emulation occurs within the hypervisor or in a guest virtual machine (VM), the emulation methods are similar. Device emulation can mimic a specific device (such as a Novell NE1000 network adapter) or a specific type of disk (such as Integrated Device Electronics [IDE]). The physical hardware can differ completely: for example, while an IDE drive is emulated to the guest operating system, the physical hardware platform can use a Serial ATA (SATA) drive. This technique is useful because IDE support is common in many operating systems and can be used as a common standard, rather than all guest operating systems having to support more advanced drive types.
1.2 Overview of device passthrough
As the two device models described above show, device sharing comes at a cost. Whether the device emulation is performed in the hypervisor or in user space in a separate VM, there is overhead. As long as multiple guest operating systems need to share these devices, the cost is worth it. If sharing is not necessary, there are more efficient ways to provide these devices.
Therefore, at the highest level, device passthrough provides isolation of a device to a particular guest operating system so that the device can be used exclusively by that guest (see figure 3). But why is this useful? There are many reasons why device passthrough is valuable, two of the most important being performance and providing exclusive use of a device that is not inherently shareable.
As far as performance is concerned, near-native performance can be achieved by using device passthrough. This technique is ideal for networking applications (or those with high disk I/O) that have not adopted virtualization because of the contention and performance degradation introduced by going through the hypervisor (to a driver in the hypervisor, or from the hypervisor to a user-space emulation). Devices that cannot be shared can also be assigned to specific guests. For example, if a system contains multiple video adapters, those adapters can be passed through to specific guest domains.
Finally, there may be specialized PCI devices that only one guest domain uses, or devices that the hypervisor does not support and should therefore be passed through to the guest. A single USB port could be isolated to a given domain, or a serial port (which is itself not shareable) could be isolated to a particular guest.
1.3 Hardware support for device passthrough
Both Intel and AMD provide support for device passthrough (along with new instructions that assist the hypervisor) in their processor architectures. Intel calls this support Virtualization Technology for Directed I/O (VT-d), while AMD calls it I/O Memory Management Unit (IOMMU) support. In either case, the new CPUs provide a way to map PCI physical addresses to guest virtual addresses. When this mapping occurs, the hardware takes care of access (and protection), and the guest operating system can use the device as if it were running on a non-virtualized system. In addition to mapping guest memory to physical memory, the new architectures also provide an isolation mechanism that prevents other guests (or the hypervisor) from accessing that memory. Intel and AMD CPUs provide further virtualization capabilities beyond those described here.
Another innovation that helps scale interrupts to large numbers of VMs is called Message Signaled Interrupts (MSI). Rather than relying on physical interrupt pins that must be associated with a guest, MSI converts interrupts into messages that are easier to virtualize (scaling to thousands of individual interrupts). MSI has been available since PCI 2.2 and is also provided by PCI Express (PCIe), where it allows the fabric to scale to many devices. MSI is an ideal I/O virtualization technology because it supports isolation of interrupt sources (as opposed to physical pins that must be multiplexed or routed through software).
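As a quick illustration (not part of the original article), on a Linux host you can check whether the kernel sees an IOMMU and whether a given PCI device advertises MSI/MSI-X capabilities; the address 0000:41:00.0 is just a placeholder:

    # Check that the kernel detected an IOMMU (DMAR is Intel's VT-d ACPI table)
    dmesg | grep -i -e DMAR -e IOMMU
    # List the IOMMU groups the kernel has created
    ls /sys/kernel/iommu_groups/
    # Show the MSI/MSI-X capabilities of one PCI device
    lspci -vv -s 0000:41:00.0 | grep -i msi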
2. Implementing PCI passthrough in OpenStack Nova (translated from the OpenStack administrator guide)
2.1 Steps to implement PCI passthrough in OpenStack Nova
Enable PCI passthrough (Compute): enable PCI passthrough on the host.
Configure PCI devices in nova-compute (Compute).
Configure nova-scheduler (Controller).
Configure nova-api (Controller).
Configure a flavor (Controller).
The PCI device with address 0000:41:00.0 is used as an example. Adjust this according to your actual environment.
2.2 Enable PCI passthrough (Compute)
PCI passthrough must first be enabled in the server BIOS: enable VT-d for Intel CPUs, or the IOMMU for AMD CPUs.
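In addition to the BIOS setting, the Linux kernel on the host typically also needs the IOMMU enabled on its command line. The following is a minimal sketch, assuming an Intel CPU and a GRUB 2-based distribution (use amd_iommu=on on AMD CPUs, and adjust the grub.cfg path for your distribution):

    # In /etc/default/grub, append the IOMMU option to the kernel command line:
    #   GRUB_CMDLINE_LINUX="... intel_iommu=on"
    # Regenerate the GRUB configuration and reboot:
    grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot
    # After the reboot, confirm the IOMMU is active:
    dmesg | grep -i -e DMAR -e IOMMU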
2.3 Configure PCI devices in nova-compute (Compute)
1. Configure nova-compute to allow the PCI device to be passed through to VMs. Edit /etc/nova/nova.conf:
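A minimal sketch of the whitelist entry, matching the example device at address 0000:41:00.0 and using the legacy pci_passthrough_whitelist option from this version of the guide (the exact option name depends on the Nova release):

    [DEFAULT]
    pci_passthrough_whitelist = {"address": "0000:41:00.0"}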
Alternatively specify multiple PCI devices using whitelisting:
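For example, a sketch of a whitelist entry that matches devices by vendor and product ID instead of by address (the IDs here correspond to the example device):

    [DEFAULT]
    pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "154d"}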
All PCI devices matching the vendor_id and product_id are added to the pool of PCI devices available for passthrough to VMs.
For more information about the syntax of pci_passthrough_whitelist, refer to nova.conf configuration options.
2. Restart nova-compute with service nova-compute restart.
2.4 Configure nova-scheduler (Controller)
1. Configure nova-scheduler as specified in the Configure nova-scheduler section of the guide, that is, enable the PciPassthroughFilter; a minimal sketch is shown below.
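A minimal sketch of the scheduler options in /etc/nova/nova.conf, adding the PciPassthroughFilter to the enabled filters (the filter list below is only illustrative; keep whatever filters your deployment already uses):

    [DEFAULT]
    scheduler_available_filters = nova.scheduler.filters.all_filters
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter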
2. Restart nova-scheduler with service nova-scheduler restart.
2.5 Configure nova-api (Controller)
1. Specify the PCI alias for the device.
Configure a PCI alias a1 to request a PCI device with a vendor_id of 0x8086 and a product_id of 0x154d. The vendor_id and product_id correspond to the PCI device at address 0000:41:00.0.
Edit /etc/nova/nova.conf; a minimal sketch of the alias entry is shown below.
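A minimal sketch of the alias entry, matching the example vendor_id and product_id (note that the IDs are written without the 0x prefix in nova.conf):

    [DEFAULT]
    pci_alias = {"vendor_id": "8086", "product_id": "154d", "name": "a1"}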
For more information about the syntax of pci_alias, refer to nova.conf configuration options.
2. Restart nova-api with service nova-api restart.
2.6 Configure a flavor (Controller)
Configure a flavor to request two PCI devices, each with vendor_id 0x8086 and product_id 0x154d:
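A sketch of the flavor change, assuming an existing flavor named m1.large; "a1:2" requests two devices matching the alias a1 defined above:

    nova flavor-key m1.large set "pci_passthrough:alias"="a1:2"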
For more information about the syntax for pci_passthrough:alias, refer to the flavors documentation.
2.7 Create instances with PCI passthrough devices
The nova-scheduler selects a destination host that has PCI devices available with the specified vendor_id and product_id that matches the pci_alias from the flavor.
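For example, an instance could be booted with that flavor as follows (the image name, key name, and instance name are placeholders for your own):

    nova boot --image cirros --key-name mykey --flavor m1.large pci-test-vm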
Thank you for reading! This concludes this article on "how to implement PCI passthrough in OpenStack Nova". I hope the content above is helpful and lets you learn something new. If you think the article is good, share it so more people can see it!