How to implement Hypervisor in KVM Virtualization Technology


How is the Hypervisor implemented in KVM virtualization technology? This article offers a detailed analysis of the question, in the hope of helping readers who want to solve this problem find a simple, feasible approach.

Implementation of Hypervisor for KVM Virtualization Technology

The virtualization of physical resources by the VMM (Virtual Machine Monitor) can be divided into three parts: CPU virtualization, memory virtualization, and I/O device virtualization, of which CPU virtualization is the most important.

Classic virtualization method: modern computer architectures generally provide at least two privilege levels (user mode and kernel mode; x86 has four privilege levels, Ring 0 to Ring 3) to separate system software from application software. Instructions that can only be executed at the processor's highest privilege level (kernel mode) are called privileged instructions. Most instructions that read or write critical system resources (that is, sensitive instructions) are privileged instructions (on x86, a few sensitive instructions are non-privileged). If the processor is not in kernel mode when a privileged instruction is executed, an exception is usually raised and the system software handles the illegal access (a trap). The classic virtualization approach uses "deprivileging" and "trap-and-emulate": the GuestOS runs at a non-privileged level, while the VMM runs at the highest privilege level with full control of system resources. After the GuestOS is deprivileged, most of its instructions still run directly on the hardware; only when it executes a privileged instruction does it trap into the VMM, which emulates the instruction. The essence of trap-and-emulate is to ensure that instructions that could affect the correct operation of the VMM are emulated by the VMM, while most non-sensitive instructions run as usual.

Because a handful of instructions in the x86 instruction set are sensitive instructions that the VMM needs to capture yet are not privileged instructions (so-called critical instructions), deprivileging does not make them trap: executing them does not automatically transfer control to the VMM, which hinders instruction virtualization. This is known as the x86 virtualization hole.

Virtualization of the x86 architecture can be implemented in the following ways:

1. X86 "full Virtualization" (meaning that the abstract VM has complete physical machine characteristics, on which OS runs without any modification) the Full school optimizes the process of "run-time monitoring, capture and simulation", adhering to the idea of running directly without modification. There are some differences in the internal implementation of this school, in which the full virtualization based on binary translation (BT) represented by VMWare is represented. Its main idea is to translate the GuestOS instructions executed on VM into a subset of x86 instructions, in which sensitive instructions are replaced with trapped instructions. The translation process is intersected with instruction execution, and user-mode programs without sensitive instructions can be executed directly without translation.

2. X86 "paravirtualization" (virtualization that needs the assistance of OS, on which the OS running needs to be modified) the basic idea of paravirtualization is to change the code of GuestOS to replace the operation containing sensitive instructions with an overcall to VMM, a system call like OS, and transfer control to VMM, which is well known for the VMM project. The advantage of this technology is that the performance of VM is close to that of physical machines, while the disadvantage is that it needs to modify GuestOS (such as Windows does not support modification) and increased maintenance costs. The key modification of GuestOS will lead to operating system dependence on specific hypervisor. Therefore, many virtualization manufacturers have abandoned Linux paravirtualization in their virtualization products developed based on VMM, and focus on hardware-assisted full virtualization development to support unmodified operating systems.

3. X86 "hardware-assisted virtualization": its basic idea is to introduce a new processor running mode and new instructions, so that VMM and GuestOS run in different modes, GuestOS runs in controlled mode, and some of the original sensitive instructions will all fall into VMM in controlled mode, thus solving the problem of "trap-simulation" of some unprivileged sensitive instructions, and the preservation and recovery of the context during mode switching is completed by hardware. This greatly improves the efficiency of context switching during "trap-simulation".

Take Intel VT-x hardware-assisted virtualization as an example. This technology adds two processor operating modes: root mode and non-root mode. The VMM runs in root mode, while the GuestOS runs in non-root mode. Each operating mode has its own privilege rings, so the VMM and the virtual machine's GuestOS each run in ring 0 of their respective mode. In this way the VMM can run in ring 0 and the GuestOS can also run in ring 0, avoiding modification of the GuestOS. Switching between root mode and non-root mode is accomplished through new CPU instructions (VMXON/VMXOFF enable and disable VMX operation, and VMLAUNCH/VMRESUME together with VM exits switch between the two modes).
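As a hedged illustration of the hardware capability described above, the short C sketch below (assuming a GCC/Clang toolchain on an x86-64 host, which provides <cpuid.h>) checks the CPUID feature bits that report Intel VT-x (VMX) and AMD-V (SVM) support; a VMM such as KVM performs an equivalent check before using these modes.

```c
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: ECX bit 5 reports Intel VT-x (VMX) support. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        printf("Intel VT-x (VMX) supported\n");

    /* CPUID leaf 0x80000001: ECX bit 2 reports AMD-V (SVM) support. */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        printf("AMD-V (SVM) supported\n");

    return 0;
}
```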

Hardware-assisted virtualization eliminates the operating system's ring-transition problem, lowers the barrier to virtualization, supports virtualizing any operating system without modifying the OS kernel, and has won the support of virtualization software vendors. Hardware-assisted virtualization has gradually erased the differences between software virtualization techniques and has become the trend for the future.

vCPU mechanism

vCPU scheduling mechanism

For the virtual machine, the physical CPU is not directly visible; the virtual machine's compute unit is represented by the vCPU object, and the virtual machine sees only the vCPUs the VMM presents to it. In the VMM, each vCPU corresponds to a VMCS (Virtual-Machine Control Structure). When a vCPU is switched off a physical CPU, its running context is saved into its VMCS; when the vCPU is switched back onto a physical CPU, its running context is loaded from the VMCS into the physical CPU. In this way each vCPU runs independently of the others.

From the structure and functional division of a virtual machine system, the guest operating system and the virtual machine monitor together constitute the two-level scheduling framework of the virtual machine system; the figure shows such a framework in a multi-core environment. The guest operating system is responsible for level-2 scheduling, that is, scheduling threads or processes onto vCPUs (mapping kernel threads to the corresponding virtual CPUs). The virtual machine monitor is responsible for level-1 scheduling, that is, scheduling vCPUs onto the physical processing units. The scheduling policies and mechanisms of the two levels are independent of each other. The vCPU scheduler allocates and schedules physical processor resources among virtual machines; in essence, it schedules each virtual machine's vCPUs onto the physical processing units according to certain policies and mechanisms, and any policy can be used to allocate physical resources to meet the differing needs of the virtual machines. A vCPU may be scheduled on one or more physical processing units (time-multiplexing or space-multiplexing them), or it may be given a fixed one-to-one mapping to a physical processing unit (restricting it to a specified physical processing unit).
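To make the vCPU/VMCS relationship concrete, here is a minimal, hedged sketch of how a userspace VMM drives a single KVM vCPU on Linux. Each vCPU file descriptor is backed by in-kernel per-vCPU state (on Intel hardware, a VMCS); each KVM_RUN call enters non-root mode, and each VM exit returns control to this ordinary host thread, which the level-1 (host) scheduler places on physical CPUs. Guest memory setup, register initialization, and error handling are omitted (a memory-region sketch appears in the memory virtualization section below).

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm  = open("/dev/kvm", O_RDWR);          /* the KVM subsystem          */
    int vm   = ioctl(kvm, KVM_CREATE_VM, 0);      /* one virtual machine        */
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);     /* vCPU 0; the kernel sets up
                                                     its per-vCPU state (VMCS)  */

    /* kvm_run is the shared mailbox between this thread and the kernel. */
    int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);

    /* Each KVM_RUN enters non-root mode; a VM exit returns control here,
       where the VMM emulates whatever caused the exit and loops. */
    for (;;) {
        ioctl(vcpu, KVM_RUN, 0);
        switch (run->exit_reason) {
        case KVM_EXIT_HLT:            /* guest executed HLT: stop          */
            return 0;
        case KVM_EXIT_IO:             /* trapped port I/O: emulate device  */
        case KVM_EXIT_MMIO:           /* trapped MMIO access               */
        default:
            break;                    /* device emulation would go here    */
        }
    }
}
```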

Memory virtualization

Three-level memory virtualization model

Because the VMM (Virtual Machine Monitor) controls all system resources, it owns the entire memory resource and is responsible for paged memory management, maintaining the mapping from virtual addresses to machine addresses. Because the GuestOS has its own paged memory management mechanism, a system with a VMM has one more layer of mapping than a normal system:

a. Virtual address (VA): the linear address space the GuestOS provides to its applications;

b. Physical address (PA): the pseudo-physical address abstracted by the VMM and seen by the virtual machine;

c. Machine address (MA): the real machine address, i.e. the address signal that appears on the address bus.

The mapping relationships are: GuestOS: PA = f(VA); VMM: MA = g(PA). The VMM maintains a set of page tables responsible for the PA-to-MA mapping, and the GuestOS maintains a set of page tables responsible for the VA-to-PA mapping. In actual operation, a user program accesses VA1, which the GuestOS page tables translate to PA1; the VMM then intervenes and uses its own page tables to translate PA1 to MA1.
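In KVM terms, the guest's pseudo-physical address space (PA) is simply backed by host virtual memory that the VMM registers with the kernel, which in turn manages the real machine addresses (MA). A minimal, hedged sketch, assuming vm_fd was obtained from KVM_CREATE_VM as in the earlier vCPU sketch and with error handling omitted:

```c
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* Back 2 MiB of guest "physical" memory (PA) with ordinary anonymous host
   memory; the host kernel manages the machine addresses (MA) behind it. */
int map_guest_ram(int vm_fd)
{
    size_t size = 2 << 20;
    void *host_mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    struct kvm_userspace_memory_region region = {
        .slot            = 0,
        .guest_phys_addr = 0x0,       /* where the guest sees this RAM (its PA) */
        .memory_size     = size,
        .userspace_addr  = (__u64)(unsigned long)host_mem, /* the VMM's VA backing it */
    };
    return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}
```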

Page table virtualization technology

An ordinary MMU can perform only a single translation, from virtual address to physical address. In a virtual machine environment, the "physical address" produced by the MMU's translation is not a real machine address; to obtain the real machine address, the VMM must intervene and perform a second mapping to get the machine address used on the bus. If every memory access by the virtual machine required VMM intervention, with address translation emulated in software, the efficiency would be far too low to be practical. To achieve efficient translation from virtual address to machine address, the common idea is for the VMM to compute the composite mapping g∘f from the mappings f and g and write this composite mapping directly into the MMU. The main approaches to page table virtualization are MMU paravirtualization (MMU Paravirtualization) and shadow page tables; the latter has since been superseded by hardware-assisted memory virtualization.
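The composite mapping can be written MA = g(f(VA)). The toy C sketch below is purely illustrative (the four-entry lookup tables are assumptions for demonstration, not a real page-table format); it computes exactly the composition that a shadow page table, or a hardware EPT walk, materializes for the MMU.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define NPAGES     4

static uint64_t guest_pt[NPAGES] = {3, 2, 1, 0};  /* f: guest VA page -> guest PA page */
static uint64_t vmm_p2m[NPAGES]  = {7, 6, 5, 4};  /* g: guest PA page -> machine page  */

/* What a shadow page table (or an EPT walk) effectively computes: MA = g(f(VA)). */
static uint64_t translate(uint64_t va)
{
    uint64_t gpa_page = guest_pt[va >> PAGE_SHIFT];   /* f */
    uint64_t ma_page  = vmm_p2m[gpa_page];            /* g */
    return (ma_page << PAGE_SHIFT) | (va & 0xfff);    /* keep the page offset */
}

int main(void)
{
    printf("VA 0x1234 -> MA 0x%llx\n", (unsigned long long)translate(0x1234));
    return 0;
}
```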

1. MMU paravirtualization

The basic principle is: when the GuestOS creates a new page table, it allocates a page from the free memory it manages and registers the page with the VMM. The VMM strips the GuestOS of write permission to that page table, so subsequent GuestOS writes to the page table trap into the VMM for validation and conversion. The VMM checks every entry in the page table to ensure that it maps only machine pages belonging to that virtual machine and contains no writable mapping of page-table pages. The VMM then replaces the physical addresses in the page table entries with the corresponding machine addresses according to the mapping it maintains, and finally loads the modified page table into the MMU. The MMU can thus translate virtual addresses directly to machine addresses using the modified page table.

2. Memory hardware-assisted virtualization

Schematic diagram of memory hardware-assisted virtualization technology

Hardware-assisted memory virtualization replaces the software-implemented "shadow page table" used by earlier virtualization techniques. Its basic principle is that both address translations, GVA (guest virtual address) -> GPA (guest physical address) -> HPA (host physical address), are performed automatically by the CPU hardware (the software implementation has memory overhead and poor performance). Take the Extended Page Table (EPT) technology of VT-x as an example: first, the VMM loads into the CPU in advance an EPT page table that maps guest physical addresses to machine addresses; second, the guest modifies its own page tables without VMM intervention; finally, during address translation the CPU automatically walks both sets of page tables to complete the translation from guest virtual address to machine address. With hardware-assisted memory virtualization, the guest runs without VMM intervention, a great deal of software overhead is removed, and memory access performance approaches that of the physical machine.
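As a small, hedged aside (assuming an Intel host with the kvm_intel module loaded), whether KVM is actually using EPT can be read from a module parameter exposed in sysfs:

```c
#include <stdio.h>

int main(void)
{
    /* kvm_intel exposes its EPT setting as 'Y' or 'N' in this sysfs file. */
    FILE *f = fopen("/sys/module/kvm_intel/parameters/ept", "r");
    int c;
    if (f && (c = fgetc(f)) != EOF)
        printf("EPT enabled: %c\n", c);
    if (f)
        fclose(f);
    return 0;
}
```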

I/O device virtualization

The VMM multiplexes limited peripheral resources by virtualizing devices: it intercepts the GuestOS's access requests to a device and then emulates the real hardware in software. At present there are three main approaches to device virtualization: full emulation of the device interface, front-end/back-end drivers, and direct assignment.

1. Full emulation of the device interface:

Software precisely emulates an interface identical to that of the physical device, so the GuestOS driver can drive the virtual device without modification.

Advantages: no additional hardware cost; existing drivers can be reused.

Disadvantages: a single operation may involve accesses to multiple registers, each of which the VMM must intercept and emulate, causing multiple context switches; because everything is done in software, performance is low.

2. Front-end/back-end drivers:

The VMM provides a simplified device driver (the back-end, Back-End, BE), while the driver in the GuestOS is the front-end (Front-End, FE). The front-end driver forwards requests from other modules to the back-end driver through a special communication mechanism between the GuestOS and the VMM; after processing a request, the back-end driver sends a notification back to the front end. Xen adopts this approach (KVM's virtio follows the same model); a minimal sketch of this split-driver shape follows the list below.

Advantages: the transaction-based communication mechanism greatly reduces context-switch overhead, and there is no additional hardware cost.

Disadvantages: the GuestOS must implement the front-end driver, and the back-end driver may become a bottleneck.

3. Direct assignment:

A physical device is assigned directly to a GuestOS, which then accesses the I/O device directly (without going through the VMM). Current related technologies include IOMMU-based approaches (Intel VT-d, PCI-SIG's SR-IOV, etc.), which aim to establish an efficient direct path for I/O virtualization.

Advantages: existing drivers can be reused, and direct access reduces virtualization overhead.

Disadvantages: additional hardware may need to be purchased.
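The following purely illustrative C sketch (not the real virtio or Xen ring ABI) shows the basic shape that the front-end/back-end model from item 2 takes: a ring of request descriptors in shared memory plus a doorbell notification that traps to the back-end.

```c
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 256

struct request {
    uint64_t addr;   /* guest-physical address of the data buffer */
    uint32_t len;    /* buffer length                             */
    uint32_t write;  /* 1 = write request, 0 = read request       */
};

struct shared_ring {
    struct request req[RING_SIZE];
    volatile uint32_t head;   /* advanced by the front-end (guest driver)    */
    volatile uint32_t tail;   /* advanced by the back-end (VMM/host driver)  */
};

/* Stand-in doorbell: in a real guest this would be a trapping I/O write
   or a hypercall that wakes the back-end. */
static void notify_backend(void) { printf("doorbell rung\n"); }

/* Front-end side: publish a request in shared memory, then ring the doorbell. */
static void frontend_submit(struct shared_ring *ring, struct request r)
{
    ring->req[ring->head % RING_SIZE] = r;
    ring->head++;
    notify_backend();
}

int main(void)
{
    static struct shared_ring ring;
    struct request r = { .addr = 0x100000, .len = 4096, .write = 0 };
    frontend_submit(&ring, r);
    return 0;
}
```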

This is the answer to the question of how the Hypervisor is implemented in KVM virtualization technology. I hope the above content is of some help to you. If you still have questions, you can follow the industry information channel to learn more.
