This article looks at how memory virtualization works in KVM: how guest addresses are translated into host physical memory, and how that translation has been made progressively cheaper.
Introduction to memory virtualization
Let's take a look at the memory virtualization principles of KVM. Memory is arguably the most important resource after the CPU, and the guest ultimately uses host memory, so memory virtualization is really about how to perform the various address translations from the guest down to host physical memory, and how to make those translations efficient. KVM has gone through three generations of memory virtualization technology, each of which significantly speeds up memory access.
Traditional address translation
In protected mode, an ordinary application process uses its own virtual address space; on a 64-bit machine each process can nominally address the range from 0 to 2^64. There is not actually that much memory, and the process is never given that much, but from the process's point of view it owns all of it. The kernel allocates only a small amount of memory to the process and hands out more as the process needs it.
The addresses used by an application process are virtual addresses, while the kernel ultimately works with physical memory. The kernel is responsible for maintaining each process's mapping from virtual addresses to physical memory.
First, a logical address is translated into a linear address, and then the linear address into a physical address.
Logical address => Linear address => Physical address
The translation from logical address to linear address is just a base-plus-offset calculation.
A complete logical address has the form [segment selector : offset within the segment]. The CPU looks up the GDT or LDT (located through the gdtr or ldtr register) to find the segment descriptor: the upper 13 bits of the segment selector are used as an index into the descriptor table, and the descriptor supplies the segment's base address. Base + offset gives the linear address.
Why go through this extra step? Reportedly, Intel kept it to preserve backward compatibility.
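As a rough illustration of that base-plus-offset step, here is a toy sketch in C; the descriptor table is a plain array standing in for the real GDT, and the values are made up.

#include <stdint.h>
#include <stdio.h>

/* Toy model of logical-to-linear translation: the upper 13 bits of the
 * segment selector index a descriptor table, the descriptor supplies the
 * segment base, and base + offset gives the linear address. A plain array
 * stands in for the GDT the CPU would reach through gdtr. */
struct segment_descriptor {
    uint32_t base;   /* segment base address */
    uint32_t limit;  /* segment limit (checks omitted) */
};

static struct segment_descriptor fake_gdt[8192];  /* 13-bit index => 8192 entries */

static uint32_t logical_to_linear(uint16_t selector, uint32_t offset) {
    uint16_t index = selector >> 3;               /* bits 15..3: descriptor index   */
    return fake_gdt[index].base + offset;         /* linear address = base + offset */
}

int main(void) {
    fake_gdt[1].base = 0x10000;                   /* hypothetical descriptor */
    printf("0x%x\n", logical_to_linear(1 << 3, 0x42));   /* prints 0x10042   */
    return 0;
}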
The logical-to-linear translation needs no special treatment under virtualization; nothing is actually virtualized at this layer and it works exactly as in the traditional case. What matters is the translation from linear addresses to physical addresses.
Traditionally, linear-to-physical translation is handled by the CPU's paging unit (page-based memory management).
The paging unit is responsible for translating linear addresses to physical addresses. Under 4-level paging, a linear address is split into five fields: four table indexes and a page offset. The walk starts from the base address held in the CR3 register (each process has its own CR3 value, shared by its threads; on a process switch the new value is loaded into CR3, which is also what gives each process its isolated address space). Four successive table lookups yield the address of a page, normally 4K in size (it can be larger, for example when hugepages are enabled), and the page offset selects the byte within it. The whole walk is performed by the CPU with no involvement from the process. If the page is present, the physical address is returned directly; if it is not, a page fault is raised, the kernel handles the fault and installs the page in the page table, and when the fault handler returns the CPU gets the page address and the access continues.
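To make the five fields concrete, here is a small sketch (with an arbitrarily chosen address) that splits a 64-bit linear address into the four table indexes and the page offset used by 4-level paging with 4K pages; the actual walk is performed by the MMU starting from the base held in CR3.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t va = 0x00007f1234567abcULL;        /* arbitrary example linear address */

    uint64_t pml4 = (va >> 39) & 0x1ff;         /* index into the top-level table */
    uint64_t pdpt = (va >> 30) & 0x1ff;         /* index into the 3rd-level table */
    uint64_t pd   = (va >> 21) & 0x1ff;         /* index into the page directory  */
    uint64_t pt   = (va >> 12) & 0x1ff;         /* index into the page table      */
    uint64_t off  =  va        & 0xfff;         /* offset inside the 4K page      */

    printf("PML4=%lu PDPT=%lu PD=%lu PT=%lu offset=0x%lx\n",
           (unsigned long)pml4, (unsigned long)pdpt,
           (unsigned long)pd, (unsigned long)pt, (unsigned long)off);
    return 0;
}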
Memory structure in KVM
Since the qemu-kvm process is just an ordinary process on the host, the guest needs the following chain of translations.
Guest virtual address (GVA) => Guest linear address => Guest physical address (GPA) => HV virtual address (HVA) => HV linear address => HV physical address (HPA)
That looks like an awful lot of steps.
Don't worry: the guest's virtual-to-linear step and the HV's virtual-to-linear step can be left out of the picture, which makes it clearer.
Guest virtual address (GVA) => Guest physical address (GPA) => HV virtual address (HVA) => HV physical address (HPA)
As mentioned earlier, KVM has made memory virtualization progressively more efficient by improving this translation chain. We start with the original approach, software virtualization.
Implementation of software virtualization
In the first step, the GVA -> GPA translation works exactly as in the traditional case: look up CR3 and walk the guest's page tables. qemu-kvm is responsible for maintaining the mapping between GPA and HVA. The demo of the KVM startup process in chapter two showed how memory is registered with KVM, that is, how HV memory is mapped into the guest with mmap.
struct kvm_userspace_memory_region region = {
    .slot = 0,
    .guest_phys_addr = 0x1000,
    .memory_size = 0x1000,
    .userspace_addr = (uint64_t)mem,
};
As you can see, qemu-kvm's kvm_userspace_memory_region structure describes the start address and size of a region of guest physical memory, together with userspace_addr, the HV virtual address that backs it. Using multiple slots, discontiguous regions of the HV virtual address space can be mapped to a contiguous guest physical address space.
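As a minimal sketch of how such a slot is handed to KVM (assuming /dev/kvm is available; error handling omitted), the region is registered on the VM file descriptor with the KVM_SET_USER_MEMORY_REGION ioctl:

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int main(void) {
    int kvm = open("/dev/kvm", O_RDWR);
    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

    /* Host virtual address (HVA) that will back the guest physical memory. */
    void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct kvm_userspace_memory_region region = {
        .slot            = 0,
        .guest_phys_addr = 0x1000,          /* GPA as seen by the guest */
        .memory_size     = 0x1000,          /* one 4K page              */
        .userspace_addr  = (uint64_t)mem,   /* backing HVA in qemu-kvm  */
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);
    return 0;
}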
In the software-simulation approach, qemu-kvm maintains the GPA -> HVA conversion, and the HVA -> HPA conversion then has to be performed all over again on the host. From the process's point of view this is very inefficient, especially when a page fault occurs during the GVA -> GPA translation: a VM exit is generated, the HV catches the exception, resolves the physical address (allocating new memory for the guest), and then re-enters the guest. The result is frequent VM exits and an overly long translation path, so KVM adopted a technique called shadow page tables.
Shadow page table virtualization
Shadow page tables were introduced to reduce the cost of address translation by mapping GVA directly to HPA. In software virtualization, the GVA -> GPA translation is done by reading CR3, which holds the base address of the guest's page table, and letting the MMU walk it.
With shadow page tables, KVM traps accesses to CR3 (triggered, for example, by a guest process switch); this is the EXIT_REASON_CR_ACCESS exit discussed in the CPU virtualization chapter. qemu-kvm then fools the guest by loading the base of the shadow page table into the real CR3 instead of the guest's own page table. From that point on, memory accesses proceed just as in the traditional case, except that physical memory is reached through a single walk of the shadow page table.
The shadow page table is maintained by the qemu-kvm process and is essentially a mapping from the guest's page tables to the host's. A hash of each level of the guest page table corresponds to a directory of the shadow page table in qemu-kvm. On the first GVA -> HPA translation the shadow page table does not yet exist, so the guest takes a page fault just as in the traditional flow. After the two walks (GVA -> GPA in the guest and HVA -> HPA on the host), the shadow page table records the full chain GVA -> GPA -> HVA -> HPA, which collapses into a direct GVA -> HPA mapping saved in the shadow page table.
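Conceptually, filling in a shadow entry on that first fault is just the composition of the three mappings the hypervisor already knows. The sketch below is purely illustrative (toy single-entry tables with invented values, not real KVM code):

#include <stdint.h>
#include <stdio.h>

/* Toy model: one entry per "table". The slow path chains
 * GVA -> GPA (guest page table), GPA -> HVA (memory slot) and
 * HVA -> HPA (host page table), then caches GVA -> HPA in the
 * shadow entry so later accesses resolve in a single lookup. */
static const uint64_t GVA = 0x400000, GPA = 0x1000, HVA = 0x7f0000, HPA = 0x9000;

static uint64_t shadow_gva, shadow_hpa;
static int shadow_valid;

static uint64_t translate(uint64_t gva) {
    if (shadow_valid && shadow_gva == gva)
        return shadow_hpa;                       /* fast path: shadow page table hit */

    uint64_t gpa = (gva == GVA) ? GPA : 0;       /* guest page walk: GVA -> GPA */
    uint64_t hva = (gpa == GPA) ? HVA : 0;       /* memory slot:     GPA -> HVA */
    uint64_t hpa = (hva == HVA) ? HPA : 0;       /* host page walk:  HVA -> HPA */

    shadow_gva = gva; shadow_hpa = hpa; shadow_valid = 1;  /* cache GVA -> HPA */
    return hpa;
}

int main(void) {
    printf("0x%lx\n", (unsigned long)translate(0x400000)); /* first access: builds the entry */
    printf("0x%lx\n", (unsigned long)translate(0x400000)); /* later access: direct hit       */
    return 0;
}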
EPT virtualization
EPT (Extended Page Tables) can be regarded as a hardware shadow page table. The hypervisor configures an EPT pointer for the guest (in the VMCS), and the guest continues to use its own CR3 and page tables directly. Because the addresses in the guest's page tables are GPAs whose mappings are initially empty, the guest takes ordinary page faults when it first touches memory. With software simulation or shadow page tables this would cause a VM exit that qemu-kvm has to handle; with EPT the guest does not exit and handles the page fault itself in the traditional way. Only when the resulting guest physical address has no mapping in the EPT does a VM exit occur, with reason EXIT_REASON_EPT_VIOLATION. qemu-kvm catches it, allocates host physical memory, establishes the GPA -> HPA mapping, and saves it in the EPT, which is walked by the MMU. On subsequent accesses the hardware walks the guest page tables (reached through CR3) and the EPT directly to complete the GVA -> HPA translation with no exit, which greatly improves efficiency. It also removes the need to maintain a separate shadow page table for each guest process, reducing memory overhead.
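The difference from shadow paging can be sketched the same way: the guest's own page table handles GVA -> GPA with no exit, and only a missing GPA -> HPA entry in the EPT forces KVM to step in. Again a purely illustrative toy with invented values, not real KVM code:

#include <stdint.h>
#include <stdio.h>

/* Toy model: the guest page table (GVA -> GPA) is managed by the guest and
 * never causes an exit; the EPT (GPA -> HPA) is managed by KVM. A miss in
 * the EPT corresponds to an EXIT_REASON_EPT_VIOLATION exit, after which
 * KVM allocates host memory and installs the mapping. */
static const uint64_t GVA = 0x400000, GPA = 0x1000;

static uint64_t ept_gpa, ept_hpa;
static int ept_valid;

static uint64_t handle_ept_violation(uint64_t gpa) {
    ept_gpa = gpa; ept_hpa = 0x9000; ept_valid = 1;   /* allocate + map GPA -> HPA */
    return ept_hpa;
}

static uint64_t translate(uint64_t gva) {
    uint64_t gpa = (gva == GVA) ? GPA : 0;            /* guest page walk, no exit  */
    if (ept_valid && ept_gpa == gpa)
        return ept_hpa;                               /* EPT hit: done in hardware */
    return handle_ept_violation(gpa);                 /* EPT miss: one VM exit     */
}

int main(void) {
    printf("0x%lx\n", (unsigned long)translate(0x400000)); /* first access: EPT violation */
    printf("0x%lx\n", (unsigned long)translate(0x400000)); /* later access: EPT hit       */
    return 0;
}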
In the author's tests, memory access speed in the guest measured 3756 MB/s compared with 4340 MB/s on the HV, so guest memory access comes very close to host performance.
KVM memory virtualization is the process of translating the virtual machine's virtual memory into the host's physical memory; the guest ultimately still uses host physical memory, and reducing the overhead of that translation is the main target of optimization.
Through the evolution from software simulation to shadow page tables to EPT, KVM has become more and more efficient.