2025-03-31 Update · From: SLTechnology News & Howtos > Servers
Shulou (Shulou.com), 06/01 Report --
Many newcomers are unclear about how Linux implements memory management within its overall framework. To help, this article explains it in detail; readers who need this background should find something useful here.
I. Address division
1. CPU address
The CPU address refers to the range the CPU's address bus can address; a 32-bit CPU can address 4 GB. This address space is virtual — the machine's actual physical memory is usually far smaller.
The 4 GB of CPU virtual address space is usually split in two: the kernel virtual address space, typically occupying 3 GB-4 GB, and the user virtual address space, typically 0 GB-3 GB. The virtual address range available to a user process is thus much larger than the kernel's, while physical memory may be only a few megabytes to a few gigabytes. How the kernel virtual address space and user space are mapped onto this physical memory is the heart of Linux memory management.
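The 3 GB/1 GB split can be sketched with a simple boundary check. This is an illustrative userspace model, assuming the conventional 32-bit default of CONFIG_PAGE_OFFSET=0xC0000000; real kernels can be configured with other splits.

```c
#include <assert.h>
#include <stdint.h>

/* Conventional 32-bit default split boundary (illustrative value only). */
#define PAGE_OFFSET 0xC0000000UL

/* Kernel virtual addresses live at or above PAGE_OFFSET (3 GB);
 * user virtual addresses live below it. */
static int is_kernel_address(uint32_t vaddr)
{
    return vaddr >= PAGE_OFFSET;
}
```

With this split, an address such as 0x08048000 (a classic user-space text segment base) falls in user space, while 0xC0000000 and above belong to the kernel.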
2. Physical memory
Physical memory is the external device that stores data; it sits on an address bus the CPU can address and is accessed through the CPU's cache and TLB/MMU.
One concept needs clarifying: code runs on the CPU, not in physical memory. Physical memory is merely the device that holds the executable code of a user process's address space and the kernel's critical data structures; the CPU ultimately fetches those instructions and data through MMU address translation and the cache.
NUMA stands for Non-Uniform Memory Access. It usually arises in multi-CPU systems, where each CPU (node) has a portion of physical memory attached to it. Managing these nodes adds data structures such as per-CPU variables, linked lists for node traversal, zone partitioning, and zonelist management. To simplify the problem, we analyze a single UMA node, which still shares some of NUMA's data structures; these will be described later.
The following figure is a schematic of NUMA, shown in figure 2-1.
Figure 2-1 Schematic of NUMA multi-core physical memory zones
3. Kernel virtual address space division
A reader who has only scratched the surface might assume the kernel's virtual address space consists solely of logical addresses. In fact, that is just a special case of how the kernel's virtual addresses are divided, not the whole picture. Figure 2-2 shows the complete layout, with the parts of the kernel virtual address space renamed for clarity.
Figure 2-2 Partition of kernel virtual address space and its mapping to physical memory
Next, the renaming. The directly mapped region can be called the kernel's physical-mapping region, or the logical address region. In principle, Linux directly maps only 896 MB of the kernel's 1 GB of virtual space; the remaining 128 MB is reserved for other uses, so physical memory beyond the direct mapping is called high memory. The 128 MB region is divided into several pieces: gap (guard) areas, the vmalloc area, fixed mappings, and persistent mappings. Note that the name "kernel virtual address area" is easily confused with the kernel virtual address mentioned earlier, which is the broader concept covering the CPU's entire kernel address range. Because the directly mapped part already has its own name (logical addresses), the "virtual address area" here usually refers specifically to the vmalloc region.
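Because the logical (directly mapped) region is a simple linear offset from physical memory, converting between the two is pure arithmetic. The sketch below mirrors the idea behind the kernel's __pa()/__va() conversions as a userspace model; the PAGE_OFFSET constant is the conventional 32-bit default and is an illustrative assumption, not portable kernel code.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET 0xC0000000UL   /* start of the direct mapping (assumed) */

/* Logical (kernel virtual) address -> physical address, like __pa(x).
 * Valid only inside the direct mapping (first 896 MB of RAM on 32-bit). */
static uint32_t virt_to_phys_sketch(uint32_t vaddr)
{
    return vaddr - PAGE_OFFSET;
}

/* Physical address -> logical address, like __va(x). */
static uint32_t phys_to_virt_sketch(uint32_t paddr)
{
    return paddr + PAGE_OFFSET;
}
```

This linearity is exactly why high memory exists: physical pages beyond 896 MB have no permanent slot in the 1 GB kernel window and must be mapped on demand (kmap, fixed, or persistent mappings).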
The vmalloc area has several uses. Managed by the kernel through vm_struct structures, it can map high physical memory (kmap can also obtain mappings of high-memory pages). Alternatively, it can be used without mapping physical memory at all, as address space for ioremap of external physical devices, letting the kernel manipulate a device directly. Because this exposes the device's address space, and mapped device registers often have side effects, an ioremap'd address should not be dereferenced directly; instead use readb/writeb and friends, which handle the necessary barriers, and release the mapping with iounmap.
If there is no high memory (32-bit embedded systems usually have none; at least the ARM, PowerPC, and MIPS32 embedded applications I have seen do not use it), then fixed mappings and persistent mappings will probably go unused. A fixed mapping pins certain virtual address pages to physical memory for the long term, with the mapping configured at initialization; a persistent mapping establishes a mapping to high-memory physical pages when enabled and is not released in other phases.
To emphasize: high memory is not a concern here, since the kernel's directly mapped logical addresses can cover all of physical memory.
4. Division of user virtual address space
The layout of the user virtual address space itself is not complicated. What is complicated is how it is used: how files are mapped, how address intervals are organized, which process they belong to, and which memory structures correspond to them — the hardest part of user-space virtual mapping. The diagram below gives a broad picture of the user virtual memory space, as shown in figure 2-3.
Figure 2-3 User-space virtual memory layout
Since user space is virtual, how does it reach physical memory? Through the page table walk — PGD, PUD, PMD, PTE, and the page offset — accelerated by TLB lookups. On 32-bit systems the upper directory (PUD) and middle directory (PMD) can generally be ignored; a two-level table suffices. Figure 2-4 is taken from the Internet:
Figure 2-4 How user process space reaches physical memory
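For a classic two-level walk, the 32-bit virtual address decomposes into three fields: 10 bits of PGD index, 10 bits of PTE index, and 12 bits of page offset. The sketch below extracts those fields; the widths mirror classic 32-bit x86 without PAE and are an assumption, since other architectures split differently.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT   12            /* 4 KB pages */
#define PTRS_PER_PTE 1024          /* 10-bit PTE index */

/* Top 10 bits select the entry in the page global directory. */
static uint32_t pgd_index(uint32_t vaddr) { return vaddr >> 22; }

/* Middle 10 bits select the entry in the page table. */
static uint32_t pte_index(uint32_t vaddr)
{
    return (vaddr >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
}

/* Low 12 bits are the byte offset within the 4 KB page. */
static uint32_t page_offset(uint32_t vaddr)
{
    return vaddr & ((1u << PAGE_SHIFT) - 1);
}
```

For example, the address 0xC0001234 walks to PGD entry 768, PTE entry 1, byte offset 0x234; the TLB simply caches the result of this walk so repeated accesses skip it.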
II. Buddy system
The buddy system manages physical page frames by order, with 11 orders in total (0 through 10). Each order maintains free lists whose entries are blocks of 2^order contiguous pages, and two adjacent blocks of the same order form each other's buddies. It should be emphasized that buddies at a given order always contain the same number of pages; figure 2-5 below makes this easier to understand.
Figure 2-5 Approximate model of the buddy system in memory
When the kernel requests memory in pages rather than an exact order, allocation still proceeds by order under the buddy principle: a block of the smallest sufficient order is taken, and any surplus pages are split off and merged into the free lists of lower orders, forming new buddies there. When memory is freed, the allocator looks for the freed block's buddy at the same order and coalesces them if possible; the combined block may then coalesce again at the next order up. It is a mouthful, but easy to grasp; the source code will be analyzed in detail with examples later.
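Two small calculations underlie the process just described: picking the order for a request, and locating a block's buddy. The sketch below models both in userspace C; the buddy formula (flip bit `order` of the page frame number) matches the scheme the kernel uses, but these helpers are illustrative, not kernel code.

```c
#include <assert.h>
#include <stdint.h>

/* Smallest order whose block (2^order pages) can hold n pages. */
static unsigned int order_for_pages(unsigned long n)
{
    unsigned int order = 0;
    while ((1UL << order) < n)
        order++;
    return order;
}

/* Page frame number of the buddy of the block starting at pfn:
 * buddies differ only in bit `order`, so XOR toggles between them. */
static unsigned long buddy_pfn(unsigned long pfn, unsigned int order)
{
    return pfn ^ (1UL << order);
}
```

So a request for 5 pages is served from order 3 (an 8-page block), with the 3 surplus pages split back into lower orders; and the order-3 block at pfn 8 coalesces with its buddy at pfn 0 to form an order-4 block.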
III. Anti-fragmentation technology
The anti-fragmentation mechanism actually sits in front of the buddy system. It divides each zone's physical memory by mobility: reclaimable (not movable, but can be reclaimed), movable, and unmovable. Pages carrying these flags are managed on separate lists, and when allocation pressure creates heavy fragmentation, the kernel can regroup physical memory according to these migrate types, reducing fragmented or orphaned pages. Anti-fragmentation plays a small role in embedded systems, where the buddy system does most of the work, so it will not be analyzed in detail.
IV. Slab allocation mechanism
As is well known, managing memory with the buddy system alone not only produces a lot of fragmentation for small objects but is also inefficient for them. SLAB is a memory management mechanism that is both efficient and effective at avoiding fragmentation. Its core idea is pre-allocation: memory is classified and managed by size. When a block of a given SIZE is requested, the allocator hands out a memory block (BLOCK) from that SIZE's set; when a block of that SIZE is released, it is returned to its original set rather than to the operating system. The next request for the same size can then reuse the previously recycled block, avoiding fragmentation. [Note: SLAB has many implementation details; this is only an explanation of the principle.]
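The pre-allocation idea can be shown with a toy fixed-size cache: objects are carved from an arena once, and freed objects go onto a free list for reuse instead of returning to the system. This is a deliberately minimal userspace model of the principle above, not the kernel's SLAB code; all names and sizes here are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

enum { OBJ_SIZE = 32, NOBJS = 8 };   /* one size class, 8 objects (assumed) */

static unsigned char arena[OBJ_SIZE * NOBJS];  /* pre-allocated backing store */
static void *free_list[NOBJS];                 /* recycled blocks, LIFO */
static int   free_top   = -1;
static int   next_fresh = 0;

static void *cache_alloc(void)
{
    if (free_top >= 0)                    /* reuse a recycled block first */
        return free_list[free_top--];
    if (next_fresh < NOBJS)               /* otherwise carve a fresh one */
        return &arena[OBJ_SIZE * next_fresh++];
    return NULL;                          /* cache exhausted */
}

static void cache_free(void *p)
{
    free_list[++free_top] = p;            /* keep the block for reuse */
}
```

Note how cache_free never returns memory to the system: the very next cache_alloc of this size gets the same block back, which is the reuse behavior the text describes.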
1. General structure
Figure 1 SLAB memory structure
2. Processing flow
As shown in figure 1, this SLAB scheme divides memory into a SLAB header, a SLOT array, a PAGES array, allocable space, and wasted space, each managed separately. The role of each module:
SLAB header: contains summary information for SLAB management, such as the minimum allocation unit (min_size), its corresponding shift (min_shift), the page array address (pages), the free-page list (free), the start address of allocable space (start), and the end address of the memory block (end), as shown in code 1. Memory allocation, reclamation, and lookup all depend on this data.
SLOT array: each member of the SLOT array is responsible for allocating and reclaiming memory blocks (BLOCK) of a fixed size. In nginx, SLOT[0] through SLOT[7] serve requests of [1, 8], [9, 16], [17, 32], [33, 64], [65, 128], [129, 256], [257, 512], and [513, 1024] bytes respectively; to simplify allocation and reclamation, each block's size is the upper bound of its interval (8, 16, 32, 64, 128, 256, 512, 1024). For example, a request for 5 bytes falls in [1, 8], so SLOT[0] handles it, and since the interval's upper bound is 8, the process receives 8 bytes even though it asked for 5. Likewise: a request for 12 bytes falls in [9, 16], so SLOT[1] allocates 16 bytes; 50 bytes falls in [33, 64], so SLOT[3] allocates 64 bytes; 84 bytes falls in [65, 128], so SLOT[4] allocates 128 bytes; 722 bytes falls in [513, 1024], so SLOT[7] allocates 1024 bytes.
PAGES array: each member of the PAGES array is responsible for querying, allocating, and reclaiming one page of the allocable space.
Allocable space: SLAB logically divides the allocable space into M memory pages of 4 KB each. Each page corresponds to one member of the PAGES array, which handles that page's allocation and reclamation.