
Management methods for non-contiguous memory allocation

2025-01-27 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 06/01 Report --

Non-contiguous allocation allows a program to be scattered across non-adjacent memory partitions. According to the size of the partitions, it can be divided into paging storage management and segmented storage management.

Paging storage management is further divided, according to whether all pages of a job must be loaded into memory before the job runs, into basic paging storage management and demand paging storage management. The basic paging storage management method is introduced below.

Basic paging storage management method

Fixed partitioning produces internal fragmentation and dynamic partitioning produces external fragmentation; both make relatively poor use of memory. We would like memory usage to avoid fragmentation as far as possible, which leads to the idea of paging: divide main memory into equal-sized, fixed blocks, keep the blocks relatively small, and use the block as the basic unit of main memory. Each process is likewise divided in units of blocks, and when the process executes it requests block-sized main memory space one block at a time.

In form, the paging approach resembles fixed partitioning with equal-sized partitions, and paging management produces no external fragmentation. But there is an essential difference: the blocks are much smaller, and the process is also divided into blocks; when the process runs, it requests free main memory space block by block. In this way, main memory fragmentation arises only in the block allocated for the process's final, incomplete block, so although internal fragmentation occurs, it is quite small relative to the process: on average each process produces only half a block of internal fragmentation (also called in-page fragmentation).

1) Several basic concepts of paging storage

① Pages and page size. The blocks of a process are called pages (Page), the blocks of memory are called page frames (Page Frame), and external memory is divided into units of the same size, directly called blocks (Block). When the process executes and requests main memory space, the system allocates an available page frame in main memory for each page, which produces a one-to-one correspondence between pages and page frames.

To facilitate address translation, the page size should be an integer power of 2. At the same time, the page size should be moderate: if pages are too small, a process has too many pages, making the page table too long and occupying a large amount of memory, and also increasing the overhead of hardware address translation and reducing the efficiency of swapping pages in and out; if pages are too large, in-page fragmentation grows, reducing memory utilization. So the page size should be moderate, weighing space efficiency against time efficiency.

② Address structure. The logical address structure of paging storage management is shown in Figure 3-7.

Figure 3-7 Address structure of paging storage management

The address structure consists of two parts: the first part is the page number P, and the second part is the intra-page offset W. The address is 32 bits long, of which bits 0-11 (12 bits) are the intra-page offset, so each page is 4KB; bits 12-31 (20 bits) are the page number, so the address space allows at most 2^20 pages.

③ Page table. In order to find the physical block corresponding to each page of a process in memory, the system establishes a page table for each process, recording the physical block number of each page in memory; the page table is generally stored in memory.

After the page table has been set up, when the process executes, the physical block number of each page in memory can be found by looking up the table. The role of the page table is thus to implement the address mapping from page number to physical block number, as shown in Figure 3-8.

Figure 3-8 Role of the page table

2) Basic address translation mechanism

The task of the address translation mechanism is to convert a logical address into a physical address in memory, and this translation is done with the help of the page table. Figure 3-9 shows the address translation mechanism in paging storage management.

Figure 3-9 Address translation mechanism of paging storage management

A page-table register (PTR) is usually set up in the system to hold the start address F of the page table in memory and the page table length M. When the process is not running, the page table's start address and length are kept in the process control block; when the process runs, they are loaded into the page-table register. Let the page size be L; the translation from logical address A to physical address E proceeds as follows:

Calculate the page number P (P = A / L) and the intra-page offset W (W = A % L).

Compare the page number P with the page table length M. If P >= M, an out-of-bounds interrupt occurs; otherwise, continue.

The address of the page table entry for page number P = page table start address F + P × page table entry length; the content b of that entry is the physical block number.

Calculate E = b × L + W, and access memory with the resulting physical address E.

The entire address translation process above is completed automatically by hardware.

For example, if the page size L is 1KB and page number 2 corresponds to physical block 8, the physical address E for logical address A = 2500 is computed as follows: P = 2500 / 1K = 2, W = 2500 % 1K = 452; looking up page 2 gives physical block number 8, so E = 8 × 1024 + 452 = 8644.
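
To make the steps concrete, here is a minimal C sketch of this single-level translation. The page size, the table length, and the page table contents are made up purely to reproduce the worked example above (page 2 mapped to block 8); they are not part of any real system.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 1024u              /* L = 1KB, as in the worked example   */
#define PT_LENGTH 4u                 /* M: number of entries (hypothetical) */

/* Hypothetical page table: index = page number P, value = block number b. */
static const uint32_t page_table[PT_LENGTH] = { 5, 3, 8, 6 };

/* Translate logical address A to physical address E; return 0 when the
 * page number is out of bounds (the "out-of-bounds interrupt" case). */
static int translate(uint32_t A, uint32_t *E)
{
    uint32_t P = A / PAGE_SIZE;      /* page number          */
    uint32_t W = A % PAGE_SIZE;      /* intra-page offset    */

    if (P >= PT_LENGTH)              /* compare P with table length M */
        return 0;

    uint32_t b = page_table[P];      /* physical block number */
    *E = b * PAGE_SIZE + W;          /* E = b * L + W          */
    return 1;
}

int main(void)
{
    uint32_t E;
    if (translate(2500, &E))
        printf("logical 2500 -> physical %u\n", E);   /* prints 8644 */
    return 0;
}
```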

The following discusses two main problems of the paging approach:

Every memory access requires a translation from logical address to physical address, so the address translation process must be fast enough, otherwise the speed of memory access decreases.

Each process needs a page table to store the mapping. The page table should not be too large, otherwise memory utilization decreases.

3) Address translation mechanism with a fast table

As can be seen from the address translation process introduced above, if the page table resides entirely in memory, accessing a piece of data or an instruction requires at least two memory accesses: the first to access the page table and determine the physical address of the data or instruction, and the second to access the data or instruction at that address. Obviously, this method halves the usual speed of executing instructions.

For this reason, a cache with parallel lookup capability, the fast table, also known as the associative memory (TLB), is added to the address translation mechanism to hold the page table entries accessed most recently, so as to speed up address translation. Correspondingly, the page table in main memory is often called the slow table. The address translation mechanism with a fast table is shown in Figure 3-10.

Figure 3-10 Address translation mechanism with a fast table

In a paging mechanism with a fast table, the address translation process is as follows:

After the CPU issues the logical address, the hardware performs address translation, sends the page number to the fast table, and compares this page number with all page numbers in the fast table.

If a matching page number is found, the page table entry to be accessed is in the fast table, so the corresponding page frame number is taken directly from it and concatenated with the intra-page offset to form the physical address. In this way, the data access is completed with only one memory access.

If no match is found, the page table in main memory must be accessed; after the page table entry is read out, it is also stored in the fast table for possible later accesses. If the fast table is already full, an old page table entry must be replaced according to some algorithm.

Note: some processors are designed to look up the fast table and the slow table at the same time; if the lookup in the fast table succeeds, the lookup of the slow table is abandoned.

The hit rate of a typical fast table can exceed 90%, so the speed loss caused by paging drops below 10%. The effectiveness of the fast table rests on the well-known principle of locality, which will be discussed in detail later in the virtual memory section.
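
The fast-table lookup can be sketched in software as follows. A real TLB is a hardware associative memory searched in parallel; the structure layout, the table sizes, and the slot-0 replacement used here are assumptions made only to keep the sketch short.

```c
#include <stdint.h>

#define TLB_SIZE  16                   /* hypothetical number of TLB entries */
#define PT_LENGTH 1024                 /* hypothetical page table length     */

struct tlb_entry {
    int      valid;
    uint32_t page;                     /* page number (the key) */
    uint32_t frame;                    /* page frame number     */
};

static struct tlb_entry tlb[TLB_SIZE]; /* fast table            */
static uint32_t slow_table[PT_LENGTH]; /* page table in memory  */

static uint32_t lookup_frame(uint32_t page)
{
    /* The hardware compares the page number with all TLB entries in
     * parallel; this loop only models that comparison. */
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;       /* hit: only one memory access needed */

    /* Miss: read the entry from the slow table (an extra memory access),
     * then cache it in the fast table for later accesses.  A real design
     * replaces an old entry by some algorithm; slot 0 is overwritten here
     * only to keep the sketch short. */
    uint32_t frame = slow_table[page];
    tlb[0] = (struct tlb_entry){ .valid = 1, .page = page, .frame = frame };
    return frame;
}
```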

4) Two-level page table

The second problem: with paging management, the process does not need to bring all of its pages into memory page frames, but it does need to keep the page table that records the mapping in memory. However, we still have to consider the size of the page table. Take a 32-bit logical address space, a 4KB page size, and 4B page table entries as an example: to map the whole logical address space, each process needs 2^20, roughly one million, page table entries. In other words, the page table alone requires 4MB of main memory per process, which is clearly unrealistic. Even without mapping the whole logical address space, a process with a slightly larger logical address space still ends up with an oversized page table. Take a 40MB process as an example: its page table entries total 40KB, and if all of them are kept in memory, 10 memory page frames are needed just for the page table. The whole process is about 10,000 pages in size, yet in practice only a few dozen pages need to be resident in memory page frames for it to run; requiring the 10-page page table to be resident in its entirety, compared with the few dozen process pages actually needed, certainly lowers memory utilization. On the other hand, those 10 pages of page table entries need not be kept in memory at the same time, because in most cases the page table entries needed for the current mappings fall within a single page of the page table.
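
The figures in this paragraph are easy to verify; the short C program below simply restates the stated assumptions (32-bit address space, 4KB pages, 4B page table entries, a 40MB process) and prints the resulting sizes.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t addr_space   = 1ull << 32;        /* 32-bit logical address space */
    uint64_t page_size    = 4 * 1024;          /* 4KB pages                    */
    uint64_t pte_size     = 4;                 /* 4B page table entries        */
    uint64_t process_size = 40ull * 1024 * 1024; /* 40MB example process       */

    uint64_t full_entries = addr_space / page_size;     /* 2^20 = 1,048,576 */
    uint64_t full_table   = full_entries * pte_size;    /* 4MB              */

    uint64_t proc_entries = process_size / page_size;   /* 10,240 pages     */
    uint64_t proc_table   = proc_entries * pte_size;    /* 40KB             */
    uint64_t table_frames = proc_table / page_size;     /* 10 page frames   */

    printf("full table: %llu entries, %llu KB\n",
           (unsigned long long)full_entries,
           (unsigned long long)(full_table / 1024));
    printf("40MB process: %llu entries, %llu KB, %llu frames for the table\n",
           (unsigned long long)proc_entries,
           (unsigned long long)(proc_table / 1024),
           (unsigned long long)table_frames);
    return 0;
}
```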

Extending the idea of page table mapping one step further, we obtain two-level paging: perform address mapping on the 10-page space occupied by the page table itself, and set up a higher-level page table to hold the mapping of those page-table pages. Here, only 10 entries are needed to map the 10 pages of the page table, so a single page is more than enough for the top-level page table (one page can hold 2^10 = 1024 entries). When the process executes, only this one page of the top-level page table needs to be brought into memory; the process's page table, and the process's own pages, can be brought into memory later during execution.

Figure 3-11 shows the address translation process for hardware paging on the Intel 80x86 series of processors. In a 32-bit system, the entire 32-bit logical address space can be divided into 2^20 (4GB/4KB) pages. The page table for these pages occupies 2^10 pages, so a further top-level page table with 2^10 entries is needed to index it, which is exactly one page in size; hence a two-level page table is established.

Figure 3-11 Hardware paging address translation

For example, consider paging at work in a process on a 32-bit system: suppose the kernel has assigned a running process a logical address space from 0x20000000 to 0x2003FFFF, consisting of 64 pages. While the process runs, we do not need to know the physical addresses of the page frames of all these pages; many of them may well not be in main memory at all. Here we only pay attention to how the hardware computes the physical address of the page frame when the process reaches a given page. Suppose the process now needs to read the byte at logical address 0x20021406, which is handled as follows:

Logical address: 0x20021406 (0010 0000 0000 0010 0001 0100 0000 0110 B)

Top-level page table field: 0x80 (00 1000 0000 B)

Secondary page table field: 0x21 (00 0010 0001 B)

Intra-page offset field: 0x406 (0100 0000 0110 B)

The top-level page table field 0x80 selects entry 0x80 of the top-level page table, which points to the second-level page table associated with the process's pages; the second-level page table field 0x21 selects entry 0x21 of that second-level page table, which points to the page frame containing the desired page; finally, the intra-page offset field 0x406 is used to read the byte at offset 0x406 within the target page frame.
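
The field decomposition in this example can be expressed directly in C. The shifts and masks follow the 10 + 10 + 12 split described above; the commented-out walk at the end names hypothetical tables, since the real ones live in kernel memory.

```c
#include <stdio.h>
#include <stdint.h>

/* 10-bit top-level index, 10-bit second-level index, 12-bit offset */
#define TOP_INDEX(a)  (((a) >> 22) & 0x3FFu)
#define SND_INDEX(a)  (((a) >> 12) & 0x3FFu)
#define OFFSET(a)     ((a) & 0xFFFu)

int main(void)
{
    uint32_t addr = 0x20021406u;

    printf("top-level field:  0x%03X\n", TOP_INDEX(addr));  /* 0x080 */
    printf("second-level:     0x%03X\n", SND_INDEX(addr));  /* 0x021 */
    printf("in-page offset:   0x%03X\n", OFFSET(addr));     /* 0x406 */

    /* A full walk would then do (tables and frame numbers hypothetical):
     *   pt   = top_level_table[TOP_INDEX(addr)];   -> second-level page table
     *   pte  = pt[SND_INDEX(addr)];                -> page frame number
     *   phys = (pte << 12) | OFFSET(addr);
     */
    return 0;
}
```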

This example is close to actual practice on a 32-bit system. Working through the seemingly complicated example helps deepen understanding, and the reader is encouraged to carry out the translation by hand.

The goal of a multi-level page table is to build an index, so that main memory space is not wasted storing useless page table entries and page table entries need not be searched sequentially; the requirement for building the index is that the top-level page table occupy no more than one page. On 64-bit systems, the division of the page table has to be reconsidered. This topic is rarely covered in textbooks and study guides, and many of the analyses that are given are flawed, so care is needed.

Assume the 4KB page size is still used, so the offset field is 12 bits, and assume the page table entry size is 8B. Then each page frame can hold only 2^9 (4KB/8B) page table entries rather than 2^10, so each paging field above the offset is 9 bits. Continuing to divide in the same way gives 64 = 12 + 9 + 9 + 9 + 9 + 9 + 7. Many books still analyze this with 4B page table entries; although the 6-level paging result they obtain looks odd, it is simply wrong. Below are the paging levels of two actual 64-bit systems (note: neither uses the full 64 bits for addressing, but for reasons of address byte alignment they still use 8B page table entries). Once the division schemes in Table 3-2 are understood, multi-level paging should be quite clear.

Table 3-2 Paging schemes of two systems

Platform   Page size   Address bits   Paging levels   Field division
Alpha      8KB         43             3               13+10+10+10
X86_64     4KB         48             4               12+9+9+9+9

Basic segmented storage management method

The paging management method was conceived and designed from the computer's point of view, to improve memory utilization and raise the computer's performance, and paging is implemented entirely through a hardware mechanism. The segmentation method, by contrast, was proposed with users and programmers in mind, to satisfy the needs of convenient programming, information protection and sharing, dynamic growth, and dynamic linking.

1) Segmentation.

The segmentation management method divides logical space according to the natural segments in the user's process. For example, if a user process consists of a main program, two subroutines, a stack, and a piece of data, the process can be divided into five segments. Each segment is addressed starting from zero and is allocated a contiguous address space (contiguity is required within a segment but not between segments, so the whole job's address space is two-dimensional). Its logical address consists of two parts: the segment number S and the intra-segment offset W.

In Figure 3-12, if the segment number field is 16 bits and the intra-segment offset is 16 bits, a job can have at most 2^16 = 65536 segments, and the maximum segment length is 64KB.

Figure 3-12 Logical address structure in a segmented system

In a paged system, the page number and intra-page offset of a logical address are transparent to the user, but in a segmented system the segment number and intra-segment offset must be supplied explicitly by the user. At the level of high-level programming languages, this work is done by the compiler.

2) Segment table.

Each process has a segment table that maps its logical space to memory space. Each segment table entry corresponds to one segment of the process and records that segment's start address in memory and the segment length. The contents of the segment table are shown in Figure 3-13.

Figure 3-13 Segment table entries

After the segment table has been set up, an executing process can find the memory area corresponding to each segment by looking up the segment table. The segment table is thus used to implement the mapping from logical segments to physical memory areas, as shown in Figure 3-14.

Figure 3-14 Using the segment table to implement address mapping

3) Address translation mechanism.

The address translation process in a segmented system is shown in Figure 3-15. To implement the translation from logical address to physical address, a segment-table register is set up in the system to hold the segment table start address F and the segment table length M. The translation from logical address A to physical address E proceeds as follows:

From the logical address A, take the first several bits as the segment number S and the remaining bits as the intra-segment offset W.

Compare the segment number S with the segment table length M. If S >= M, an out-of-bounds interrupt occurs; otherwise, continue.

The address of the segment table entry for segment number S = segment table start address F + S × segment table entry length; read the first several bits of the entry to obtain the segment length C. If the intra-segment offset W >= C, an out-of-bounds interrupt occurs; otherwise, continue.

Take the segment start address b from the segment table entry, calculate E = b + W, and access memory with the resulting physical address E.

Figure 3-15 Address translation process in a segmented system
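
Here is a minimal C sketch of these segmented-translation steps, including both bounds checks. The segment table contents and the 16-bit/16-bit field split (taken from Figure 3-12) are illustrative assumptions.

```c
#include <stdio.h>
#include <stdint.h>

struct seg_entry {
    uint32_t length;                  /* segment length C        */
    uint32_t base;                    /* segment start address b */
};

/* Hypothetical segment table with M = 3 segments. */
static const struct seg_entry seg_table[] = {
    { 0x0400, 0x10000 }, { 0x1000, 0x20000 }, { 0x0200, 0x38000 }
};
#define SEG_TABLE_LEN (sizeof seg_table / sizeof seg_table[0])

/* Translate a 32-bit logical address (16-bit segment number S, 16-bit
 * intra-segment offset W) into physical address E; 0 means out of bounds. */
static int seg_translate(uint32_t A, uint32_t *E)
{
    uint32_t S = A >> 16;             /* segment number        */
    uint32_t W = A & 0xFFFFu;         /* intra-segment offset  */

    if (S >= SEG_TABLE_LEN)           /* check against segment table length M */
        return 0;
    if (W >= seg_table[S].length)     /* check against segment length C       */
        return 0;

    *E = seg_table[S].base + W;       /* E = b + W */
    return 1;
}

int main(void)
{
    uint32_t E;
    if (seg_translate((1u << 16) | 0x0123, &E))   /* segment 1, offset 0x123 */
        printf("physical address: 0x%X\n", E);    /* 0x20123 */
    return 0;
}
```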

4) Sharing and protection of segments.

In a segmented system, segment sharing is accomplished by having the entries in the segment tables of two jobs point to the same physical copy of the shared segment. While one job reads data from the shared segment, other jobs must be prevented from modifying the data in it. Code that cannot be modified is called pure code or reentrant code (it is not a critical resource); such code and unmodifiable data can be shared, while modifiable code and data cannot.

As with paging management, segmentation management provides two main forms of protection: access-control protection and address out-of-bounds protection. Address out-of-bounds protection compares the segment table length in the segment-table register with the segment number in the logical address; if the segment number is greater than or equal to the segment table length, an out-of-bounds interrupt occurs. It then compares the segment length in the segment table entry with the intra-segment offset in the logical address; if the intra-segment offset is greater than or equal to the segment length, an out-of-bounds interrupt also occurs.

Segment-page management method

Paged storage management effectively improves memory utilization, while segmented storage management reflects the logical structure of the program and facilitates segment sharing. Combining these two storage management methods yields the segment-page storage management method.

In a segment-page system, the job's address space is first divided into several logical segments, each with its own segment number, and each segment is then divided into several fixed-size pages. Memory space is managed just as in paging storage management: it is divided into storage blocks of the same size as the pages, and memory is allocated in units of storage blocks, as shown in Figure 3-16.

Figure 3-16 Segment-page management method

In a segment-page system, the logical address of a job is divided into three parts: the segment number, the page number, and the intra-page offset, as shown in Figure 3-17.

Figure 3-17 Logical address structure in a segment-page system

To implement address translation, a segment table is set up for each process, and a page table is set up for each segment. A segment table entry contains at least the segment number, the page table length, and the page table start address; a page table entry contains at least the page number and the block number. In addition, the system should have a segment-table register indicating the start address and length of the job's segment table.

Note: in a process there is only one segment table, but there may be more than one page table.

During address translation, the page table start address is first found through the segment table, the page frame number is then found through the page table, and finally the physical address is formed. As shown in Figure 3-18, one access to data or an instruction requires three accesses to main memory. A fast table can also be used here to speed up the lookup; its key is formed from the segment number and page number, and its value is the corresponding page frame number and protection code.

Figure 3-18 Address translation mechanism in a segment-page system
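
Finally, a sketch that combines the two lookups just described: segment table first, then that segment's page table, then page frame number plus offset. All table contents, field widths, and sizes are invented for illustration; a real system would also fold in the protection checks and the fast table.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

struct page_table { uint32_t frame[16]; };          /* page number -> frame  */
struct seg_entry  { uint32_t pt_len;                /* page table length     */
                    const struct page_table *pt; }; /* page table start addr */

/* Hypothetical tables: one segment whose page table maps pages 0 and 1. */
static const struct page_table pt0 = { .frame = { 7, 9 } };
static const struct seg_entry  seg_table[] = { { 2, &pt0 } };

/* Logical address = (segment s, page p, intra-page offset w). */
static int sp_translate(uint32_t s, uint32_t p, uint32_t w, uint32_t *E)
{
    if (s >= sizeof seg_table / sizeof seg_table[0])
        return 0;                                   /* segment out of bounds */
    if (p >= seg_table[s].pt_len)
        return 0;                                   /* page out of bounds    */

    uint32_t frame = seg_table[s].pt->frame[p];     /* second memory access  */
    *E = frame * PAGE_SIZE + w;                     /* third access uses E   */
    return 1;
}

int main(void)
{
    uint32_t E;
    if (sp_translate(0, 1, 0x10, &E))
        printf("physical address: 0x%X\n", E);      /* 9*4096 + 0x10 = 0x9010 */
    return 0;
}
```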
