
Introduction to the Linux Memory Mechanism


This article introduces the Linux memory mechanism. Many readers have questions about how Linux manages memory, so the editor has consulted various materials and organized them into a simple, practical explanation. I hope it helps resolve your doubts about the Linux memory mechanism. Let's get started.

I. Let's first look at an example of memory usage:

[oracle@db1] $ free -m
             total       used       free     shared    buffers     cached
Mem:         72433      67075       5357          0        558      62221
-/+ buffers/cache:       4295      68138
Swap:        72096         91      72004

The output above shows 67075 MB used, but the -/+ buffers/cache line subtracts buffers and cached from that figure, so the memory actually consumed by processes is only about 4296 MB.

It can be understood this way: Linux's memory allocation mechanism uses physical memory preferentially and does not release it while there is still free memory available. Even after a program that occupied memory has been closed, the memory it used is kept as cache, so reopening that program or re-reading recently accessed data is faster.

In the example above, 67075 MB of the 72433 MB of memory is occupied, but the buffers and cached portions serve as a cache that improves efficiency through cache hits, and this part can be released on demand at any time. We can therefore regard that memory as not really in use, i.e., effectively idle.

Therefore, the memory actually used by current processes is used - (buffers + cached). Put another way, as long as swap is not being used heavily, memory is still sufficient; swap only comes into play when the memory genuinely occupied by processes (excluding buffers and cache) approaches the total.
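As an illustration only (not part of the original article), the following Python sketch reproduces the used - (buffers + cached) calculation by parsing /proc/meminfo on a Linux system; on kernels 3.14 and later, the MemAvailable field is usually a better estimate of memory available to new programs.

# Minimal sketch, assuming a Linux host: reproduce the used - (buffers + cached)
# calculation by parsing /proc/meminfo, whose values are reported in kB.
def meminfo_mb():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0]) // 1024  # kB -> MB
    return info

m = meminfo_mb()
used = m["MemTotal"] - m["MemFree"]
print("used:", used, "MB")
print("used minus buffers/cache:", used - m["Buffers"] - m["Cached"], "MB")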

II. The impact of Swap configuration on performance

Allocating too much swap space wastes disk space, while allocating too little causes errors. If the system runs out of physical memory it will slow down but keep running; if swap space is exhausted, the system will start failing. For example, a web server spawns multiple service processes (or threads) according to the number of requests; if swap is exhausted, new service processes cannot be started, an "application is out of memory" error typically appears, and in serious cases the service processes may deadlock. So sizing swap space correctly is very important.

In general, swap space should be greater than or equal to the size of physical memory and no smaller than 64 MB; a common guideline is 2 to 2.5 times physical memory. The right configuration depends on the application: a small desktop system needs only a little swap, while large servers need different amounts depending on circumstances. Database servers and web servers in particular need more swap space as traffic grows. As a rule of thumb, for physical memory below 4 GB configure swap at about twice the memory size; at 4 GB and above, swap roughly equal to memory is usually enough.
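For illustration only, here is a small Python helper that encodes the rule of thumb described above; it is a sketch of the guideline, not an official formula, and real sizing should account for the workload.

# Illustrative only: the article's rule of thumb for sizing swap space.
def suggested_swap_gb(ram_gb: float) -> float:
    if ram_gb < 4:
        return 2 * ram_gb   # below 4 GB of RAM: roughly twice the RAM
    return ram_gb           # 4 GB and above: roughly equal to RAM

for ram in (1, 2, 4, 16, 64):
    print(f"{ram} GB RAM -> about {suggested_swap_gb(ram)} GB swap")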

In addition, the number of swap partitions has a significant impact on performance. Swapping is disk I/O, so if there are several swap areas, swap space is allocated across them in turn, which balances the I/O load and speeds up swapping. If there is only one swap area, all swapping operations make that area very busy, leaving the system waiting on it most of the time and running inefficiently. With a performance monitoring tool you will find that the CPU is not very busy while the system is still slow: the bottleneck is I/O, and a faster CPU will not solve the problem.
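As a quick way to see how swap is laid out on a particular machine, the following sketch (assuming a Linux system) reads /proc/swaps, which lists each swap area with its size, usage, and priority; several areas with equal priority are used in round-robin fashion by the kernel.

# Sketch: list configured swap areas from /proc/swaps.
# Columns in that file are: Filename Type Size Used Priority (sizes in kB).
with open("/proc/swaps") as f:
    next(f)  # skip the header line
    for line in f:
        name, kind, size_kb, used_kb, priority = line.split()
        print(f"{name}: {int(used_kb) // 1024} MB used of "
              f"{int(size_kb) // 1024} MB ({kind}, priority {priority})")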

III. Linux memory mechanism

Linux supports virtual memory (Virtual Memory), meaning that disk is used as an extension of RAM so that the effective amount of usable memory grows. The kernel writes the contents of memory blocks that are temporarily unused to the hard disk so that the memory can be used for other purposes; when the original content is needed again, it is read back into memory. All of this is transparent to the user: programs running under Linux only see a large amount of available memory and do not notice that parts of it reside on disk from time to time. Of course, reading and writing the hard disk is much slower (thousands of times slower) than accessing real memory directly, so programs do not run as fast as they would if everything stayed in memory. The portion of the hard disk used as virtual memory is called swap space (Swap Space).

In general, pages in swap space are swapped into memory when they are needed; if there is then not enough physical memory to hold them, other pages are swapped out (to swap space) in turn. If there is not enough virtual memory to hold all of these pages, Linux starts to thrash and behaves abnormally; after a long time it may recover, but by then the system is effectively unusable.
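One simple way to watch for this kind of swap pressure (a sketch assuming a Linux host, not something from the original text) is to sample the pswpin and pswpout counters in /proc/vmstat, which count pages swapped in and out since boot; values that keep climbing under load suggest the system is thrashing.

# Sketch: sample swap-in/swap-out activity over a short interval.
import time

def swap_counters():
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    return counters

before = swap_counters()
time.sleep(5)
after = swap_counters()
print("pages swapped in over 5s :", after["pswpin"] - before["pswpin"])
print("pages swapped out over 5s:", after["pswpout"] - before["pswpout"])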

Sometimes there is plenty of free physical memory and yet a lot of swap space is still in use. This can happen, for example, when heavy swapping was needed at some point and then a large process that occupied much of the physical memory exited and freed it. The swapped-out data is not swapped back into memory automatically, only when it is actually needed, so physical memory can stay idle for a while. There is nothing to worry about here, but it is useful to know what is going on.

Many operating systems use virtual memory. When several of them are installed on the same machine, each one only needs its swap space while it is running, so they never use their swap spaces at the same time; the swap space belonging to any operating system that is not currently running is simply wasted. It is therefore more efficient for them to share a single swap space.

Note: if several people use the system at the same time, they all consume memory. However, if two people run the same program at the same time, total memory consumption does not double, because there is only one copy of the program's code pages and shared libraries.

Linux systems often use swap space pre-emptively so as to keep as much physical memory free as possible: Linux swaps out memory pages that are temporarily unused even when the memory is not yet needed for anything else. This avoids having to wait for swapping when the memory is eventually required, since the swapping can be done ahead of time while the disk is otherwise idle. Swap space can also be spread over several hard disks, which can improve performance depending on the speed of the disks involved and the access patterns to them.

Compared with accessing physical memory, reading and writing a disk is very slow. In addition, it is common for the same part of a disk to be read several times within a short period. For example, someone might read an e-mail message, then read it into an editor when replying, and then have the mail program read it again when copying it into a folder. Or consider how often the ls command is run on a system with many users. By reading the information from disk only once and then keeping it in memory, all reads after the first can be made much faster. This is called disk buffering (Disk Buffering), and the memory used for it is called the buffer cache (Buffer Cache). However, because memory is a limited and scarce resource, the cache cannot be very large (it cannot hold all the data anyone might ever want to use). When the cache fills up, the data that has gone unused longest is discarded to free space for new data.
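To make the effect concrete, here is a rough, illustrative timing sketch (the file path is a placeholder; any reasonably large existing file will do): the second read of the same file is normally served largely from the buffer cache and finishes much faster than the first.

# Rough demonstration of disk buffering: read the same file twice and time it.
import time

PATH = "/var/log/syslog"  # placeholder; substitute any large file on your system

def timed_read(path):
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):  # read in 1 MB chunks until EOF
            pass
    return time.perf_counter() - start

first = timed_read(PATH)
second = timed_read(PATH)
print(f"first read: {first:.3f}s, second (cached) read: {second:.3f}s")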

Disk buffering works for writes as well. On the one hand, data that is written is often read back soon afterwards (for example, a source file is saved and then read in by the compiler), so keeping written data in the cache is a good idea. On the other hand, programs can run faster when their data is only placed in the cache rather than written to disk immediately; the actual write can then be completed in the background without delaying the program.

Most operating systems have a cache of this kind (although it may go by a different name), but not all of them work as described above. Some are write-through (Write-Through): data is written to disk immediately, while also being kept in the cache. If the write to disk is deferred until later, the cache is called write-back (Write-Back). Write-back is more efficient than write-through, but it is also more error-prone: if the machine crashes or the power fails suddenly, the changed data still sitting in the cache is lost. If the data that was never written out contained important bookkeeping information, the file system may even be left in an inconsistent state.

For this reason, journaling file systems have become widespread. When data is modified in the cache, the file system records the modification in a log at the same time, so that even if the system loses power, the data can be recovered from the log after reboot and is not lost. These issues, however, are beyond the scope of this article.

For the same reason, never turn off the power without going through a proper shutdown procedure. The sync command flushes the buffers, i.e., forces all unwritten data out to disk, and can be used to make sure all writes have completed. On traditional UNIX systems a program called update runs in the background and calls sync every 30 seconds, so the sync command rarely needs to be run by hand. Linux additionally has a daemon, bdflush, which performs more frequent but less complete flushes, avoiding the sudden disk stalls that sync's heavy burst of disk I/O can cause.
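From within a program, the same effect can be requested explicitly. The short sketch below (illustrative only, with a made-up file name) flushes one file's data with os.fsync and then flushes everything system-wide with os.sync, the programmatic counterpart of the sync command.

# Sketch: force write-back data to stable storage from a Python program.
import os

with open("important.log", "w") as f:   # file name is hypothetical
    f.write("bookkeeping record\n")
    f.flush()                           # push Python's buffer into the kernel page cache
    os.fsync(f.fileno())                # ask the kernel to write this file's data to disk

os.sync()  # like the sync command: flush all dirty buffers system-wide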

In Linux, bdflush is started by update. There is usually no reason to worry about it, but if the bdflush process dies for some reason, the kernel will warn you, and you should start it manually (/sbin/update).

The cache does not actually buffer files but blocks, which are the smallest units of disk I/O (under Linux they are usually 1 KB). In this way directories, superblocks, other file system bookkeeping data, and disk blocks that belong to no file system at all can be cached. The effectiveness of the cache is mainly determined by its size: if it is too small it is of little use, since it holds so little data that everything is evicted before it can be reused. The right size depends on how often data is read and written and how often the same data is accessed again, and can only be determined by experiment.

If the cache had a fixed size, making it very large would also be bad, because free memory would become too small and cause swapping (which is likewise slow). To use real memory as efficiently as possible, Linux automatically uses all otherwise free memory as cache and automatically shrinks the cache when programs need more memory.

This concludes the introduction to the Linux memory mechanism. I hope it has cleared up your doubts; combining the theory with hands-on practice will help you learn it even better, so go and try it. If you want to keep learning more related knowledge, please continue to follow this site, where more practical articles will be published.
