
How to understand buffer cache free in Linux


Today I would like to talk with you about how to understand buffer, cache, and free in Linux. Many people may not know much about these terms, so to help you understand them better I have summarized the following content. I hope you get something out of this article.

Enter free in Linux to view the server's memory usage. The command accepts the following options:

1) -b, --bytes: display memory usage in bytes

2) -k, --kilo: display memory usage in KB (the default)

3) -m, --mega: display memory usage in MB

4) -g, --giga: display memory usage in GB

5) -h, --human: automatically scale values into a human-readable form

6) -c, --count: show the result count times; used together with -s

7) -s, --seconds: the interval, in seconds, between refreshes
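
As a quick usage sketch (assuming a Linux host with procps's free installed), the options above can be combined; this minimal Python snippet just drives free and captures its output:

    import subprocess

    # Roughly equivalent to typing at the shell: free -h -s 5 -c 3
    # -h: human-readable units, -s 5: refresh every 5 seconds, -c 3: stop after 3 samples
    result = subprocess.run(["free", "-h", "-s", "5", "-c", "3"],
                            capture_output=True, text=True, check=True)
    print(result.stdout)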

On this server, free reports:

total: total memory, 251G

used: memory in use, 185G

free: unused memory, 12G

shared: memory shared by multiple processes, 4.1G

buff/cache: buffer cache plus page cache. Buffers stage data on its way to the hard disk (solving a space problem); the page cache keeps data read from disk in memory so the CPU can fetch it quickly (solving a time problem). Here buff/cache = total - used - free = 251 - 185 - 12 = 54G.

available: an estimate of the memory currently available to new applications. The relationships are roughly:

total = used + free + buff/cache
available ≈ free + buff/cache - shared

The third part of free's output refers to the swap partition.

The role of Swap space can be described simply: when the system's physical memory is insufficient, part of it must be freed for the programs currently running. The freed pages may come from programs that have been inactive for a long time; they are temporarily saved in Swap, and when those programs run again, the saved data is restored from Swap to memory. The system therefore does not touch Swap until physical memory runs short. In practice, tuning Swap matters a great deal for the performance of Linux servers, especially Web servers: by adjusting Swap you can sometimes overcome a performance bottleneck and save the cost of a system upgrade.

The free command shows the current system memory usage, with data taken from the /proc/meminfo file. Let's cat it: as you can see, the values in this file are stored in KB, which is why free defaults to KB.
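
As a hedged illustration of that data source, here is a minimal Python sketch that re-derives free's figures from /proc/meminfo (the buff/cache composition shown matches procps in broad strokes, but treat it as an approximation rather than the exact kernel accounting; MemAvailable requires a reasonably recent kernel):

    # All values in /proc/meminfo are reported in kB, hence free's default unit.
    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            meminfo[key] = int(value.split()[0])  # drop the trailing "kB"

    total = meminfo["MemTotal"]
    free_kb = meminfo["MemFree"]
    # free's buff/cache column is roughly Buffers + Cached + SReclaimable
    buff_cache = meminfo["Buffers"] + meminfo["Cached"] + meminfo.get("SReclaimable", 0)
    used = total - free_kb - buff_cache
    print(f"total={total} used={used} free={free_kb} buff/cache={buff_cache} (kB)")
    print(f"available (kernel's own estimate): {meminfo['MemAvailable']} kB")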

The difference between Buffer and Cache in Linux

1. Cache: a small but fast memory that sits between the CPU and main memory. Because the CPU is much faster than main memory, it would otherwise wait a long time for data read from memory; the Cache keeps the data the CPU has just used, or data that is reused, so reading from Cache is faster, the CPU waits less, and system performance improves.

Cache does not cache files but blocks (the block is the smallest unit the Cache reads and writes); Cache is generally used for I/O requests. If multiple processes want to access a file, it can be read into Cache once, so that the next process to get the CPU can read the file straight from Cache, which improves system performance.

2. Buffer: a buffer area used to stage data transferred between devices whose speeds are out of sync, or between devices with different priorities. A buffer reduces waiting time in such exchanges: when a fast device communicates with a slow one, the slow device's data is first collected in the buffer, and at a suitable point the fast device (the CPU, say) reads it from the buffer. In the meantime, the fast device can do other things.

Buffer is generally used for writing to disk. For example, when a process requires several fields to be read in, the fields read so far are kept in the buffer until all of them have arrived.

Cache explanation

Suppose a natural disaster strikes (an earthquake, say) and leaves residents short of food and clothing, so fire engines are dispatched to deliver water to several settlements.

The fire engine arrives at the first settlement and opens its valve to release water, and the people come with pots and pans to collect it.

Say the fire engine spends 100 minutes releasing water at a settlement, then takes half an hour to replenish its tank before driving to the next one. Going back and forth like this, it manages 4-5 settlements a day.

But consider the fire engine's capacity. With the valve fully open, its strong water pressure could easily reach more than 10 floors up, and the whole tank could be emptied in 10 minutes. Because residents collect the water in pots and pans, opening the valve 100% would just hose people down, so it can only be opened a little (say 10% of the flow). That cuts the discharge efficiency to 10% of what it could be, turning 10 minutes into 100.

In that case, can we improve the process of releasing water so that fire engines can discharge water with maximum efficiency and rush to the next residential area as soon as possible?

The way is to build cisterns in residential areas.

The fire engine pours its water into the cistern at 100% efficiency, finishes in 10 minutes, and leaves. Residents then fetch water from the cistern bit by bit.

If we analyze this example, we can see the meaning of Cache.

Fire engines have to deliver water to residents, and residents have to pick up water from fire engines, that is to say, there is interaction and connection between residents and fire engines.

But the fire engine is a "high-speed device" and the residents are "low-speed devices"; the slow residents cannot keep up with the fast fire engine, so the fire engine is forced to slow its discharge to suit them.

To avoid this, an extra layer of "Cache" is inserted between the fire engine and the residents: on one side it deals with the fire engine at 100% efficiency, on the other with the residents at the inefficient 10%. This frees the fire engine to run at full efficiency without being held back by the slow residents, so it only needs to stay in each settlement for 10 minutes.

Therefore, the cistern is a "living Lei Feng" (a selfless helper): it gives the efficiency to others and keeps the inefficiency for itself. The fire engine now needs only 10 minutes, while the cistern itself spends 100 minutes dispensing to residents.

As the example shows, the so-called Cache is an intermediate layer set up "to reconcile the mismatch between a high-speed device and a low-speed device", because in reality fast devices constantly have to deal with slow ones and would otherwise be held back by them.

Take the PC as an example. The CPU is very fast, but the instructions it executes are fetched from memory and the results of its calculations are written back to memory, and memory cannot respond as quickly as the CPU.

The CPU says to memory, "send me the instruction at such-and-such an address." Memory hears it, but being slow, takes a while to return the instruction, and during that time the CPU can only wait idly. So no matter how fast the CPU is, it gains no efficiency.

What to do? Add a "cistern" between the CPU and memory, namely the Cache (on-chip cache). The Cache is faster than memory, and fetching instructions from it involves little waiting. When the CPU wants an instruction it reads the Cache first, then memory. At first the Cache is empty, so the instruction must still come from slow memory and the CPU has to wait; but each fetch from memory brings back not only the instruction the CPU needs now but also neighboring instructions it does not need yet, and these are stored in the Cache for later.

When the CPU fetches an instruction, it checks the Cache first to see whether the instruction is there. If it happens to be (a hit), it is returned directly from the Cache with no waiting, which frees the CPU and improves efficiency. (Of course the hit rate is never 100%, because the Cache is smaller than memory.)

A CPU's Cache can have several levels, and is also divided into a data Cache and an instruction Cache.
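
To make the hit/miss mechanics concrete, here is a toy Python sketch (purely illustrative: a real CPU cache is hardware operating on fixed-size cache lines, not a Python dict). A small table sits in front of a slow fetch, and repeated addresses, i.e. locality, turn into hits:

    import time

    cache = {}
    hits = misses = 0

    def slow_fetch(addr):
        time.sleep(0.01)   # stand-in for a slow trip to main memory
        return addr * 2    # pretend this is the data at that address

    def cached_fetch(addr):
        global hits, misses
        if addr in cache:  # hit: answer immediately, no waiting
            hits += 1
            return cache[addr]
        misses += 1        # miss: go to "memory", then keep a copy for next time
        cache[addr] = slow_fetch(addr)
        return cache[addr]

    for addr in [1, 2, 1, 1, 3, 2]:  # repeated addresses benefit from the cache
        cached_fetch(addr)
    print(f"hits={hits} misses={misses}")  # hits=3 misses=3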

The same goes for the disk cache. Memory was just called a slow device that needs an on-chip cache, but "slow" there is relative to the CPU; memory is much faster than a mechanical hard disk (HDD).

As for disk reads and writes, long ago the CPU had to take part in the transfer; then DMA (direct memory access) appeared and the CPU was no longer needed. Even so, heavy, long-running disk I/O remains very time-consuming, because the disk is a mechanically rotating device: next to the voltage switching of the CPU and memory, its read/write speed is the difference between a steam engine and a rocket.

To speed up reading and writing, a layer of Cache is inserted between disk and memory as well (Windows sets aside an area of memory as Cache, and hard disks also carry on-board Cache). When writing, data goes to Cache first; because Cache is fast, the write completes quickly. For example, writing 1 gigabyte directly to the hard disk might take 10 seconds, but writing it to Cache (that is, system memory) takes only 1 second, so the user has the impression the system is very fast. It is an illusion: the data sits temporarily in Cache and has not really been written to disk; it is written out slowly when the system is idle. Likewise when reading, besides the requested data, a pile of data not currently needed is read out along with it and placed in the memory Cache. Next time, if the required data happens to be in Cache, it is read directly (a hit), avoiding the embarrassment of reading from the slow HDD, and again the experience is fast. (Again, the hit rate is not 100%, because RAM is much smaller than the hard drive.)
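
A minimal write-behind sketch along the same lines (an illustration of the idea only, not how any real OS implements its page cache): writes land in a fast in-memory table, and the slow "disk" is updated later, when the system would be idle:

    slow_disk = {}     # stand-in for the hard drive
    write_cache = {}   # stand-in for the in-memory disk cache

    def write(block, data):
        write_cache[block] = data  # returns immediately: the write feels fast

    def flush():
        slow_disk.update(write_cache)  # the real (slow) writes happen here, later
        write_cache.clear()

    write(0, b"hello")
    write(1, b"world")  # both writes complete at memory speed
    flush()             # done "when the system is idle"
    print(slow_disk)    # {0: b'hello', 1: b'world'}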

Buffer explanation

For example, when the grapes in Turpan are ripe, they have to be shipped out in big trucks to sell.

Of course, a girl picking grapes in the orchard does not carry each bunch to the truck by hand. There is a "basket" step in between: pick grapes into the basket → pour the basketful into the truck.

In other words, although the ultimate goal is to "pour the grapes into the truck", the grapes must change hands through the basket. The basket here is the Buffer: a "space for temporary storage".

Pay attention to 2 keywords: temporary, space

In other words, in order to achieve the ultimate goal: to put the grapes in the space of the truck, you need to put the grapes in the space of the basket for a while.

Take BT (BitTorrent) as an example. BT downloads mean long periods of hanging online; the computer may well run 24 hours a day. The downloaded data arrives in fragments, and the writes to the hard disk are therefore fragmented too. Because the hard disk is a mechanical seeking device, fragmented writes keep it under long, heavy mechanical load and age it prematurely; at one time large numbers of hard disks were damaged as a result of BT downloading.

So newer BT software opens a Buffer in memory: data is first written to the Buffer, saved up to a certain size (say 512 MB), and then written to the hard disk in one go, which greatly reduces the load on the disk.

This is: in order to achieve the ultimate goal: to write the data to the hard disk space, you need to temporarily write to the Buffer space.
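
A sketch of that accumulate-then-flush pattern in Python (the file name and the 512 MB threshold are invented for illustration, not taken from any particular BT client):

    class BufferedWriter:
        def __init__(self, path, threshold):
            self.path = path
            self.threshold = threshold
            self.chunks = []
            self.size = 0

        def write(self, data):
            self.chunks.append(data)  # fragments accumulate in memory first
            self.size += len(data)
            if self.size >= self.threshold:
                self.flush()          # one large sequential write instead of many small ones

        def flush(self):
            with open(self.path, "ab") as f:
                f.write(b"".join(self.chunks))
            self.chunks, self.size = [], 0

    w = BufferedWriter("download.part", threshold=512 * 1024 * 1024)  # hypothetical 512 MB
    w.write(b"piece-0")  # many small fragmented pieces arrive over hours...
    w.flush()            # ...and reach the disk as one big write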

Taking programming as an example, suppose you want to implement a function: accept a string typed by the user and assign a value to a string variable

The process is as follows:

1: open a "keyboard buffer" in memory to accept strings typed by users

2: copy the string in the buffer to the memory space pointed to by the string variable defined in the program (that is, the assignment process)

That is, in order to achieve the ultimate goal: to put the string into the space pointed to by the string variable, you need to temporarily put the string into the space of the "keyboard buffer".
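
In Python the two steps look like this (the OS and the runtime maintain the keyboard buffer for you; the sketch just makes the copy from buffer to variable visible):

    import sys

    # Step 1: typed characters collect in the input (keyboard) buffer until
    # Enter is pressed; readline() then hands us the buffered line.
    line = sys.stdin.readline()

    # Step 2: copy from the buffer into the program's own string variable.
    user_string = line.rstrip("\n")
    print(f"assigned: {user_string!r}")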

The three examples above: the basket, BT's Buffer, and the keyboard buffer.

What problem does Buffer exist to solve? Find a temporary storage space!

Summary:

What Cache and Buffer have in common: both are a middle layer inserted between two parties, and both are made of memory.

The difference between Cache and Buffer: Cache solves the problem of time, while Buffer solves the problem of space.

In order to improve the speed, the middle layer of Cache is introduced.

In order to find a temporary storage space for information, Buffer is introduced as an intermediate layer.

In order to solve the problem of two different dimensions (time and space), we happen to take the same solution: add a middle tier, write the data to the middle tier first, and then write to the target.

This middle layer is memory, RAM. Being memory, it has two parameters: how fast it can be written (speed) and how much it can hold (capacity).

Cache takes advantage of the high read and write speed provided by RAM, while Buffer takes advantage of the storage capacity (space) provided by RAM.

1. A Buffer is used when the processing speeds at the two ends of a system are balanced (over a long time scale). It is introduced to absorb the short-term impact of sudden bursts of I/O and acts as traffic shaping. For example, in the producer-consumer problem, resources are produced and consumed at roughly the same average rate, and adding a buffer offsets sudden swings in production or consumption (see the sketch after this list).

2. A Cache is a compromise strategy for when the processing speeds at the two ends of a system do not match. Because the speed gap between CPU and memory keeps growing, people exploit the locality of data and a layered storage system (memory hierarchy) to reduce the impact of the gap.

3. Assuming that memory access becomes as fast as CPU computing in the future, cache can disappear, but buffer still exists. For example, when downloading something from the network, the instantaneous rate may change greatly, but it is stable in the long run, so that the rate at which the OS receives data can be more stable by introducing a buffer, further reducing the damage to the disk.
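
Here is the producer-consumer sketch promised above, using Python's queue.Queue as the buffer (the rates and sizes are invented for illustration): the producer's bursts are absorbed by the queue, so the consumer drains a steady stream:

    import queue, threading, time

    buf = queue.Queue(maxsize=8)  # the buffer: absorbs short bursts

    def producer():
        for i in range(16):
            buf.put(i)            # bursty: four items arrive back to back...
            if i % 4 == 3:
                time.sleep(0.2)   # ...then a pause

    def consumer():
        for _ in range(16):
            item = buf.get()
            time.sleep(0.05)      # steady consumption rate
            print("consumed", item)

    t = threading.Thread(target=producer)
    t.start()
    consumer()
    t.join()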

1. Buffer (buffering) is designed to improve the speed of data exchange between memory and the hard disk (or other I/O devices).

Buffers are geared to disk writes: scattered write operations are gathered up, which reduces disk fragmentation and repeated seeking and so improves system performance. Linux has a daemon that periodically flushes buffer contents (that is, writes them to disk); you can also flush the buffers manually with the sync command.

To put it simply, a buffer holds data about to be written to disk, while a cache holds data read from disk. Buffers are allocated by various processes and used in areas such as input queues. A simple example: a process requires several fields to be read in, and before all the fields are read completely it keeps the fields read so far in a buffer. Cache is often used for disk I/O requests: if multiple processes want to access a file, it is kept in cache so the next access is faster, which improves system performance.
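
A short Python illustration of forcing buffered data out to disk, analogous to the sync command mentioned above (the file name is hypothetical):

    import os

    with open("data.log", "wb") as f:
        f.write(b"important record\n")  # lands in user-space and page-cache buffers
        f.flush()                       # push Python's own buffer down to the kernel
        os.fsync(f.fileno())            # ask the kernel to put this file on disk now

    os.sync()  # like running sync: flush all dirty buffers system-wide (Unix only)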

2. Cache (cache)

From the CPU's point of view, cache exists to speed up data exchange between the CPU and memory: hence the first-level, second-level, and third-level caches. The instructions the CPU executes and the data it reads are all obtained from memory, but memory reads and writes slowly, so a cache faster than memory is added between the two. It costs more, and since a CPU cannot integrate too many circuits, the cache is kept small; to push speed further, companies such as Intel added a second-level and even a third-level cache. The design rests on the principle of locality of programs: the instructions the CPU executes and the data it accesses tend to cluster in one region, so once that region is in cache the CPU seldom has to touch memory, which raises access speed. Of course, if what the CPU needs is not in the cache, it must still go to memory.

From the perspective of memory reads versus disk reads, cache means the operating system uses spare memory to hold data that may be accessed again, in order to achieve higher read efficiency.

After reading the above, do you have a better understanding of buffer, cache, and free in Linux? If you want to learn more, please follow the industry information channel. Thank you for your support.
