What is the difference between memory buffer and cache in Linux

2025-02-23 Update From: SLTechnology News&Howtos (Servers)


Shulou(Shulou.com)06/01 Report--

In this article the editor shares the difference between the memory buffer and cache in Linux. Most people are not very familiar with this topic, so the article is shared for your reference; I hope you learn a lot from reading it. Let's get to know it!

Careful readers will notice that when you access files frequently under Linux, physical memory is quickly used up, and when the program finishes, the memory is not released as you might expect but stays in use as cache. Many people ask about this without finding a good answer, so let me explain it.

The difference between cache and buffer:

Cache: a cache is a small but high-speed memory located between the CPU and main memory. Because the CPU is much faster than main memory, the CPU has to wait when it accesses data directly from memory. Part of the data the CPU has just used or recently used is kept in the cache; when the CPU needs that data again, it can be fetched directly from the cache, which reduces the CPU's waiting time and improves system efficiency. Cache is divided into level-1 cache (L1 Cache) and level-2 cache (L2 Cache). L1 Cache is integrated into the CPU; L2 Cache was usually soldered onto the motherboard in the early days, but nowadays it is also integrated into the CPU. Common L2 Cache capacities are 256KB and 512KB.

Buffer: a buffer is an area used to stage data transferred between devices that run at different speeds or have different priorities. With a buffer, processes spend less time waiting on each other, so a fast device is not stalled while data is read from a slow device.

Buffer and cache in free (both occupy memory):

Buffer: memory used as the buffer cache, the read/write buffer for block devices

Cache: memory used as the page cache, the cache for the file system

If the cached value is large, a large number of files are being cached. If frequently accessed files can be held in the page cache, disk read IO will be very small.
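The two pools described above can be seen directly on a live system: the buffers and cached figures that free reports come straight from /proc/meminfo. A minimal sketch, assuming a Linux machine:

```shell
# Buffers = block-device (buffer) cache, Cached = file page cache,
# both reported in kB; free -m converts these same fields to MB.
grep -E '^(Buffers|Cached):' /proc/meminfo
```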

Let's start with the free command.

[root@server]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        163         86          0         10         94
-/+ buffers/cache:         58        191
Swap:          511          0        511

Where:

total: total memory

used: memory already in use

free: free memory

shared: memory shared by multiple processes

buffers: size of the Buffer Cache

cached: size of the Page Cache (the disk cache)

- buffers/cache: used - buffers - cached

+ buffers/cache: free + buffers + cached

Available memory = free + buffers + cached
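The equation above can be checked directly against /proc/meminfo, which is where free gets its numbers. A minimal sketch (note that on kernels 3.14 and later the MemAvailable field in the same file is the kernel's own, more accurate estimate):

```shell
# Available memory = free + buffers + cached (all /proc/meminfo values are in kB).
awk '/^MemFree:/ {free=$2}
     /^Buffers:/ {buf=$2}
     /^Cached:/  {cached=$2}
     END {printf "available: %d MB\n", (free + buf + cached) / 1024}' /proc/meminfo
```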

With this foundation, we can see from the output that used is 163MB, free is 86MB, buffers is 10MB, and cached is 94MB.

So let's see what happens to memory if I copy the file.

[root@server ~]# cp -r /etc ~/test/
[root@server ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          4          0          8        174
-/+ buffers/cache:         62        187
Swap:          511          0        511

After my command finished, used was 244MB, free 4MB, buffers 8MB, and cached 174MB. Oh my, the memory was eaten by the cache! Don't worry, this happens to improve the efficiency of file reading.

To improve the efficiency of disk access, Linux makes some careful design choices: in addition to caching dentries (which speed up the translation of file pathnames to inodes for the VFS), it uses two main cache mechanisms, the Buffer Cache and the Page Cache. The former handles reads and writes of disk blocks; the latter handles reads and writes of file data. These caches effectively shorten the time spent in system calls such as read, write, and getdents.

Some people say that after a while Linux will automatically release the memory in use. Let's run free again to see whether anything was released.

[root@server test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          5          0          8        174
-/+ buffers/cache:         61        188
Swap:          511          0        511

There is basically no change. So can I release the memory manually? The answer is yes!

/proc is a virtual file system, and we can read and write it as a means of communicating with the kernel. In other words, current kernel behavior can be adjusted by modifying files under /proc. So we can free up memory by adjusting /proc/sys/vm/drop_caches. Do the following:

[root@server test]# cat /proc/sys/vm/drop_caches
0

First, check the value of /proc/sys/vm/drop_caches; it defaults to 0.

[root@server test]# sync

Manually execute the sync command (note: sync runs the sync subroutine. If the system must be stopped, run sync to ensure the integrity of the file system. sync writes all unwritten system buffers to disk, including modified i-nodes, deferred block I/O, and read-write mapped files).
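The effect of sync can be watched through the Dirty field of /proc/meminfo, which counts data waiting to be written back to disk. A sketch, assuming a Linux machine; the file name /tmp/dirty-demo.bin is arbitrary:

```shell
# Create some dirty page-cache data, then flush it with sync.
dd if=/dev/zero of=/tmp/dirty-demo.bin bs=1M count=50 2>/dev/null
grep '^Dirty:' /proc/meminfo   # often non-zero right after the write
sync                           # write all unwritten buffers to disk
grep '^Dirty:' /proc/meminfo   # usually much smaller now
rm -f /tmp/dirty-demo.bin
```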

[root@server test]# echo 3 > /proc/sys/vm/drop_caches
[root@server test]# cat /proc/sys/vm/drop_caches
3

This sets the /proc/sys/vm/drop_caches value to 3.

[root@server test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249         66        182          0          0         11
-/+ buffers/cache:         55        194
Swap:          511          0        511

Run free again: now used is 66MB, free is 182MB, buffers is 0MB, and cached is 11MB. The buffer and cache have been effectively released.

The usage of /proc/sys/vm/drop_caches is explained below:

/proc/sys/vm/drop_caches (since Linux 2.6.16)

Writing to this file causes the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.

To free pagecache, use echo 1 > /proc/sys/vm/drop_caches; to free dentries and inodes, use echo 2 > /proc/sys/vm/drop_caches; to free pagecache, dentries and inodes, use echo 3 > /proc/sys/vm/drop_caches.

Because this is a non-destructive operation and dirty objects are not freeable, the user should run sync(8) first.

The difference between buffer and cache

A buffer is something that has yet to be "written" to disk. A cache is something that has been "read" from the disk and stored for later use.

For a more detailed explanation, refer to: Difference Between Buffer and Cache

As for shared memory (Shared memory), it is mainly used to share data between different processes in a UNIX environment and is one method of inter-process communication. Ordinary applications usually do not use shared memory, and the author has not verified its effect on the equation above. If you are interested, refer to: What is Shared Memory?


Cache is a cache used for buffering between the CPU and memory.

Buffer is the I/O cache, used for buffering between memory and the hard disk.

Cache was originally the CPU cache: the CPU is fast while memory cannot keep up, and some values are used many times, so they are kept in the cache. The main purpose is reuse, and the level-1/level-2 physical caches are fast.

Buffer is mainly used between disk and memory, chiefly to protect the hard disk or to reduce the number of network transfers (for example, a DataSet holding data in memory). Of course it can also improve speed (data is not written to the hard disk immediately, nor read from the hard disk every time it is displayed), and it enables reuse; but the original purpose was mainly to protect the disk.

ASP.NET's cache includes OutputCache and the data cache. The main purpose is reuse and higher speed. OutputCache mainly stores rendered pages, for cases where the same HTML is served many times; it is recommended not to use VaryByParam carelessly, or multiple versions will be saved.

The data cache holds objects such as DataSet, DataTable, and so on.

@ Page Buffer="true" enables buffering: output is shown only after the buffer fills (C file output works the same way; the main purpose is to protect the hard disk), and it can also speed up the next access. On the client browser side, true displays the page all at once (or not at all, never partially), while false displays it piece by piece.

The same applies to network output.

Buffer = true is the default in C file access, the same as in ASP.NET.

It is equivalent to Response.Write(): output is sent when the buffer is full, reducing the number of network transmissions.

The HTML generated by ASP.NET is cached so it does not need to be regenerated; user controls (.ascx) can also be cached as components (an HTML cache). The same goes for DataSet and the data cache.

Both cache and buffer are buffers of a sort. It is better to translate cache as "high-speed cache" (because it mainly accelerates the next access) and buffer as "buffer". Both buffer data, but their purposes differ slightly; the point is to understand them, not to read the words too literally.

The difference between cache and buffer (in Oracle)

1. Buffer is a buffer.

2. Cache is a cache, divided into the library cache, the data dictionary cache, and the database buffer cache. The database buffer cache is used to cache data read from the hard disk.

3. The shared pool contains the shared SQL area and the PL/SQL area, and the database buffer cache has independent sub-caches.

4. The pool is a shared pool, used to store recently executed statements, and so on.

5. cache: A cache is a smaller, higher-speed component that is used to speed up the access to commonly used data stored in a lower-speed, higher-capacity component.

Database buffer cache: the database buffer cache is the portion of the SGA that holds copies of data blocks read from data files. All user processes concurrently connected to the instance share access to the database buffer cache.

The buffer cache is read and written in units of blocks.

The cache saves data that has been read: if a re-read hits (finds the needed data), the hard disk is not touched; if it misses, the disk is read. The data is organized by read frequency, with the most frequently read content placed where it is found most easily, while content that is no longer read is pushed back until it is evicted.

Buffers are designed around disk reads and writes: scattered write operations are batched to reduce disk fragmentation and repeated seeking, thereby improving system performance. Linux has a daemon that periodically empties the buffers (that is, writes them to disk), and you can also empty them manually with the sync command. For example:

I have an ext2 USB flash drive here. I cp a 3MB MP3 onto it, but the drive's light does not blink; only after a while (or after manually running sync) does the light blink. The buffer is also flushed when the device is unmounted, which is why unmounting a device sometimes takes a few seconds.

Modify the number to the right of vm.swappiness in /etc/sysctl.conf to adjust the swap usage policy at the next boot. The number ranges from 0 to 100; the larger it is, the more readily swap is used. The default is 60; you can experiment with it.
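The current policy can also be inspected and changed without a reboot. A sketch; the writes need root, so they are shown commented out:

```shell
cat /proc/sys/vm/swappiness                    # current value; default is 60
# sysctl -w vm.swappiness=10                   # root: takes effect immediately, until reboot
# echo 'vm.swappiness=10' >> /etc/sysctl.conf  # root: persists across reboots
```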

-

Both are data in RAM. Simply put, the buffer holds data about to be written to disk, while the cache holds data that has been read from disk.

Buffers are allocated by various processes and used in areas such as input queues. A simple example: a process needs to read several fields; before all the fields are read in, the process keeps the already-read fields in a buffer.

Cache is often used for disk I/O requests. If multiple processes access the same file, the file is cached for the next access, which improves system performance.



# sync
# echo 1 > /proc/sys/vm/drop_caches
# echo 2 > /proc/sys/vm/drop_caches
# echo 3 > /proc/sys/vm/drop_caches

Cache release:

To free pagecache:
echo 1 > /proc/sys/vm/drop_caches

To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches

To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches

Note: it is best to run sync before releasing, to prevent data loss.

Because of the Linux kernel's design, there is generally no need to deliberately release cache that is in use; its contents also speed up subsequent reading and writing of files.

That is all of this article, "What is the difference between memory buffer and cache in Linux". Thank you for reading! I hope the content shared here helps you.
