Hardware caching strategy

2025-03-01 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Computer memory forms a hierarchy shaped like a pyramid: the higher a level sits, the faster it is, but also the more expensive per byte, and therefore the smaller. To bridge the gap between the high-speed processor and slow memory, each upper level serves as a cache for the level below it.

On most modern CPUs, every memory access goes through layers of caches. The CPU's load/store units (and its instruction fetcher) normally cannot access memory directly at all; this is physical: the core has no datapath wired straight to memory. Instead, the core talks to the level 1 cache (L1), and the L1 cache talks to the next level down. About twenty years ago, the L1 cache could transfer data directly to and from memory; today, with more cache levels added to the design, L1 no longer communicates with memory directly. It talks to the L2 cache, and the L2 cache (or an L3 below it) is what finally communicates with memory.

For example, when the processor needs to operate on a region of memory, it does not go to memory directly. It first checks whether that region is already in the cache; if not, it loads the region into the cache. The processor then reads and writes directly in the cache.

Similarly, for disk data, the operating system uses a region of memory as a disk cache, so reads and writes can be served directly from memory.
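The read path just described (check the cache, fill it on a miss, then serve from the cache) can be sketched as a toy two-level hierarchy. The `CacheLevel` class, addresses, and values here are invented for illustration; real caches work on fixed-size lines, not individual addresses.

```python
MEMORY = {0x1000: 42}   # hypothetical "physical memory"

class CacheLevel:
    def __init__(self, name, lower):
        self.name = name
        self.lower = lower     # next level down: another CacheLevel or MEMORY
        self.lines = {}        # cached copies: address -> value

    def read(self, addr):
        if addr in self.lines:          # hit at this level
            return self.lines[addr]
        # miss: fetch from the level below and keep a copy here
        if isinstance(self.lower, CacheLevel):
            value = self.lower.read(addr)
        else:
            value = self.lower[addr]    # bottom level: plain memory
        self.lines[addr] = value
        return value

l2 = CacheLevel("L2", MEMORY)
l1 = CacheLevel("L1", l2)
print(l1.read(0x1000))  # miss in L1 and L2, filled from memory: prints 42
print(l1.read(0x1000))  # now an L1 hit: prints 42
```

Note that the first read leaves a copy at every level it passed through, which is exactly why the second read never leaves L1.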

The processor reads and writes data in the cache, but the data must eventually reach its original location, and that is where write policies come in.

Write caching strategy

(1) The first strategy is called no-write, meaning the cache does not cache writes at all. A write to cached data skips the cache and goes directly to disk, and the cached copy is marked invalid. A later read must fetch the data from disk again.

(2) The second strategy is called write-through cache: a write updates the cache and the disk file together. This keeps the cache consistent with the disk, so nothing ever needs to be marked invalid, and it is relatively simple to implement.

(3) The third strategy, the one Linux adopts, is called write-back. Under this strategy, the program writes directly to the cache but does not immediately update the disk; instead, the written pages are marked "dirty" and added to a dirty-page list. A writeback process then periodically writes the pages on the dirty list out to disk, bringing the disk back into agreement with the cache, and finally clears the pages' "dirty" flag. Here "dirty" does not mean the data is unclean; it means the data has not yet been synchronized to disk.
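The three write policies can be sketched in a few lines; this is an illustrative model, not Linux's actual page cache, and `disk`, the `Cache` class, and the policy names are invented for the sketch.

```python
disk = {0x10: "old"}   # stands in for the backing store

class Cache:
    def __init__(self, policy):
        self.policy = policy   # "no-write", "write-through", or "write-back"
        self.lines = {}        # addr -> cached value
        self.dirty = set()     # addrs written but not yet flushed to disk

    def write(self, addr, value):
        if self.policy == "no-write":
            disk[addr] = value             # bypass the cache entirely...
            self.lines.pop(addr, None)     # ...and invalidate any cached copy
        elif self.policy == "write-through":
            self.lines[addr] = value
            disk[addr] = value             # cache and disk updated together
        elif self.policy == "write-back":
            self.lines[addr] = value       # only the cache is updated now
            self.dirty.add(addr)           # disk catches up later

    def writeback(self):                   # what the writeback process does
        for addr in self.dirty:
            disk[addr] = self.lines[addr]
        self.dirty.clear()

c = Cache("write-back")
c.write(0x10, "new")
print(disk[0x10])   # still "old": the disk has not been updated yet
c.writeback()
print(disk[0x10])   # "new": the dirty page has been flushed
```

The window between `write` and `writeback` is precisely the window in which a crash loses data under write-back, and the trade-off the three policies make.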

Cache coherence policy

In today's multiprocessor computers, each CPU has its own registers and its own cache. A multithreaded program then faces this problem: a thread on core A modifies data in cache A, but cache B still holds the original copy, so a thread on core B reading from cache B gets stale data. This is the cache coherence problem. (Note that the problem arises precisely because there are multiple caches.)
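The staleness scenario can be made concrete with a toy model in which plain dicts stand in for each core's private cache; the addresses and values are invented.

```python
memory = {0x40: 1}                  # shared memory
cache_a = {0x40: memory[0x40]}      # core A's private cached copy
cache_b = {0x40: memory[0x40]}      # core B's private cached copy

cache_a[0x40] = 2                   # thread A updates its own cache...
memory[0x40] = 2                    # ...and even writes it through to memory

print(cache_b[0x40])  # thread B still sees the stale value: prints 1
```

Without some mechanism that tells cache B its copy is out of date, nothing in this picture ever corrects it.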

Since the problem is caused by having multiple caches, why not let all processors share a single cache? Because then only one processor per cycle could run its instructions through that one first-level cache, which is far too slow. Hence cache coherence protocols.

There are many cache coherence protocols, but most of the computers you deal with every day use a snooping protocol. The basic idea behind snooping is that all memory transfers take place on a shared bus that every processor can see: each cache is private, but memory is a shared resource, and all memory accesses are arbitrated, so only one cache may read or write memory in a given cycle. The insight of snooping is that a cache does not talk to the bus only during its own transfers; it constantly snoops on the bus traffic and tracks what the other caches are doing. So whenever one cache reads or writes memory on behalf of its processor, the other processors are notified and can keep their caches in sync: as soon as one processor writes memory, the others immediately know that the corresponding line in their own caches is now stale.
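Here is a minimal sketch of snoop-based invalidation, assuming write-through caches. The `Bus` and `SnoopingCache` names and the whole-address granularity are invented for illustration; real hardware snoops at cache-line granularity.

```python
class Bus:
    def __init__(self):
        self.caches = []

    def broadcast_write(self, writer, addr):
        # every other cache snoops the write and drops its stale copy
        for cache in self.caches:
            if cache is not writer:
                cache.snoop_invalidate(addr)

class SnoopingCache:
    def __init__(self, bus, memory):
        self.bus, self.memory = bus, memory
        self.lines = {}               # addr -> cached value
        bus.caches.append(self)

    def read(self, addr):
        if addr not in self.lines:    # miss: fetch from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value             # write-through to memory
        self.bus.broadcast_write(self, addr)  # publish the write on the bus

    def snoop_invalidate(self, addr):
        self.lines.pop(addr, None)    # our copy just became stale

memory = {0x40: 1}
bus = Bus()
a, b = SnoopingCache(bus, memory), SnoopingCache(bus, memory)

a.read(0x40); b.read(0x40)   # both caches now hold the line
a.write(0x40, 2)             # A's write is snooped, B's copy invalidated
print(b.read(0x40))          # B misses and re-reads from memory: prints 2
```

The invalidation forces B's next read to miss, which is how B ends up with the fresh value instead of the stale one from the earlier example.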

In write-through mode this is straightforward, because a write is "published" on the bus the moment it happens. But mixing in write-back causes a problem: the data may not actually be written back to physical memory until long after the write instruction executed, and during that window another processor's cache may naively write to the same memory address, creating a conflict. In the write-back model, it is not enough to simply broadcast memory writes to the other processors after the fact; a processor must inform the others before modifying its local cache. Worked out in detail, the simplest scheme that handles write-back is what we usually call the MESI protocol. MESI is an acronym for the four cache-line states: Modified, Exclusive, Shared, and Invalid.

The Exclusive state marks a cache line held by exactly one cache. When a processor wants to write to a line it does not hold exclusively, it must first issue an "I want exclusive access" request on the bus, which tells the other processors to invalidate their copies of that line (if they have any). Only after gaining exclusive access may the processor modify the data; at that point it knows the only copy of the line is in its own cache, so no conflict can occur.
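The transitions described above can be sketched as a simplified state machine that tracks only the MESI state per address; it omits the data itself and the write-back of Modified lines, and the `MesiCache` class and method names are invented.

```python
MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

class MesiCache:
    def __init__(self, bus):
        self.state = {}    # addr -> MESI state (absent means Invalid)
        self.bus = bus     # shared list of every cache on the bus
        bus.append(self)

    def peers(self):
        return [c for c in self.bus if c is not self]

    def get(self, addr):
        return self.state.get(addr, INVALID)

    def read(self, addr):
        if self.get(addr) == INVALID:
            others = any(p.get(addr) != INVALID for p in self.peers())
            # Shared if anyone else holds the line, otherwise Exclusive
            self.state[addr] = SHARED if others else EXCLUSIVE
            for p in self.peers():   # peers lose exclusivity on our read
                if p.get(addr) in (MODIFIED, EXCLUSIVE):
                    p.state[addr] = SHARED

    def write(self, addr):
        if self.get(addr) != MODIFIED:
            # "I want exclusive": every other copy becomes Invalid first
            for p in self.peers():
                p.state.pop(addr, None)
            self.state[addr] = MODIFIED

bus = []
a, b = MesiCache(bus), MesiCache(bus)
a.read(0x80)    # a: Exclusive, since no one else holds the line
b.read(0x80)    # b: Shared; a is downgraded Exclusive -> Shared
b.write(0x80)   # b: Modified; a's copy is invalidated
print(a.get(0x80), b.get(0x80))  # prints: I M
```

Note how the Exclusive state pays off: had `a` written immediately after its first read, it already knew it held the only copy and could have gone to Modified without any bus traffic.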

© 2024 shulou.com SLNews company. All rights reserved.
