How to Implement the LRU Cache Algorithm

This article explains how to implement the LRU (Least Recently Used) cache algorithm: first by hand, with a HashMap and a doubly linked list, and then more concisely with Java's LinkedHashMap.
I. What Is Caching
The cache discussed here is a broad concept. In the computer storage hierarchy, a faster layer can be regarded as a cache for the slower layer below it: the CPU cache caches main memory, main memory caches the disk, the disk caches the network, and so on.
Caching eases the tension between the speed and the capacity of storage, but it is not as simple as it looks: a badly designed cache replacement algorithm will not speed up access and can even slow the whole system down.
In essence, caching works because of the locality of programs and data. Programs tend to execute instructions in sequence, and they read and write the same contiguous regions of memory repeatedly. These properties let us cache frequently used data and thereby speed up reads and writes.
The size of a cache is fixed, so it should hold only the data most likely to be accessed. Since the future is unknown, we can only predict from past access patterns, which is why many cache replacement policies exist. This article introduces a simple one: the Least Recently Used (LRU) algorithm.
II. Implementing LRU
Let's use memory access as an example of how a cache works. Assume the cache has a fixed size and starts empty. On every read, we first look up the requested data in the cache: if it is there, the cache hits and the data is returned directly; if not, the cache misses, the data is read from memory, and a copy is added to the cache. When adding data to a full cache, we must first evict the entry whose last access is the oldest. This replacement policy is called LRU.
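A quick illustrative trace (assuming a capacity of 3): reading 1, 2, 3 fills the cache; reading 2 again is a hit and makes 2 the most recently used entry; reading 4 then misses, and because the cache is full, entry 1, which was accessed longest ago, is evicted to make room.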
When implementing LRU we care about both read performance and write performance. Ideally, reading or updating an entry should take O(1) time; that is, both operations should have constant time complexity.
A HashMap immediately comes to mind: looking up data by key takes O(1). However, a HashMap alone cannot update the cache in O(1), because deciding which entry has the oldest access time would require scanning every cached entry.
Therefore, we need a data structure that keeps entries ordered by access time while still allowing constant-time random access.
This can be achieved with a HashMap plus a doubly linked list. The HashMap guarantees O(1) access to an entry by key, and the doubly linked list threads the entries together in order of access time. A doubly linked list is chosen over a singly linked one because any node can be unlinked from the middle in O(1) time given a reference to it, without traversing from the head.
In the original figure (captioned "HashMap + doubly linked list"), the black part is the HashMap structure and the red arrows are the forward links of the doubly linked list (the reverse links are not drawn); the access order of the data is 1 -> 3 -> 5 -> 6 -> 10. After each access, we only need to relink the visited node to the front of the list.

[Figure: HashMap + doubly linked list]
The implementation code is as follows; each method and member variable carries a comment, so little extra explanation is needed.
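The original listing did not survive in this copy, so the following is a minimal reconstruction of the HashMap + doubly linked list design described above. The class and member names (LRUCache, Node, get, put) are illustrative choices, not necessarily the article's originals, and a capacity greater than zero is assumed.

```java
import java.util.HashMap;

// A minimal LRU cache sketch: a HashMap for O(1) lookup by key,
// plus a doubly linked list ordered by access time for O(1) eviction.
// Assumes capacity > 0.
public class LRUCache<K, V> {

    // Doubly linked list node holding one cache entry.
    private static class Node<K, V> {
        K key;
        V value;
        Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final HashMap<K, Node<K, V>> map = new HashMap<>();
    // Sentinel head/tail nodes simplify insertion and removal at the ends.
    private final Node<K, V> head = new Node<>(null, null); // MRU side
    private final Node<K, V> tail = new Node<>(null, null); // LRU side

    public LRUCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    // Unlink a node from the list in O(1).
    private void unlink(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    // Insert a node right after head, the most recently used position.
    private void addFirst(Node<K, V> node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    // Return the value for key, or null on a miss; a hit moves the
    // entry to the most recently used position.
    public V get(K key) {
        Node<K, V> node = map.get(key);
        if (node == null) return null; // cache miss
        unlink(node);
        addFirst(node);
        return node.value;
    }

    // Insert or update an entry; when the cache is full, evict the
    // least recently used entry (the one just before tail).
    public void put(K key, V value) {
        Node<K, V> node = map.get(key);
        if (node != null) {            // update existing entry
            node.value = value;
            unlink(node);
            addFirst(node);
            return;
        }
        if (map.size() == capacity) {  // evict from the LRU side
            Node<K, V> oldest = tail.prev;
            unlink(oldest);
            map.remove(oldest.key);
        }
        node = new Node<>(key, value);
        map.put(key, node);
        addFirst(node);
    }
}
```

Both get and put touch only the HashMap and a constant number of list links, so both run in O(1), as required.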
It is worth mentioning that the Java API already provides a class with the functionality we need: LinkedHashMap. Internally it, too, is implemented with a HashMap plus a doubly linked list, and using it makes the LRU implementation much more concise.
You only need to override LinkedHashMap's removeEldestEntry method to return true when the cache exceeds its capacity, and the eldest entry will be removed automatically.
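A minimal sketch of this approach follows; the class name LinkedHashMapLRU is an illustrative choice. The constructor's third argument, accessOrder = true, tells LinkedHashMap to keep entries in access order rather than insertion order, which is what makes the eldest entry the least recently used one.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// An LRU cache built directly on LinkedHashMap.
public class LinkedHashMapLRU<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public LinkedHashMapLRU(int capacity) {
        // 0.75f is the default load factor; true enables access order.
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    // Called by LinkedHashMap after every insertion; returning true
    // removes the eldest (least recently used) entry automatically.
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

For example, with a capacity of 2:

```java
LinkedHashMapLRU<Integer, String> cache = new LinkedHashMapLRU<>(2);
cache.put(1, "a");
cache.put(2, "b");
cache.get(1);      // key 1 becomes the most recently used
cache.put(3, "c"); // evicts key 2, the least recently used
```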