
How to Understand the LRU Algorithm


This article introduces the basics of the LRU algorithm. Many people run into this topic in practice, so let the editor walk you through how to handle these situations. I hope you read it carefully and get something out of it.

01. Preface

We often use a cache to speed up data queries. Because cache capacity is limited, when the cache is full we have to delete some data to make room for new data. Cached data cannot be deleted at random; in general, it is removed according to some eviction algorithm. Common eviction algorithms are LRU, LFU, and FIFO.

02. Introduction to LRU

LRU is an acronym for Least Recently Used. The algorithm assumes that recently used data is hot and is very likely to be used again soon, while data that has rarely been used recently is unlikely to be needed next. When the cache is full, the least recently used data is evicted first.

Suppose the data currently in the cache is as shown in the figure:

Here we call the first node in the list the head node and the last node the tail node.

When the cache is asked for the data with key=1, the LRU algorithm moves node 1 to the head of the list while the other nodes stay in place, as shown in the figure.

Next we insert a node with key=8. The cache has reached its capacity, so some data must be deleted before the new node can be added. Because every query moves the accessed data to the head, data that is never queried sinks toward the tail; the tail node can therefore be regarded as the least recently used data, so it is the one to delete.

Then the new data is added directly at the head of the list.

To summarize, the LRU algorithm works in three steps (a minimal sketch follows this list):

New data is inserted at the head of the list.

When cached data is hit, it is moved to the head of the list.

When the cache is full, the data at the tail of the list is removed.
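As an aside before the hand-rolled version in the next section, these three steps can be sketched with Java's LinkedHashMap constructed in access order. This is only a minimal illustration; the class name SimpleLruCache is an assumption made for the example, not part of the original article.

import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public SimpleLruCache(int capacity) {
        // accessOrder = true: every get() moves the hit entry to the most
        // recently used end of the internal list, mirroring "move to the head".
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // When the cache exceeds its capacity, drop the least recently used entry.
        return size() > capacity;
    }
}

A new SimpleLruCache<Integer, Integer>(2) behaves like the cache described above: inserting a third key evicts the entry that has gone unaccessed the longest.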

03. Implementation of LRU algorithm

As the example above shows, the LRU algorithm needs to add nodes at the head and delete nodes at the tail. Adding or deleting a node in a linked list is O(1), which makes a linked list a very suitable container for the cached data. However, an ordinary singly linked list cannot be used on its own; it has several shortcomings:

Getting the data of an arbitrary node requires traversing from the head node, so lookup is O(n).

When moving an intermediate node to the head, we need to know that node's predecessor, and a singly linked list has to be traversed again to find it.

These problems can be solved by combining the list with other data structures.

Using a hash table to store the nodes reduces lookup to O(1). The node-movement problem is solved by giving each node a predecessor pointer, turning the singly linked list into a doubly linked list.

In summary, a combination of a doubly linked list and a hash table is used; the data structure is shown in the figure:

Two sentinel nodes that store no data are added to the doubly linked list. With sentinel nodes, adding and deleting nodes no longer requires handling the boundary case where a neighboring node does not exist, which simplifies the programming and reduces code complexity.

The implementation of the LRU algorithm is shown below. For simplicity, both key and value are of type int.

import java.util.HashMap;
import java.util.Map;

public class LRUCache {

    Entry head, tail;
    int capacity;
    int size;
    Map<Integer, Entry> cache;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        // initialize the linked list
        initLinkedList();
        size = 0;
        cache = new HashMap<>(capacity + 2);
    }

    /**
     * Returns -1 if the node does not exist.
     * If present, moves the node to the head and returns its value.
     *
     * @param key
     * @return
     */
    public int get(int key) {
        Entry node = cache.get(key);
        if (node == null) {
            return -1;
        }
        // move the node to the head
        moveToHead(node);
        return node.value;
    }

    /**
     * Adds the node at the head. If the capacity is full, removes the tail node first.
     *
     * @param key
     * @param value
     */
    public void put(int key, int value) {
        Entry node = cache.get(key);
        if (node != null) {
            node.value = value;
            moveToHead(node);
            return;
        }
        // not present: delete the tail node if full, then add at the head
        if (size == capacity) {
            Entry lastNode = tail.pre;
            deleteNode(lastNode);
            cache.remove(lastNode.key);
            size--;
        }
        // add the new node at the head
        Entry newNode = new Entry();
        newNode.key = key;
        newNode.value = value;
        addNode(newNode);
        cache.put(key, newNode);
        size++;
    }

    private void moveToHead(Entry node) {
        // first detach the node from its current position
        deleteNode(node);
        addNode(node);
    }

    private void addNode(Entry node) {
        head.next.pre = node;
        node.next = head.next;
        node.pre = head;
        head.next = node;
    }

    private void deleteNode(Entry node) {
        node.pre.next = node.next;
        node.next.pre = node.pre;
    }

    public static class Entry {
        public Entry pre;
        public Entry next;
        public int key;
        public int value;

        public Entry(int key, int value) {
            this.key = key;
            this.value = value;
        }

        public Entry() {
        }
    }

    private void initLinkedList() {
        head = new Entry();
        tail = new Entry();
        head.next = tail;
        tail.pre = head;
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(1));
        cache.put(3, 3);
        System.out.println(cache.get(2));
    }
}
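Running the main method prints 1 and then -1: get(1) moves key 1 to the head, so when key 3 is inserted into the full cache, key 2 sits at the tail as the least recently used entry and is evicted.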

04. Analysis of LRU algorithm

The cache hit rate is a very important metric for a cache system. If the hit rate is too low, queries fall back to the database and increase the load on it.

Based on the analysis above, the advantages and disadvantages of the LRU algorithm are as follows.

The advantage of the LRU algorithm is that it is easy to implement, and it performs very well for hot data.

The disadvantage is that occasional batch operations, such as a batch query of historical data, can push the hot data out of the cache. This cache pollution lowers the hit rate and slows down normal data queries.

05. Improvement scheme of LRU algorithm

The following scheme is based on the improved LRU algorithm used by MySQL InnoDB.

Split the linked list into two parts, a hot data area and a cold data area, as shown in the figure.

After the improvement, the algorithm flow is as follows:

If the accessed data is in the hot data area, it is moved to the head of the hot data area, just as in the original LRU algorithm.

When inserting data, if the cache is full, the tail node is evicted first. The new data is then inserted at the head of the cold data area.

Each time data in the cold data area is accessed, the following check is made:

If the data has been in the cache for longer than a specified time, for example 1 s, it is moved to the head of the hot data area.

If the data has been in the cache for less than the specified time, its position stays unchanged.

For occasional batch queries, the data only enters the cold data area and is soon evicted. The data in the hot data area is unaffected, which solves the hit-rate degradation problem of the plain LRU algorithm.
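The flow above can be sketched in Java. This is only a rough illustration of the hot/cold split, not the InnoDB implementation: the class name ColdHotLruCache, the use of two LinkedHashMap regions, and the promoteAfterMillis parameter are assumptions made for the example.

import java.util.LinkedHashMap;

public class ColdHotLruCache {

    private static final class ColdEntry {
        final int value;
        final long insertedAt;

        ColdEntry(int value, long insertedAt) {
            this.value = value;
            this.insertedAt = insertedAt;
        }
    }

    private final int capacity;
    private final long promoteAfterMillis;
    // Hot area: access-ordered LinkedHashMap, so a hit moves the entry to the
    // most recently used end, just like the plain LRU list.
    private final LinkedHashMap<Integer, Integer> hot = new LinkedHashMap<>(16, 0.75f, true);
    // Cold area: insertion-ordered, so the eldest entry is the eviction victim.
    private final LinkedHashMap<Integer, ColdEntry> cold = new LinkedHashMap<>();

    public ColdHotLruCache(int capacity, long promoteAfterMillis) {
        this.capacity = capacity;
        this.promoteAfterMillis = promoteAfterMillis;
    }

    public int get(int key) {
        Integer hotValue = hot.get(key);
        if (hotValue != null) {
            return hotValue; // hot hit: get() already moved it to the most recently used end
        }
        ColdEntry entry = cold.get(key);
        if (entry == null) {
            return -1;
        }
        // Promote only if the entry has stayed in the cold area long enough,
        // so a one-off batch scan cannot flood the hot area.
        if (System.currentTimeMillis() - entry.insertedAt >= promoteAfterMillis) {
            cold.remove(key);
            hot.put(key, entry.value);
        }
        return entry.value;
    }

    public void put(int key, int value) {
        if (hot.containsKey(key)) {
            hot.put(key, value);
            return;
        }
        if (!cold.containsKey(key) && hot.size() + cold.size() >= capacity) {
            evictOne();
        }
        // New data always enters the cold area first.
        cold.put(key, new ColdEntry(value, System.currentTimeMillis()));
    }

    private void evictOne() {
        if (!cold.isEmpty()) {
            // Evict the oldest cold entry first.
            Integer victim = cold.keySet().iterator().next();
            cold.remove(victim);
        } else {
            // Otherwise evict the least recently used hot entry.
            Integer victim = hot.keySet().iterator().next();
            hot.remove(victim);
        }
    }
}

In this sketch the hot and cold areas share a single capacity, eviction prefers the cold side, and a cold entry is promoted only after it has been cached longer than promoteAfterMillis, matching the 1 s example above.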

This is the end of "How to Understand the LRU Algorithm". Thank you for reading. If you want to learn more about the industry, you can follow the site, where the editor will keep publishing practical articles.
