A Tutorial on Handwriting an LRU Cache Eviction Algorithm


This article covers how to handwrite an LRU cache eviction algorithm, a problem many people struggle with in practice, so let me walk you through how to handle it. I hope you read it carefully and come away with something useful!

Background of the handwritten LRU cache eviction algorithm

In our increasingly efficient world, we have grown impatient with waiting: a web page that will not refresh is annoying, and a computer that is slow to open programs is even more so. So what can we do? Technology exists to serve us, so today let's talk about caching, and use a data structure we already know well, the linked list, to implement the LRU cache eviction algorithm.

Before learning how to implement the LRU cache eviction algorithm with linked lists, let's pose a few questions and think them over. The questions are as follows:

What is caching, and what is its role?

What are the common cache eviction strategies?

How can linked lists be used to implement the LRU cache eviction algorithm, what are its characteristics, and how can it be optimized?

1. What is caching, and what is its role?

Caching can be simply understood as keeping a copy of data so that it can be accessed quickly later. Take a computer as an example: when the CPU wants to access a piece of data in memory, it first looks in the cache. If the data is found there, it is used directly; if not, the CPU has to fetch it from memory.

Similarly, in a database access scenario, when a system needs to query a piece of data, it can first check the cache. On a hit, the cached result is returned directly; on a miss, the database is queried and the result is put into the cache, so the next request hits the cache and returns directly without querying the database again.
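
Here is a minimal sketch of that read-through order in Java; the CacheAsideDemo class name and the loadFromDb parameter are illustrative stand-ins, not part of any real framework:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Check the cache first; fall back to the slower source on a miss,
// then populate the cache so the next request hits.
public class CacheAsideDemo {
    private final Map<String, String> cache = new HashMap<>();

    public String get(String key, Function<String, String> loadFromDb) {
        String value = cache.get(key);
        if (value != null) {
            return value;              // hit: no database round trip
        }
        value = loadFromDb.apply(key); // miss: query the slower store
        cache.put(key, value);         // save a copy for next time
        return value;
    }
}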

These two examples reveal a common ordering in every scenario: cache first, then memory; cache first, then database. The cache itself occupies memory, however, so caching is a classic trade of space for time: it sacrifices some data freshness in exchange for the efficiency the computer needs.

If you think about it, there are many examples of caching in everyday development:

Operating system cache: reduces interaction with the disk.

Database cache: reduces queries to the database.

Web server cache: reduces requests to the application server.

Client browser cache: reduces requests to the website.

2. What are the cache eviction strategies?

The essence of caching is trading space for time, so cache capacity is necessarily limited. When the cache is full, which data should be evicted and which retained? That is decided by the cache eviction strategy.

In fact, there are three commonly used cache eviction strategies: First In First Out (FIFO); Least Frequently Used (LFU), which evicts the pages visited least often within a period of time; and Least Recently Used (LRU), which evicts the pages that have gone unused the longest.

These algorithms perform differently at different cache levels, and the choice should depend on the specific scenario.

2.1 FIFO algorithm

The FIFO algorithm is a first-in, first-out algorithm, usually implemented with a queue. Its design principle for caching is that the data that entered the cache first should be evicted first.

Newly accessed data is inserted at the tail of the FIFO queue, and data moves in order from the tail toward the head.

When the queue is full, the data at the head of the queue is deleted. A minimal sketch follows.
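
Here is a minimal FIFO cache sketch in Java, assuming integer keys; the class and method names are illustrative:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class FifoCache {
    private final int capacity;
    private final Deque<Integer> queue = new ArrayDeque<>(); // head = oldest
    private final Set<Integer> members = new HashSet<>();    // O(1) membership test

    public FifoCache(int capacity) {
        this.capacity = capacity;
    }

    public void access(int key) {
        if (members.contains(key)) {
            return;                            // FIFO does not reorder on repeat access
        }
        if (queue.size() >= capacity) {
            members.remove(queue.pollFirst()); // queue full: evict the head (oldest)
        }
        queue.addLast(key);                    // new data joins at the tail
        members.add(key);
    }
}

Note that, unlike LRU below, accessing an existing entry changes nothing here: eviction order depends only on insertion order.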

2.2 LRU algorithm

The LRU algorithm evicts data according to its access history, and is usually implemented with a linked list. Its design principle for caching is that data accessed recently has a high chance of being accessed again in the future.

Newly added data is inserted at the head of the list.

When data in the list is accessed, it is moved to the head, so the list stays ordered from most to least recently used.

When data needs to be evicted, the node at the tail of the list is deleted.

3. How can linked lists implement cache eviction, what are its characteristics, and how can it be optimized?

The sections above covered the concept of caching and its eviction strategies. Among them, the LRU algorithm is frequently examined in written tests and interviews; during my autumn recruitment, many companies asked me to handwrite it. To help you avoid that pitfall, let's handwrite an LRU cache eviction algorithm below.

We all know that there is more than one form of linked list. Which one should we choose?

Think for three minutes.

All right, time to reveal the answer!

In fact, linked lists come in three varieties according to their link structure: singly linked lists, circular linked lists, and doubly linked lists.

Singly linked list

Each node contains only one pointer: the successor pointer.

A singly linked list has two special nodes, the head node and the tail node. The whole list is identified by the address of the head node, and the successor pointer of the tail node points to null.

Performance: inserting and deleting a node takes O(1) time, while searching takes O(n).

Circular linked list

It is identical to the singly linked list, except that the successor pointer of the tail node points back to the head node.

It is suitable for data with cyclic structure, such as the Josephus problem.

Doubly linked list

In addition to data, each node stores two pointers: one to the previous node (the predecessor pointer prev) and one to the next node (the successor pointer next).

The prev pointer of the head node and the next pointer of the tail node both point to null.

One advantage of the doubly linked list over the singly linked list is that finding a node's predecessor takes O(1) time, whereas a singly linked list must walk down from the head node, which is O(n). Insertions and deletions benefit in the same way.

You may wonder: isn't insertion and deletion in a singly linked list O(1)?

Yes, but in practice, to insert or delete we usually have to find the node first and only then insert or delete. So the real cost is an O(n) search followed by an O(1) pointer change.

If you are not yet comfortable with linked-list problems, you can first take a look at my previous algorithm article, "Brushing the LeetCode linked-list problems, I discovered a secret".

Because we need a delete operation, and deleting a node requires manipulating not only the node's own pointer but also its predecessor's pointer, a doubly linked list lets us reach the predecessor directly, keeping the operation at O(1). Therefore, using a doubly linked list as the structure for the LRU cache eviction algorithm is more efficient, as the sketch below shows.
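
A minimal sketch of that O(1) deletion, assuming sentinel head and tail nodes so that a node's neighbors are never null; the Node class here is illustrative, not the article's final code:

public class UnlinkDemo {
    static class Node {
        int val;
        Node prev, next;
        Node(int val) { this.val = val; }
    }

    // Unlink a node in O(1): since prev is available on the node itself,
    // no O(n) search for the predecessor is needed.
    static void remove(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }
}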

Algorithm idea

Maintain a doubly linked list holding all cached values, with the oldest values at the tail of the list.

When the accessed value is in the list: find the value, delete it, and re-insert it at the head of the list (keeping the list ordered from newest to oldest).

When the accessed value is not in the list: if the list is full, delete the value at the tail and insert the new value at the head; if the list is not full, insert the new value at the head directly.

3.1 LRU cache eviction algorithm

Wang Zheng's Geek Time course "The Beauty of Data Structures and Algorithms" gives an LRU cache eviction algorithm based on an ordered singly linked list. The code is as follows:

import java.util.Scanner;

public class LRUBaseLinkedList<T> {

    /** Default list capacity */
    private final static Integer DEFAULT_CAPACITY = 10;

    /** Head sentinel node */
    private SNode<T> headNode;

    /** Current list length */
    private Integer length;

    /** List capacity */
    private Integer capacity;

    public LRUBaseLinkedList() {
        this.headNode = new SNode<>();
        this.capacity = DEFAULT_CAPACITY;
        this.length = 0;
    }

    public LRUBaseLinkedList(Integer capacity) {
        this.headNode = new SNode<>();
        this.capacity = capacity;
        this.length = 0;
    }

    public void add(T data) {
        SNode<T> preNode = findPreNode(data);

        if (preNode != null) {
            // Already in the list: delete the old node, re-insert at the head.
            deleteElemOptim(preNode);
            insertElemAtBegin(data);
        } else {
            if (length >= this.capacity) {
                // Cache full: delete the tail node first.
                deleteElemAtEnd();
            }
            insertElemAtBegin(data);
        }
    }

    /**
     * Delete the node after preNode.
     */
    private void deleteElemOptim(SNode<T> preNode) {
        SNode<T> temp = preNode.getNext();
        preNode.setNext(temp.getNext());
        temp = null;
        length--;
    }

    /**
     * Insert data at the head of the list.
     */
    private void insertElemAtBegin(T data) {
        SNode<T> next = headNode.getNext();
        headNode.setNext(new SNode<>(data, next));
        length++;
    }

    /**
     * Find the node preceding the node that holds data.
     */
    private SNode<T> findPreNode(T data) {
        SNode<T> node = headNode;
        while (node.getNext() != null) {
            if (data.equals(node.getNext().getElement())) {
                return node;
            }
            node = node.getNext();
        }
        return null;
    }

    /**
     * Delete the tail node.
     */
    private void deleteElemAtEnd() {
        SNode<T> ptr = headNode;
        // Empty list: return directly.
        if (ptr.getNext() == null) {
            return;
        }
        // Walk to the second-to-last node.
        while (ptr.getNext().getNext() != null) {
            ptr = ptr.getNext();
        }
        SNode<T> tmp = ptr.getNext();
        ptr.setNext(null);
        tmp = null;
        length--;
    }

    private void printAll() {
        SNode<T> node = headNode.getNext();
        while (node != null) {
            System.out.print(node.getElement() + ",");
            node = node.getNext();
        }
        System.out.println();
    }

    public static class SNode<T> {
        private T element;
        private SNode<T> next;

        public SNode(T element) {
            this.element = element;
        }

        public SNode(T element, SNode<T> next) {
            this.element = element;
            this.next = next;
        }

        public SNode() {
            this.next = null;
        }

        public T getElement() { return element; }

        public void setElement(T element) { this.element = element; }

        public SNode<T> getNext() { return next; }

        public void setNext(SNode<T> next) { this.next = next; }
    }

    public static void main(String[] args) {
        LRUBaseLinkedList<Integer> list = new LRUBaseLinkedList<>();
        Scanner sc = new Scanner(System.in);
        while (true) {
            list.add(sc.nextInt());
            list.printAll();
        }
    }
}

This code must traverse the list whether or not the cache is full, so cache access takes O(n) time.

3.2 Optimizing LRU with a hash table

In fact, this idea can be optimized further: replace the singly linked list with a doubly linked list and introduce a hash table.

The doubly linked list supports finding a node's predecessor, which keeps deletion at O(1).

The hash table records the location of each piece of data, which reduces cache access to O(1).

Hash tables are fast to look up, but their entries have no fixed order; linked lists maintain order, and insertion and deletion are fast, but lookup is slow. Combining the two yields a new data structure: the hash linked list (LinkedHashMap), as the sketch below shows.
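
As an aside, if library classes are allowed, Java's own LinkedHashMap already implements this combined structure, and an LRU cache takes only a few lines. A minimal sketch (the class name is illustrative; 16 and 0.75f are just the map's default initial capacity and load factor):

import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashMapLru<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LinkedHashMapLru(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: order entries from least to most recently used
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the eldest entry once over capacity
    }
}

In an interview, though, you will usually be asked to build the structure by hand, which is exactly what the next exercise does.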


You can practice this with the LeetCode problem "LRU Cache" (146). The problem statement is as follows:

Problem:

Design and implement an LRU (least recently used) cache using the data structures you know.

Implement the LRUCache class:

LRUCache(int capacity) initializes the LRU cache with a positive integer capacity.

int get(int key) returns the value of the key if it exists in the cache, and -1 otherwise.

void put(int key, int value) updates the value if the key already exists; otherwise it inserts the key-value pair. When the cache reaches its capacity, it should evict the least recently used entry before writing new data, leaving room for the new value.

Approach:

Our approach is a hash table plus a doubly linked list.

The hash table satisfies the problem's O(1) time requirement, and the doubly linked list maintains the access order.

Hash table key and value types: the key of the hash table stores the input key, and the value stores the corresponding node of the doubly linked list.

Each node of the doubly linked list must store the key in addition to the value, because when evicting the least recently used entry we need the node to locate the key-value pair to remove from the hashmap.

Some operations: in the doubly linked list, nodes closer to the tail are the more recently accessed ones.

A newly added node is placed at the tail of the list: addNodeToLast(node).

If the capacity limit is reached, remove the least recently used entry: removeNode(head.next).

If data is accessed, for example by get or put, move its node to the tail of the list: moveNodeToLast(node).

For convenience, a sentinel head node and a sentinel tail node are defined at the two ends of the doubly linked list.

Code

import java.util.HashMap;

class LRUCache {
    private int capacity;
    private HashMap<Integer, ListNode> hashmap;
    private ListNode head;
    private ListNode tail;

    private class ListNode {
        int key;
        int val;
        ListNode prev;
        ListNode next;

        public ListNode() {}

        public ListNode(int key, int val) {
            this.key = key;
            this.val = val;
        }
    }

    public LRUCache(int capacity) {
        this.capacity = capacity;
        hashmap = new HashMap<>();
        // Sentinel head and tail nodes; real nodes live between them.
        head = new ListNode();
        tail = new ListNode();
        head.next = tail;
        tail.prev = head;
    }

    // Unlink a node from the list in O(1).
    private void removeNode(ListNode node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    // Insert a node just before the tail sentinel (the most recently used end).
    private void addNodeToLast(ListNode node) {
        node.prev = tail.prev;
        node.prev.next = node;
        node.next = tail;
        tail.prev = node;
    }

    // Mark a node as most recently used.
    private void moveNodeToLast(ListNode node) {
        removeNode(node);
        addNodeToLast(node);
    }

    public int get(int key) {
        if (hashmap.containsKey(key)) {
            ListNode node = hashmap.get(key);
            moveNodeToLast(node);
            return node.val;
        } else {
            return -1;
        }
    }

    public void put(int key, int value) {
        if (hashmap.containsKey(key)) {
            ListNode node = hashmap.get(key);
            node.val = value;
            moveNodeToLast(node);
            return;
        }
        if (hashmap.size() == capacity) {
            // Evict the least recently used entry (the node right after head).
            hashmap.remove(head.next.key);
            removeNode(head.next);
        }
        ListNode node = new ListNode(key, value);
        hashmap.put(key, node);
        addNodeToLast(node);
    }
}
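
To see the class in action, here is a small usage sketch; the capacity and key-value pairs are arbitrary:

public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2); // room for two entries
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(1)); // prints 1; key 1 becomes most recent
        cache.put(3, 3);                  // evicts key 2, the least recently used
        System.out.println(cache.get(2)); // prints -1; key 2 was evicted
    }
}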

That concludes this tutorial on handwriting an LRU cache eviction algorithm. Thank you for reading!
