2025-02-25 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article introduces the principle and implementation of LRU in detail; interested readers may find it a useful reference.
Hard disks have large capacity but slow access; memory has little space but fast access. So how do we implement an eviction mechanism for the limited memory? Common cache-eviction algorithms include LFU, LRU, ARC, FIFO, and MRU.
Least Recently Used (LRU): standard operating system textbooks usually demonstrate the principle of LRU as follows. Assume memory can hold only 3 pages, and pages are accessed in the order 7 0 1 2 0 3 0 4. If we describe memory as a stack ordered by access time, the page on top is the most recently accessed and the page at the bottom is the least recently accessed; that is how LRU works.
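To make the textbook demonstration concrete, the access sequence can be replayed in a short Java sketch (the class and method names here are my own, not from any textbook); it uses a LinkedHashMap in access order as the "stack" of pages:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PageLruDemo {
    /** Replays the page accesses against `frames` frames; returns "faults:finalFrames". */
    static String simulate(final int frames, int[] accesses) {
        // accessOrder=true keeps entries ordered from least to most recently used
        LinkedHashMap<Integer, Integer> memory =
                new LinkedHashMap<Integer, Integer>(frames, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                        return size() > frames; // evict the LRU page when over capacity
                    }
                };
        int faults = 0;
        for (int page : accesses) {
            if (!memory.containsKey(page)) faults++; // page fault: page not resident
            memory.put(page, page);                  // load the page / touch it
        }
        return faults + ":" + memory.keySet();
    }

    public static void main(String[] args) {
        // the textbook sequence from the article, with 3 page frames
        System.out.println(simulate(3, new int[]{7, 0, 1, 2, 0, 3, 0, 4}));
        // prints 6:[3, 0, 4] -- 6 page faults, frames ordered LRU..MRU
    }
}
```

The sequence incurs 6 page faults, and the surviving pages, from least to most recently used, are 3, 0, 4.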
But if we design an LRU-based cache ourselves this way, there will be problems: keeping memory physically sorted by access time requires a lot of memory-copy operations, so the performance is definitely unacceptable.
So how do we design an LRU cache in which both put and get are O(1)? We need to maintain the access order, but it cannot be reflected by physically sorting memory. One solution is a doubly linked list.
Implementing LRU with a HashMap and a doubly linked list
The overall design idea: use a HashMap to store the keys, so that saving and looking up a key both take O(1), while each HashMap value points to a node of the doubly linked list that implements the LRU order, as shown in the figure.
LRU storage is based on the doubly linked list; the following figure demonstrates how it works, where head is the list's header and tail is its tail. The capacity of the LRU is set in advance. When the store is full, the node at the tail can be evicted in O(1) time. Every time data is added or accessed, a new node can be inserted at the head, or an existing node moved to the head, also in O(1).
save("key1", 7)
save("key2", 0)
save("key3", 1)
save("key4", 2)
get("key2")
save("key5", 3)
get("key2")
save("key6", 4)
The following shows how an LRU store with a capacity of 3 changes during these saves and accesses. To keep the diagram simple, the HashMap part is not shown; only the LRU doubly linked list described above is. For the operation sequence above, the linked list changes as follows:
(s = save, g = get)
To summarize the core operations:
save(key, value): first look up the node for key in the HashMap. If the node exists, update its value and move it to the head of the list. If it does not exist, construct a new node and insert it at the head; if the LRU is over capacity, evict the node at the tail and remove its key from the HashMap.
get(key): find the linked-list node through the HashMap. By the LRU principle this node has just been accessed, so move it to the head of the list, then return the cached value.
A complete Java reference implementation follows.
class DLinkedNode {
    String key;
    int value;
    DLinkedNode pre;
    DLinkedNode post;
}
LRU Cache
public class LRUCache {
    private Hashtable<String, DLinkedNode> cache = new Hashtable<String, DLinkedNode>();
    private int count;
    private int capacity;
    private DLinkedNode head, tail;

    public LRUCache(int capacity) {
        this.count = 0;
        this.capacity = capacity;
        head = new DLinkedNode();
        head.pre = null;
        tail = new DLinkedNode();
        tail.post = null;
        head.post = tail;
        tail.pre = head;
    }

    public int get(String key) {
        DLinkedNode node = cache.get(key);
        if (node == null) {
            return -1; // should raise exception here.
        }
        // move the accessed node to the head
        this.moveToHead(node);
        return node.value;
    }

    public void set(String key, int value) {
        DLinkedNode node = cache.get(key);
        if (node == null) {
            DLinkedNode newNode = new DLinkedNode();
            newNode.key = key;
            newNode.value = value;
            this.cache.put(key, newNode);
            this.addNode(newNode);
            ++count;
            if (count > capacity) {
                // pop the tail
                DLinkedNode tail = this.popTail();
                this.cache.remove(tail.key);
                --count;
            }
        } else {
            // update the value
            node.value = value;
            this.moveToHead(node);
        }
    }

    /** Always add the new node right after head. */
    private void addNode(DLinkedNode node) {
        node.pre = head;
        node.post = head.post;
        head.post.pre = node;
        head.post = node;
    }

    /** Remove an existing node from the linked list. */
    private void removeNode(DLinkedNode node) {
        DLinkedNode pre = node.pre;
        DLinkedNode post = node.post;
        pre.post = post;
        post.pre = pre;
    }

    /** Move a node in between to the head. */
    private void moveToHead(DLinkedNode node) {
        this.removeNode(node);
        this.addNode(node);
    }

    /** Pop the current tail. */
    private DLinkedNode popTail() {
        DLinkedNode res = tail.pre;
        this.removeNode(res);
        return res;
    }
}
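As a sanity check, the save/get sequence from the diagrams can be replayed against this cache. The demo below inlines compact copies of the DLinkedNode and LRUCache classes above so that it compiles standalone; the class name LRUCacheDemo is my own.

```java
import java.util.Hashtable;

public class LRUCacheDemo {
    // Compact copies of the classes above, inlined so this demo is self-contained.
    static class DLinkedNode { String key; int value; DLinkedNode pre, post; }

    static class LRUCache {
        private final Hashtable<String, DLinkedNode> cache = new Hashtable<String, DLinkedNode>();
        private int count;
        private final int capacity;
        private final DLinkedNode head = new DLinkedNode(), tail = new DLinkedNode();

        LRUCache(int capacity) { this.capacity = capacity; head.post = tail; tail.pre = head; }

        int get(String key) {
            DLinkedNode node = cache.get(key);
            if (node == null) return -1;
            moveToHead(node); // the node was just accessed
            return node.value;
        }

        void set(String key, int value) {
            DLinkedNode node = cache.get(key);
            if (node == null) {
                node = new DLinkedNode();
                node.key = key;
                node.value = value;
                cache.put(key, node);
                addNode(node);
                if (++count > capacity) { cache.remove(popTail().key); --count; } // evict tail
            } else {
                node.value = value;
                moveToHead(node);
            }
        }

        private void addNode(DLinkedNode n) { n.pre = head; n.post = head.post; head.post.pre = n; head.post = n; }
        private void removeNode(DLinkedNode n) { n.pre.post = n.post; n.post.pre = n.pre; }
        private void moveToHead(DLinkedNode n) { removeNode(n); addNode(n); }
        private DLinkedNode popTail() { DLinkedNode res = tail.pre; removeNode(res); return res; }
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(3);      // capacity 3, as in the diagrams
        cache.set("key1", 7);
        cache.set("key2", 0);
        cache.set("key3", 1);
        cache.set("key4", 2);                  // full: key1 is evicted from the tail
        System.out.println(cache.get("key2")); // 0, and key2 moves to the head
        cache.set("key5", 3);                  // key3 is evicted
        System.out.println(cache.get("key2")); // 0
        cache.set("key6", 4);                  // key4 is evicted
        System.out.println(cache.get("key1")); // -1: key1 was evicted long ago
    }
}
```

At the end of the run the cache holds key6, key2, key5 in head-to-tail order, exactly as the diagrams show.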
The follow-up question is usually how Redis implements this. There is a trap in the question: Redis definitely does not implement LRU this way.
LRU implementation of Redis
An implementation based on a HashMap and a doubly linked list needs extra storage for the next and prev pointers, which costs a relatively large amount of space and is clearly not economical. Therefore Redis adopts an approximation: it randomly selects several keys, sorts them by access time, and evicts the least recently used one. The details are as follows.
To support LRU, Redis 2.8.19 uses a global LRU clock, server.lruclock, defined as follows:
#define REDIS_LRU_BITS 24
unsigned lruclock:REDIS_LRU_BITS; /* Clock for LRU eviction */
The default LRU clock resolution is 1 second and can be changed via the REDIS_LRU_CLOCK_RESOLUTION macro. Redis calls updateLRUClock in serverCron() to update the LRU clock periodically; the update frequency is tied to the hz parameter and defaults to once every 100 ms, as shown below.
#define REDIS_LRU_CLOCK_MAX ((1<<REDIS_LRU_BITS)-1) /* Max value of obj->lru */
#define REDIS_LRU_CLOCK_RESOLUTION 1 /* LRU clock resolution in seconds */

void updateLRUClock(void) {
    server.lruclock = (server.unixtime / REDIS_LRU_CLOCK_RESOLUTION) &
                                                REDIS_LRU_CLOCK_MAX;
}

/* Given an object returns the min number of seconds the object was never
 * requested, using an approximated LRU algorithm. */
unsigned long estimateObjectIdleTime(robj *o) {
    if (server.lruclock >= o->lru) {
        return (server.lruclock - o->lru) * REDIS_LRU_CLOCK_RESOLUTION;
    } else {
        return ((REDIS_LRU_CLOCK_MAX - o->lru) + server.lruclock) *
                    REDIS_LRU_CLOCK_RESOLUTION;
    }
}
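To illustrate the wraparound arithmetic in a form consistent with the rest of this article's Java code, here is a sketch mirroring the Redis 2.8 logic above (this is an illustrative port, not Redis source; the class name LruClock is invented):

```java
public class LruClock {
    static final int LRU_BITS = 24;
    static final long LRU_CLOCK_MAX = (1L << LRU_BITS) - 1; // max value of an object's lru field
    static final long LRU_CLOCK_RESOLUTION = 1;             // clock resolution in seconds

    long lruclock; // global clock, updated periodically (serverCron's job in Redis)

    /** Mirrors updateLRUClock(): truncate the unix time to 24 bits. */
    void updateLRUClock(long unixtimeSeconds) {
        lruclock = (unixtimeSeconds / LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX;
    }

    /** Mirrors estimateObjectIdleTime(): idle seconds, handling 24-bit wraparound. */
    long estimateIdleTime(long objLru) {
        if (lruclock >= objLru) {
            return (lruclock - objLru) * LRU_CLOCK_RESOLUTION;
        } else {
            // the 24-bit clock wrapped around since the object was last touched
            return ((LRU_CLOCK_MAX - objLru) + lruclock) * LRU_CLOCK_RESOLUTION;
        }
    }
}
```

The else branch is why a tiny 24-bit field is enough: when the clock overflows and restarts from 0, the idle time is still computed correctly.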
The LRU-related eviction policies Redis supports include:
volatile-lru: only keys with an expiration time set participate in the approximate LRU eviction.
allkeys-lru: all keys participate in the approximate LRU eviction.
When evicting under an LRU policy, Redis proceeds as follows:
...
/* volatile-lru and allkeys-lru policy */
else if (server.maxmemory_policy == REDIS_MAXMEMORY_ALLKEYS_LRU ||
    server.maxmemory_policy == REDIS_MAXMEMORY_VOLATILE_LRU)
{
    for (k = 0; k < server.maxmemory_samples; k++) {
        sds thiskey;
        long thisval;
        robj *o;

        de = dictGetRandomKey(dict);
        thiskey = dictGetKey(de);
        /* When policy is volatile-lru we need an additional lookup
         * to locate the real key, as dict is set to db->expires. */
        if (server.maxmemory_policy == REDIS_MAXMEMORY_VOLATILE_LRU)
            de = dictFind(db->dict, thiskey);
        o = dictGetVal(de);
        thisval = estimateObjectIdleTime(o);

        /* Higher idle time is better candidate for deletion */
        if (bestkey == NULL || thisval > bestval) {
            bestkey = thiskey;
            bestval = thisval;
        }
    }
}
...
Redis selects a fixed number of keys based on the server.maxmemory_samples setting, compares their lru access times, and evicts the key that has gone unaccessed the longest. The larger maxmemory_samples is, the closer Redis's approximate LRU gets to strict LRU, but the higher the cost and the greater the impact on performance. The sample size defaults to 5.
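The sampling idea can be sketched in Java as follows (all names here — ApproxLruEvictor, pickVictim, and so on — are hypothetical, and a plain long counter stands in for the LRU clock): pick a few random keys, estimate each one's idle time, and keep the best eviction candidate.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class ApproxLruEvictor {
    private final Map<String, Long> lastAccess = new HashMap<String, Long>(); // key -> last-access clock
    private final Random random = new Random();

    /** Records that `key` was accessed at time `clock`. */
    void touch(String key, long clock) { lastAccess.put(key, clock); }

    /** Samples maxmemorySamples random keys and returns the one idle longest. */
    String pickVictim(int maxmemorySamples, long clock) {
        List<String> keys = new ArrayList<String>(lastAccess.keySet());
        String bestkey = null;
        long bestval = -1;
        for (int k = 0; k < maxmemorySamples && !keys.isEmpty(); k++) {
            String thiskey = keys.get(random.nextInt(keys.size())); // stand-in for dictGetRandomKey
            long thisval = clock - lastAccess.get(thiskey);         // stand-in for estimateObjectIdleTime
            // higher idle time is a better candidate for deletion
            if (bestkey == null || thisval > bestval) {
                bestkey = thiskey;
                bestval = thisval;
            }
        }
        return bestkey;
    }
}
```

Note the trade-off this makes visible: the loop only ever inspects maxmemorySamples entries, so eviction cost is constant regardless of how many keys are stored, at the price of sometimes missing the globally least-recently-used key.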
Redis has two expiration policies: periodic deletion plus lazy deletion.
Periodic deletion is easy to understand: by default, Redis randomly selects some keys that have an expiration time set, checks whether they have expired, and deletes the ones that have.
Why not scan all key with expiration time set?
Suppose every key in Redis had an expiration time; should we scan them all? That would be horrendous, and in production we do set expirations on most keys. A full scan is like querying a database with no index and no WHERE clause: running such a full table scan every 100 ms would wear Redis out.
But since only a random sample is checked each time, many expired keys will slip through and linger.
Good question. That is where lazy deletion comes in, and the name says it all: Redis is lazy and does not delete proactively. Instead it waits until you query a key, checks at that moment whether the key has expired, deletes it and returns nothing if it has, and serves it normally if it has not.
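Lazy deletion can be sketched as a check performed only on reads (a simplified illustration, not the Redis implementation; the class and method names are invented):

```java
import java.util.HashMap;
import java.util.Map;

public class LazyExpireCache {
    private static final class Entry {
        final String value;
        final long expireAtMillis; // <= 0 means "never expires"
        Entry(String value, long expireAtMillis) {
            this.value = value;
            this.expireAtMillis = expireAtMillis;
        }
    }

    private final Map<String, Entry> store = new HashMap<String, Entry>();

    /** Stores a value with an optional TTL (ttlMillis <= 0 means no expiration). */
    void set(String key, String value, long ttlMillis, long nowMillis) {
        store.put(key, new Entry(value, ttlMillis > 0 ? nowMillis + ttlMillis : 0));
    }

    /** Lazy deletion: the expiry check happens only when the key is read. */
    String get(String key, long nowMillis) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (e.expireAtMillis > 0 && nowMillis >= e.expireAtMillis) {
            store.remove(key); // expired: delete now and pretend it never existed
            return null;
        }
        return e.value;
    }
}
```

Time is passed in explicitly here to keep the sketch deterministic; a real cache would read the system clock inside get.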
Finally, what if a key is neither caught by periodic deletion nor ever queried again — what then?
The memory eviction mechanism!
The memory eviction policies given in the official documentation are as follows:
noeviction: return an error when the memory limit is reached and the client tries to execute a command that would use more memory (most write commands; DEL and a few others are exceptions).
allkeys-lru: evict the least recently used keys (LRU) to make room for newly added data.
volatile-lru: evict the least recently used keys (LRU), but only among keys with an expiration set, to make room for newly added data.
allkeys-random: evict random keys to make room for newly added data.
volatile-random: evict random keys, but only among keys with an expiration set, to make room for newly added data.
volatile-ttl: evict keys with an expiration set, preferring those with a short remaining time to live (TTL), to make room for newly added data.
If no key satisfies the policy's precondition, volatile-lru, volatile-random, and volatile-ttl behave almost the same as noeviction.
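For reference, these policies are selected in redis.conf (maxmemory, maxmemory-policy, and maxmemory-samples are standard Redis directives; the values shown are only illustrative):

```
# cap memory usage; eviction starts when the limit is reached
maxmemory 100mb
# one of the policies listed above
maxmemory-policy allkeys-lru
# sample size for the approximate LRU (default 5)
maxmemory-samples 5
```

The same settings can be changed at runtime with CONFIG SET.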
The reason Redis does not use a true LRU implementation is that it would cost too much memory; for applications, the approximate algorithm should be practically equivalent. A true LRU algorithm and the approximation can be compared in the following image.
In fact, the LRU algorithm is also implemented in the familiar LinkedHashMap.
We just need to extend LinkedHashMap. The relevant JDK code is as follows:

/**
 * Constructs an empty LinkedHashMap instance with the
 * specified initial capacity, load factor and ordering mode.
 *
 * @param initialCapacity the initial capacity
 * @param loadFactor      the load factor
 * @param accessOrder     the ordering mode - true for
 *                        access-order, false for insertion-order
 * @throws IllegalArgumentException if the initial capacity is negative
 *         or the load factor is nonpositive
 */
public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder) {
    super(initialCapacity, loadFactor);
    this.accessOrder = accessOrder;
}

// The method is protected, which makes it clear that it is meant to be
// overridden by a subclass:
protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
    return false;
}
The accessOrder flag selects access order versus insertion order; the default is insertion order. With accessOrder set to access order and a fixed capacity, a LinkedHashMap is an LRU.
Examples are as follows:
When the size exceeds 100 entries, the LRU policy kicks in: the least recently used entry is evicted.
Map<Integer, Integer> lruCollection =
        Collections.synchronizedMap(new LinkedHashMap<Integer, Integer>(100, .75F, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > 100;
            }
        });
In a real interview you may well be asked to hand-write the LRU algorithm — but do not attempt the full from-scratch version; there is far too much code to finish in time. Write either the HashMap-plus-linked-list version above or the LinkedHashMap version below; finding a ready-made data structure to implement a Java LRU is easy, as long as you understand the principle.
class LRULinkedHashMap<K, V> extends LinkedHashMap<K, V> {

    private static final long serialVersionUID = 1882839504956564761L;

    private int capacity;

    public LRULinkedHashMap(int capacity) {
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    public boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        System.out.println("the least recently used element will be deleted according to the LRU algorithm: key="
                + eldest.getKey() + " value=" + eldest.getValue() + ".");
        // this line is the key: once the capacity is exceeded, evict according to LRU
        return size() > capacity;
    }
}

public class Test {

    public static void main(String[] args) {
        testLinkedHashMap();
        testLRULinkedHashMap();
    }

    public static void testLinkedHashMap() {
        // fixed capacity, accessOrder=true
        Map<Integer, Integer> map = new LinkedHashMap<Integer, Integer>(5, 0.75f, true);
        map.put(1, 1);
        map.put(2, 2);
        map.put(3, 3);
        map.put(4, 4);
        map.put(5, 5);
        // output: 1 2 3 4 5
        for (Iterator<Map.Entry<Integer, Integer>> it = map.entrySet().iterator(); it.hasNext();) {
            System.out.println(it.next().getValue());
        }
        map.put(4, 4);
        map.put(6, 6);
        // output: 1 2 3 5 4 6 (the map simply grows automatically)
        for (Iterator<Map.Entry<Integer, Integer>> it = map.entrySet().iterator(); it.hasNext();) {
            System.out.println(it.next().getValue());
        }
    }

    public static void testLRULinkedHashMap() {
        // fixed capacity, accessOrder=true
        Map<Integer, Integer> map = new LRULinkedHashMap<Integer, Integer>(5);
        map.put(1, 1);
        map.put(2, 2);
        map.put(3, 3);
        map.put(4, 4);
        map.put(5, 5);
        // output: 1 2 3 4 5
        for (Iterator<Map.Entry<Integer, Integer>> it = map.entrySet().iterator(); it.hasNext();) {
            System.out.println(it.next().getValue());
        }
        map.put(4, 4);
        map.put(6, 6);
        // output: 2 3 5 4 6 (capacity is fixed, so 1 was evicted by LRU)
        for (Iterator<Map.Entry<Integer, Integer>> it = map.entrySet().iterator(); it.hasNext();) {
            System.out.println(it.next().getValue());
        }
    }
}

That is all on the principle and implementation of LRU. I hope the content above has been helpful and that you have learned something from it.