How to implement the LRU and LFU caching algorithms in C++


This article explains in detail how to implement the LRU and LFU caching algorithms in C++. The material is quite practical, so it is shared here for reference; I hope you get something out of it.

I. LRU (Least Recently Used) cache

See LeetCode Q146 for details

https://leetcode.com/problems/lru-cache/

https://leetcode-cn.com/problems/lru-cache/

Problem description:

LRUCache(int capacity) initializes the LRU cache with a positive integer capacity.

int get(int key) returns the value of the key if it exists in the cache; otherwise it returns -1.

void put(int key, int value) updates the value if the key already exists; otherwise it inserts the key-value pair. When the cache reaches its capacity, the least recently used entry must be evicted before the new data is written, to make room for it.

Both operations must run in O(1) average time complexity.

Data structures used:

To keep the average time complexity of get and put at O(1):

Use a doubly linked list (std::list) to store the cached entries, each represented as an std::pair {key, value}.

Use a hash table (std::unordered_map) to map each key to the iterator of its pair in the list.

typedef std::list<std::pair<int, int>> LRUList;
typedef std::unordered_map<int, LRUList::iterator> CacheMap;
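As a side note, the move-to-most-recent step can also be done without erasing and re-inserting the pair: std::list::splice relinks the existing node in O(1) and keeps its iterator valid. Below is a minimal sketch of that variant; touch is a hypothetical helper name, and the members are assumed to match those in the full code further down.

// Sketch only: move the node for `key` to the back (most recent end) in O(1).
// Assumes the members _LRUList and _cacheMap declared as above.
void touch(int key) {
    LRUList::iterator it = _cacheMap[key];
    // splice relinks the node without copying it; `it` remains valid,
    // so the entry in _cacheMap does not need to be updated.
    _LRUList.splice(_LRUList.end(), _LRUList, it);
}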

Flow charts (figures omitted): the get function and the put function.

Code implementation:

#include <list>
#include <unordered_map>
#include <utility>

typedef std::list<std::pair<int, int>> LRUList;
typedef std::unordered_map<int, LRUList::iterator> CacheMap;

class LRUCache {
public:
    LRUCache(int capacity) { _capacity = capacity; }

    int get(int key) {
        CacheMap::iterator cache_itr = _cacheMap.find(key);
        if (cache_itr == _cacheMap.end()) { return -1; }
        // Move the entry to the most recently used position, then read it back.
        makeMostRecent(key, _cacheMap[key]->second);
        LRUList::iterator list_itr = _LRUList.end();
        --list_itr;
        return list_itr->second;
    }

    void put(int key, int value) {
        if (_cacheMap.find(key) != _cacheMap.end()) {
            makeMostRecent(key, value);
            return;
        }
        if ((int)_LRUList.size() >= _capacity) { removeLeastRecentTask(); }
        addMostRecentTask(key, value);
    }

private:
    // Erase the old node and re-insert {key, value} at the back (most recent end).
    void makeMostRecent(int key, int value) {
        _LRUList.erase(_cacheMap[key]);
        _LRUList.push_back(std::make_pair(key, value));
        LRUList::iterator list_itr = _LRUList.end();
        _cacheMap[key] = --list_itr;
    }

    // The least recently used entry sits at the front of the list.
    void removeLeastRecentTask() {
        int keyToRemove = _LRUList.begin()->first;
        _LRUList.erase(_LRUList.begin());
        _cacheMap.erase(keyToRemove);
    }

    void addMostRecentTask(int key, int value) {
        _LRUList.push_back(std::make_pair(key, value));
        LRUList::iterator list_itr = _LRUList.end();
        _cacheMap[key] = --list_itr;
    }

    int _capacity;
    LRUList _LRUList;
    CacheMap _cacheMap;
};
// n = number of items in the LRU list, i.e. the capacity
// Time: O(1)
// Space: O(n)
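For reference, here is a small driver that exercises the class locally; it follows the example sequence from the LeetCode problem statement, with the expected results noted in the comments.

#include <cassert>

int main() {
    LRUCache cache(2);           // capacity = 2
    cache.put(1, 1);
    cache.put(2, 2);
    assert(cache.get(1) == 1);   // returns 1
    cache.put(3, 3);             // evicts key 2 (least recently used)
    assert(cache.get(2) == -1);  // returns -1 (not found)
    cache.put(4, 4);             // evicts key 1
    assert(cache.get(1) == -1);  // returns -1 (not found)
    assert(cache.get(3) == 3);   // returns 3
    assert(cache.get(4) == 4);   // returns 4
    return 0;
}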

Run the test:

Accepted

22 cases passed (412 ms)

Your runtime beats 69.45% of cpp submissions

Your memory usage beats 48.08% of cpp submissions (174 MB)

II. LFU (Least Frequently Used) cache

See LeetCode Q460 for details

https://leetcode.com/problems/lfu-cache/

https://leetcode-cn.com/problems/lfu-cache/

Problem description:

LFUCache(int capacity) initializes the object with the given capacity of the data structure.

int get(int key) gets the value of the key if it exists in the cache; otherwise it returns -1.

void put(int key, int value) updates the value if the key already exists; otherwise it inserts the key-value pair. When the cache reaches its capacity, the least frequently used item must be invalidated before a new item is inserted. In this problem, when there is a tie (that is, two or more keys have the same use frequency), the least recently used key is removed.

The use count of an item is the number of times get or put has been called on it since it was inserted. The count is reset to 0 when the item is removed.

To determine which key is least frequently used, maintain a use counter for each key in the cache; the key with the lowest count is the least frequently used.

When a key is first inserted into the cache, its counter is set to 1 (because of the put operation). Each subsequent get or put on that key increments its counter.
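To make the counting and tie-breaking rules concrete, here is a short trace using the LFUCache class implemented later in this article (a sketch only; the counter values after each call are noted in the comments).

LFUCache cache(2);   // capacity = 2
cache.put(1, 1);     // key 1: counter = 1
cache.put(2, 2);     // key 2: counter = 1
cache.put(3, 3);     // cache full; keys 1 and 2 are tied at counter = 1,
                     // and key 1 is the least recently used, so key 1 is evicted
cache.get(1);        // returns -1; key 1 and its counter are gone
cache.get(2);        // returns 2; key 2: counter = 2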

Data structures used:

To keep the average time complexity of get and put at O(1):

Use a hash table (std::unordered_map) to map each key to its value and frequency, stored as an std::pair {value, frequency}.

Use a hash table (std::unordered_map) to map each frequency to all the keys currently at that frequency (the keys are kept in a doubly linked list, std::list).

Use a hash table (std::unordered_map) to map each key to its iterator in the corresponding frequency list from point 2.

std::unordered_map<int, std::pair<int, int>> _keyToValFreq;
std::unordered_map<int, std::list<int>> _freqToKeyList;
std::unordered_map<int, std::list<int>::iterator> _keyToKeyListItr;
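Besides the three hash maps, the code below also keeps one integer member, _minFreq, which tracks the smallest frequency currently present in the cache; it is what makes finding an eviction victim O(1). A minimal sketch of the eviction step under that assumption, using the member names declared above:

// Sketch only: the victim is the least recently used key among those with the
// minimum frequency, i.e. the back of the list stored under _minFreq.
int keyToRemove = _freqToKeyList[_minFreq].back();
_freqToKeyList[_minFreq].pop_back();
_keyToKeyListItr.erase(keyToRemove);
_keyToValFreq.erase(keyToRemove);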

Flow charts (figures omitted): the get function and the put function.

Code implementation:

#include <list>
#include <unordered_map>
#include <utility>

class LFUCache {
public:
    LFUCache(int capacity) { _capacity = capacity; }

    int get(int key) {
        // If the key doesn't exist
        if (_keyToValFreq.find(key) == _keyToValFreq.end()) { return -1; }
        // If the key exists, increase its frequency and reorder
        increaseFreq(key);
        return _keyToValFreq[key].first;
    }

    void put(int key, int value) {
        if (_capacity <= 0) { return; }
        // If the key exists, update its value, then increase its frequency
        if (_keyToValFreq.find(key) != _keyToValFreq.end()) {
            _keyToValFreq[key].first = value;
            increaseFreq(key);
            return;
        }
        // If the cache is full, evict the least frequently used key
        // (the least recently used one among the minimum-frequency keys)
        if ((int)_keyToValFreq.size() >= _capacity) {
            int keyToRemove = _freqToKeyList[_minFreq].back();
            _freqToKeyList[_minFreq].pop_back();
            _keyToKeyListItr.erase(keyToRemove);
            _keyToValFreq.erase(keyToRemove);
        }
        // Then add the new item with frequency = 1
        addNewTask(key, value);
    }

private:
    void increaseFreq(int key) {
        // Update the frequency stored in the pair
        int oldFreq = _keyToValFreq[key].second++;
        // Delete the key from its old frequency list via the saved iterator
        _freqToKeyList[oldFreq].erase(_keyToKeyListItr[key]);
        // Add it to the new frequency list and re-assign the iterator
        _freqToKeyList[oldFreq + 1].emplace_front(key);
        _keyToKeyListItr[key] = _freqToKeyList[oldFreq + 1].begin();
        // Update minFreq if the old minimum-frequency list has become empty
        if (_freqToKeyList[_minFreq].empty()) { _minFreq = oldFreq + 1; }
    }

    void addNewTask(int key, int value) {
        // Add the new key to all hash maps with frequency = 1
        _minFreq = 1;
        _keyToValFreq[key] = std::make_pair(value, _minFreq);
        _freqToKeyList[_minFreq].emplace_front(key);
        _keyToKeyListItr[key] = _freqToKeyList[_minFreq].begin();
    }

    int _capacity;
    int _minFreq = 1;
    std::unordered_map<int, std::pair<int, int>> _keyToValFreq;          // key -> {value, frequency}
    std::unordered_map<int, std::list<int>> _freqToKeyList;              // frequency -> keys with that frequency
    std::unordered_map<int, std::list<int>::iterator> _keyToKeyListItr;  // key -> iterator in the frequency list
};
// n = number of items in the LFU cache, i.e. the capacity
// Time: O(1)
// Space: O(n)
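And again a small driver for local testing, following the example from the LeetCode problem statement (expected results in the comments).

#include <cassert>

int main() {
    LFUCache cache(2);           // capacity = 2
    cache.put(1, 1);
    cache.put(2, 2);
    assert(cache.get(1) == 1);   // returns 1; key 1 now has frequency 2
    cache.put(3, 3);             // evicts key 2 (lowest frequency)
    assert(cache.get(2) == -1);  // returns -1 (not found)
    assert(cache.get(3) == 3);   // returns 3
    cache.put(4, 4);             // keys 1 and 3 are tied at frequency 2;
                                 // key 1 is the least recently used, so it is evicted
    assert(cache.get(1) == -1);  // returns -1 (not found)
    assert(cache.get(3) == 3);   // returns 3
    assert(cache.get(4) == 4);   // returns 4
    return 0;
}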

Run the test:

Accepted

24 cases passed (464 ms)

Your runtime beats 72.37% of cpp submissions

Your memory usage beats 45.99% of cpp submissions (186.7 MB)

This is the end of this article on how to implement the LRU and LFU caching algorithms in C++. I hope the content above has been helpful and that you have learned something from it. If you think the article is good, please share it so more people can see it.
