2025-03-30 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article mainly explains the lru_cache caching method in Python. The content is simple, clear, and easy to understand; please follow the editor's train of thought to study how caching with lru_cache works in Python.
I. Preface
The word "cache", as we usually use it, means storing data that lives on disk in memory so it can be read faster. For example, redis is often used to cache data.
Python's lru_cache is a caching decorator: it wraps a function and caches the function's return value. On the next call, if the arguments are unchanged, the cached result is returned directly instead of executing the function again.
II. Examples
1. First, without any cache, let's write a function that sums two numbers and call it twice:
def test(a, b):
    print('start calculating the value of a+b.')
    return a + b

print('1 + 2 equals:', test(1, 2))
print('1 + 2 equals:', test(1, 2))
Execution result:
start calculating the value of a+b.
1 + 2 equals: 3
start calculating the value of a+b.
1 + 2 equals: 3
You can see that the body of test ran twice. Now add the cache and run it again:
from functools import lru_cache

@lru_cache
def test(a, b):
    print('start calculating the value of a+b.')
    return a + b

print(test(1, 2))
print(test(1, 2))
Execution result:
start calculating the value of a+b.
3
3
You can see that the body of the test function executed only once; the second call returned the cached value directly.
2. The performance improvement from caching is especially obvious when computing the Fibonacci sequence recursively (the Fibonacci sequence is 0, 1, 1, 2, 3, 5, 8, ...; starting from the 3rd term, each term equals the sum of the previous two):
Without the cache, compute the 40th Fibonacci number:
import datetime

def fibonacci(num):
    # without the cache, the same values are computed over and over
    return num if num < 2 else fibonacci(num - 1) + fibonacci(num - 2)

start = datetime.datetime.now()
print(fibonacci(40))
end = datetime.datetime.now()
print('execution time', end - start)
Execution result:
execution time 0:00:29.004424
With the cache, compute the 40th Fibonacci number:
import datetime
from functools import lru_cache

@lru_cache
def fibonacci(num):
    # with the cache, each value is computed only once
    return num if num < 2 else fibonacci(num - 1) + fibonacci(num - 2)

start = datetime.datetime.now()
print(fibonacci(40))
end = datetime.datetime.now()
print('execution time', end - start)
Execution result:
execution time 0:00:00
The gap between the two is obvious: without caching, a huge number of duplicate calls have to be executed, while lru_cache stores the results of previous calls so they never need to run again.
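As a side note not covered above, functions wrapped by lru_cache also expose a cache_info() method and a cache_clear() method, which make the saved work visible. A minimal sketch using the same Fibonacci function:

```python
from functools import lru_cache

@lru_cache
def fibonacci(num):
    # each distinct num is computed once; later lookups are cache hits
    return num if num < 2 else fibonacci(num - 1) + fibonacci(num - 2)

print(fibonacci(40))           # 102334155
print(fibonacci.cache_info())  # hits/misses counters show the reuse

fibonacci.cache_clear()        # empties the cache if needed
```

For fibonacci(40), cache_info() reports 41 misses (one per value from 0 to 40) and the remaining recursive calls as hits.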
III. lru_cache usage
1. Parameters explained
Looking at the lru_cache source code, you can see that it accepts two parameters, maxsize and typed:
def lru_cache(maxsize=128, typed=False):
    """Least-recently-used cache decorator.

    If *maxsize* is set to None, the LRU features are disabled and the
    cache can grow without bound.
    """
1) maxsize
maxsize is the maximum number of results the decorated function will cache (calls with different arguments produce different cache entries). If not specified, it defaults to 128, meaning at most 128 results are kept; once the cache already holds 128 entries, the least recently used entry is evicted to make room for a new one. If maxsize is None, the cache can grow without bound.
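To make the eviction behaviour concrete, here is a small sketch (not from the original article) that uses a deliberately tiny maxsize=2 so the eviction is easy to trigger:

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def square(n):
    print('computing', n)
    return n * n

square(1)  # miss: computed and cached
square(2)  # miss: computed and cached (cache is now full)
square(1)  # hit: returned from cache; 1 becomes most recently used
square(3)  # miss: cache is full, so the least recently used entry (2) is evicted
square(2)  # miss again: 2 was evicted and must be recomputed
print(square.cache_info())  # CacheInfo(hits=1, misses=4, maxsize=2, currsize=2)
```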
2) typed
typed defaults to False, meaning argument types are not distinguished. If set to True, arguments of different types are cached separately. This is how it is officially described:
If typed is set to True, function arguments of different types will be cached separately.
For example, f(3.0) and f(3) will be treated as distinct calls with distinct results.
However, when testing under Python 3.9.8, the official example behaves differently from the description: with a single argument, int and float values are cached separately whether typed is False or True:
from functools import lru_cache

@lru_cache
def test(a):
    print('function called...')
    return a

print(test(1.0))
print(test(1))
Execution result:
function called...
1.0
function called...
1
With multiple parameters, however, int and float arguments are treated as the same cache entry when typed is False:
from functools import lru_cache

@lru_cache
def test(a, b):
    print('function called...')
    return a, b

print(test(1.0, 2.0))
print(test(1, 2))
Execution result:
The function was called.
(1.0, 2.0)
(1.0, 2.0)
When typed is set to True, the entries are cached separately:
from functools import lru_cache

@lru_cache(typed=True)
def test(a, b):
    print('function called...')
    return a, b

print(test(1.0, 2.0))
print(test(1, 2))
Execution result:
The function was called.
(1.0, 2.0)
The function was called.
(1, 2)
So the behavior only matches the official description when there is more than one argument. It is not clear whether the official example is wrong. (One plausible explanation, based on CPython's implementation rather than the original article: for a single positional argument of certain simple types, functools builds the cache key differently from the multi-argument case, so 1 and 1.0 can end up under different keys even though they compare equal.)
2. lru_cache does not support unhashable arguments
When an argument is a mutable, unhashable type such as a dict or a list, lru_cache cannot build a cache key and raises an error:
from functools import lru_cache

@lru_cache
def test(a):
    print('function executed...')
    return a

print(test({'a': 1}))
Error:
TypeError: unhashable type: 'dict'
IV. Differences between lru_cache and redis

                              redis                                  lru_cache
Cache location                memory managed by the redis server     memory of the application process (cleared when the application exits)
Supports distributed use      yes                                    no
Supports expiration times     yes                                    no
Supported data structures     five data structures                   key-value (arguments as key, result as value)
Needs separate installation   yes                                    no

Thank you for reading. The above is the content of "what is the method of caching lru_cache in Python". After studying this article, you should have a deeper understanding of caching with lru_cache in Python; the specific usage still needs to be verified in practice. The editor will push more related articles for you; welcome to follow!