
How to improve efficiency in Python using its built-in caching mechanism

2025-01-16 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/01 Report--

This article explains in detail how Python's built-in caching mechanism can be used to improve efficiency. I found it very practical, so I am sharing it as a reference; I hope you get something out of it.

Improve efficiency by using built-in caching mechanism

Caching means saving data once it has been produced so that later requests for the same data can be served without producing it again; the goal is to speed up data access.

Producing data may involve computation, transformation, remote fetches, and similar operations. If the same data is needed many times, regenerating it on every use wastes time. Caching the results of such computations or remote requests therefore speeds up all subsequent accesses.

To meet this need, Python 3.2+ provides a ready-made mechanism, so you do not have to write that logic yourself.

This mechanism is implemented by the lru_cache decorator in the functools module.

@functools.lru_cache(maxsize=128, typed=False)

Parameter interpretation:

maxsize: the maximum number of results this function can keep cached. If None, there is no limit. Performance is best when maxsize is a power of two.

typed: if True, calls whose arguments have different types are cached separately.
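For instance, with typed=True a call with the int 3 and a call with the float 3.0 are treated as distinct, even though 3 == 3.0. A short sketch (the function name `double` is made up for the example):

```python
from functools import lru_cache

@lru_cache(maxsize=None, typed=True)
def double(x):
    return x * 2

double(3)     # cached under the int key 3
double(3.0)   # typed=True: cached separately under the float key 3.0

print(double.cache_info())  # misses=2 -- the two calls did not share an entry
```

With typed=False (the default), the second call would have been a cache hit, because 3 and 3.0 compare equal.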

For example:

```python
from functools import lru_cache

@lru_cache(None)
def add(x, y):
    print("calculating: %s + %s" % (x, y))
    return x + y

print(add(1, 2))
print(add(1, 2))
print(add(2, 3))
```

The output is shown below. You can see that the second call does not actually execute the function body; it returns the cached result directly:

```
calculating: 1 + 2
3
3
calculating: 2 + 3
5
```
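Functions wrapped by lru_cache also expose the standard `cache_info()` and `cache_clear()` helpers, which make the hit/miss behaviour above observable and let you empty the cache when needed:

```python
from functools import lru_cache

@lru_cache(None)
def add(x, y):
    return x + y

add(1, 2)
add(1, 2)                # second identical call is a cache hit
print(add.cache_info())  # hits=1, misses=1

add.cache_clear()        # empty the cache; the next call recomputes
print(add.cache_info())  # hits=0, misses=0
```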

Next is the classic Fibonacci sequence. With a naive recursive implementation, specifying a large n causes a great deal of redundant computation:

```python
def fib(n):
    if n < 2:
        return n
    return fib(n - 2) + fib(n - 1)
```
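To see just how much redundant work the naive version does, one can count the recursive calls (an illustrative sketch using a module-level counter, not part of the original article):

```python
# Count how many times the naive fib is invoked for a single top-level call.
calls = 0

def fib(n):
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 2) + fib(n - 1)

fib(20)
print(calls)  # 21891 calls just to compute fib(20)
```

The call count grows exponentially with n, which is why caching the intermediate results changes the picture so dramatically.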

The timeit module, introduced earlier in this series, can now be used to measure how much efficiency improves.

Without lru_cache, the run takes about 31 seconds:

```python
import timeit

def fib(n):
    if n < 2:
        return n
    return fib(n - 2) + fib(n - 1)

print(timeit.timeit(lambda: fib(40), number=1))
# output: 31.2725698948
```

With lru_cache the function runs so fast that I raised n to 500, and even then the run took only about 0.0005 seconds. The speedup is dramatic:

```python
import timeit
from functools import lru_cache

@lru_cache(None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 2) + fib(n - 1)

print(timeit.timeit(lambda: fib(500), number=1))
# output: 0.0004921059880871326
```

That concludes this look at how Python's built-in caching mechanism improves efficiency. I hope the above is helpful to you; if you think the article is good, please share it so more people can see it.
