2025-02-22 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article shows how to implement a Caffeine + Redis two-level cache based on Spring Cache. The content is easy to follow; I hope it helps resolve your doubts. The details are as follows:
I. What is hard-coded cache usage?
Before learning Spring Cache, I often used caching in a hard-coded way.
Let's take a practical example: to speed up queries for user information, we cache it. The sample code is as follows:
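The original code sample did not survive extraction. Below is a minimal sketch of the hard-coded style under discussion, with ConcurrentHashMaps standing in for the Redis client and the database (HardCodedCacheDemo and its members are illustrative names, not from the original):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class HardCodedCacheDemo {
    private static final String CACHE_KEY_PREFIX = "cache_user_id_";
    // Stand-in for the Redis client.
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    // Stand-in for the database.
    static final Map<Long, String> db = new ConcurrentHashMap<>(Map.of(1L, "Zhang San"));

    // Query: try the cache first, fall back to the database, then backfill the cache.
    static String getUserById(Long userId) {
        String key = CACHE_KEY_PREFIX + userId;   // every call site builds its own key
        String cached = cache.get(key);
        if (cached != null) {
            return cached;
        }
        String user = db.get(userId);
        if (user != null) {
            cache.put(key, user);                 // and repeats this boilerplate
        }
        return user;
    }

    // Update: write the database, then invalidate the cache entry.
    static void updateUser(Long userId, String name) {
        db.put(userId, name);
        cache.remove(CACHE_KEY_PREFIX + userId);
    }

    public static void main(String[] args) {
        System.out.println(getUserById(1L)); // first call loads from the "db"
        System.out.println(getUserById(1L)); // second call is served from the cache
    }
}
```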
Many of us have written code in this style. It matches process-oriented thinking and is very easy to understand, but it has some drawbacks:
The code is not elegant. The four typical business operations (create, read, update, delete) each need to define a cache key and call the cache API, producing a lot of repetitive code.
The cache operations are tightly coupled with the business logic and intrusive to it. The intrusiveness shows in two ways:
During development and debugging, removing the cache means commenting out or temporarily deleting the cache code, which is error-prone.
When the cache component has to be replaced, each component has its own API, so switching is expensive.
Wouldn't something like this be much more elegant?
@Mapper
public interface UserMapper {

    /**
     * Get user info by user id.
     *
     * If the data is in the cache, it is returned directly; otherwise the
     * database is queried and the result is inserted into the cache.
     * The key is the prefix cache_user_id_ plus the incoming user id.
     */
    @Cacheable(key = "'cache_user_id_' + #userId")
    User getUserById(Long userId);
}
Let's look at the implementation class.
@Autowired
private UserMapper userMapper;

// Query a user
public User getUserById(Long userId) {
    return userMapper.getUserById(userId);
}
The business logic is now completely decoupled from the cache. If you need to disable caching during development, just comment out the annotation. Isn't that neat?
And none of this has to be written by hand: Spring Cache already defines the relevant annotations and interfaces, so the functionality above is easy to implement.
II. A brief introduction to Spring Cache
Spring Cache is an annotation-based caching component provided in the spring-context package. It defines a set of standard interfaces; by implementing them and adding annotations to methods, you get caching without coupling cache code into the business logic.
Spring Cache has only two core interfaces: Cache and CacheManager.
1. Cache interface
This interface defines the concrete cache operations, such as putting, reading, and clearing entries:
package org.springframework.cache;

import java.util.concurrent.Callable;

public interface Cache {

    // cacheName, the name of the cache. In the default implementation,
    // cacheName is passed in when CacheManager creates the Cache bean.
    String getName();

    // Returns the underlying native cache, e.g. an Ehcache instance.
    Object getNativeCache();

    // Get the cached value by key. Note that a ValueWrapper is returned:
    // to support storing null values, the return value is wrapped, and the
    // actual value is obtained via ValueWrapper.get().
    ValueWrapper get(Object key);

    // Get the cached value by key, returning the actual value typed as the
    // method's return type.
    <T> T get(Object key, Class<T> type);

    // Get the cached value by key; valueLoader.call() can invoke the
    // @Cacheable-annotated method. This method is used when the sync
    // attribute of @Cacheable is true, so the implementation must
    // synchronize loading from the database to avoid a flood of requests
    // hitting the database when the cache expires.
    <T> T get(Object key, Callable<T> valueLoader);

    // Put the data returned by the @Cacheable-annotated method into the cache.
    void put(Object key, Object value);

    // Put into the cache only when the key is absent; when the key exists,
    // the existing value is returned.
    ValueWrapper putIfAbsent(Object key, Object value);

    // Delete a cache entry.
    void evict(Object key);

    // Clear the cache.
    void clear();

    // Wrapper for cached return values.
    interface ValueWrapper {
        // Returns the actual cached object.
        Object get();
    }
}

2. CacheManager interface
CacheManager mainly provides the creation of Cache beans. Within each application, caches can be isolated by cacheName, with each cacheName corresponding to one Cache implementation.
package org.springframework.cache;

import java.util.Collection;

public interface CacheManager {

    // Create (or look up) the Cache implementation bean by cacheName. A
    // concrete implementation needs to store the created Cache beans to
    // avoid repeated creation, which for in-memory caches (such as
    // Caffeine) would lose the original cache contents on re-creation.
    Cache getCache(String name);

    // Return all cacheNames.
    Collection<String> getCacheNames();
}

3. Commonly used annotations
@Cacheable: mainly applied to methods that query data.
public @interface Cacheable {
    // ...

    // Caching condition; supports SpEL expressions. Data is cached only when
    // the condition is met; it is evaluated both before and after the method call.
    String condition() default "";

    // The cache is NOT updated when this condition is met; supports SpEL
    // expressions, but is evaluated only after the method call.
    String unless() default "";

    // Whether loading stays synchronized when falling back to the actual
    // method. If false, Cache.get(key) is called; if true,
    // Cache.get(key, Callable) is called.
    boolean sync() default false;
}
@CacheEvict: clears the cache; mainly applied to methods that delete data. It has two more attributes than @Cacheable:
public @interface CacheEvict {
    // ... for the attributes shared with @Cacheable, see the notes above

    // Whether to clear all cached entries. If false, Cache.evict(key) is
    // called; if true, Cache.clear() is called.
    boolean allEntries() default false;

    // Whether to clear the cache before (true) or after (false) the method
    // is invoked.
    boolean beforeInvocation() default false;
}
@CachePut: puts the result into the cache; mainly used for methods that update data. For attribute descriptions, see @Cacheable.
@Caching: used to configure multiple caching annotations on one method.
@EnableCaching: enables Spring Cache. As the master switch, this annotation must be added to the Spring Boot startup class or a configuration class to take effect.
III. Questions to consider when using a two-level cache
We know that relational databases (MySQL) ultimately store data on disk, so reading from the database every time is limited by disk IO.
An in-memory cache such as Redis avoids that: query speed improves greatly. But if the same query is extremely concurrent and hits Redis frequently, the network IO becomes a noticeable cost.
For such very frequently queried data (hot keys), can we store it in an in-process cache such as Caffeine?
When qualifying data is present in the in-process cache, it can be used directly instead of being fetched from Redis over the network, forming a two-level cache.
The in-process cache is called the first-level cache, and the remote cache (such as Redis) the second-level cache.
The whole process is as follows: look up the first-level cache first; on a miss, look up the second-level cache; on another miss, query the database and backfill both caches on the way back.
The flow looks straightforward, but in fact a two-level cache still has a lot to consider.
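The lookup flow just described can be sketched in a few lines. This is a minimal illustration only, with plain maps standing in for Caffeine and Redis (TwoLevelCacheDemo and its members are illustrative names):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class TwoLevelCacheDemo {
    static final Map<String, String> l1 = new ConcurrentHashMap<>(); // in-process (Caffeine's role)
    static final Map<String, String> l2 = new ConcurrentHashMap<>(); // remote (Redis's role)

    static String get(String key, Function<String, String> dbLoader) {
        String v = l1.get(key);          // 1. first-level cache
        if (v != null) return v;
        v = l2.get(key);                 // 2. second-level cache
        if (v != null) {
            l1.put(key, v);              // backfill the first level
            return v;
        }
        v = dbLoader.apply(key);         // 3. database
        if (v != null) {
            l2.put(key, v);              // backfill both levels
            l1.put(key, v);
        }
        return v;
    }

    public static void main(String[] args) {
        System.out.println(get("user01", k -> "Zhang San"));
    }
}
```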
1. How do you keep the first-level cache consistent across distributed nodes?
The first-level cache lives inside the application, so when the project is deployed on multiple nodes, how do you ensure that modifying or deleting a key leaves the first-level cache on the other nodes consistent?
2. Should storing null values be allowed?
This is indeed worth considering. If a key exists in neither the cache nor the database, every lookup falls through to the database and can bring it down; this is what we usually call cache penetration.
However, storing null values may fill the cache with a large number of them and make it bigger, so it is best to make this configurable and decide per business case.
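One common way to cache a miss is a sentinel value. The sketch below is illustrative only (NullValueCacheDemo and NULL_SENTINEL are assumed names, not from the original); the dbHits counter shows that repeated misses stop reaching the "database" once the null is cached:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class NullValueCacheDemo {
    // Marker stored in place of null, so a cached miss is distinguishable
    // from "not cached yet".
    private static final String NULL_SENTINEL = "__NULL__";
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static int dbHits = 0; // counts how often the "database" is queried

    static String get(String key, Function<String, String> dbLoader, boolean allowNullValues) {
        String cached = cache.get(key);
        if (cached != null) {
            return NULL_SENTINEL.equals(cached) ? null : cached;
        }
        dbHits++;
        String v = dbLoader.apply(key);
        if (v != null) {
            cache.put(key, v);
        } else if (allowNullValues) {
            cache.put(key, NULL_SENTINEL); // cache the miss to block penetration
        }
        return v;
    }
}
```

In production the sentinel would normally also get a short TTL so a later insert of the real row becomes visible.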
3. Is cache warm-up needed?
Some keys will be hot from the very beginning (hot data). Can we load them into the cache up front to avoid cache breakdown?
4. Should the first-level cache have a storage cap?
Since the first-level cache lives inside the application, should the data stored in it be given a maximum size, to avoid an OOM caused by storing too much?
5. What eviction policy should the first-level cache use?
Redis, the second-level cache, is managed by its own eviction policies (for details, see Redis's 8 eviction strategies). What about the first-level cache? Say its maximum size is set to 5000: what do you do when entry 5001 arrives? Simply not store it, or implement an algorithm such as LRU or LFU to evict older data?
6. How are expired first-level entries cleared?
Redis, the second-level cache, has its own expiration strategies (timed, periodic, lazy). What about the first-level cache: how does it clear expired entries?
Points 4, 5, and 6 are clearly hard to implement with our traditional Map, but there is now a better first-level cache: Caffeine.
IV. A brief introduction to Caffeine
Caffeine is a high-performance cache for Java.
A fundamental difference between a cache and a Map is that a cache cleans up the entries it stores.
1. Cache population strategies
Caffeine offers three cache population strategies: manual, synchronous loading, and asynchronous loading.
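The three population strategies can be sketched with Caffeine's builder API (a minimal sketch, assuming the com.github.ben-manes.caffeine:caffeine dependency is on the classpath; the "loaded-" values are illustrative):

```java
import com.github.benmanes.caffeine.cache.AsyncLoadingCache;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

import java.util.concurrent.CompletableFuture;

class CaffeineLoadingDemo {
    // 1. Manual: the caller stores values explicitly, or supplies a mapping
    //    function per lookup via get(key, mappingFunction).
    static final Cache<String, String> manual = Caffeine.newBuilder()
            .maximumSize(100)
            .build();

    // 2. Synchronous loading: a CacheLoader runs on the calling thread on a miss.
    static final LoadingCache<String, String> loading = Caffeine.newBuilder()
            .maximumSize(100)
            .build(key -> "loaded-" + key);

    // 3. Asynchronous loading: the loader runs on an executor and results are
    //    exposed as CompletableFuture values.
    static final AsyncLoadingCache<String, String> async = Caffeine.newBuilder()
            .maximumSize(100)
            .buildAsync(key -> "loaded-" + key);

    public static void main(String[] args) {
        manual.put("k1", "v1");
        String v1 = manual.get("k2", key -> "loaded-" + key); // atomic load on miss
        String v2 = loading.get("k3");
        CompletableFuture<String> v3 = async.get("k4");
        System.out.println(v1 + " " + v2 + " " + v3.join());
    }
}
```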
2. Cleanup strategies for cached values
Caffeine has three cleanup strategies for cached values: size-based, time-based, and reference-based.
Size-based: eviction occurs when the cache size exceeds the configured limit.
Time-based:
expire after write;
expire after access;
expiration time computed independently by an Expiry implementation.
Reference-based: lets cached keys and values be garbage-collected.
Java has four kinds of references: strong, soft, weak, and phantom. Caffeine can wrap values in weak or soft references.
Soft references: if an object is reachable only through soft references, the garbage collector leaves it alone while memory is sufficient and reclaims it when memory runs short.
Weak references: when the garbage collector scans its memory area, any object reachable only through weak references is reclaimed, regardless of whether memory is currently sufficient.
3. Statistics
Caffeine can record cache usage statistics, letting you monitor the cache's current state in real time, evaluate its health and hit rate, and tune parameters accordingly.
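The cleanup strategies and statistics above can all be configured on one builder. This is a sketch, assuming the Caffeine dependency is available; the parameter values mirror the configuration shown later in this article and are otherwise arbitrary:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.stats.CacheStats;

import java.util.concurrent.TimeUnit;

class CaffeinePoliciesDemo {
    static final Cache<String, String> cache = Caffeine.newBuilder()
            .maximumSize(3000)                        // size-based eviction
            .expireAfterWrite(180, TimeUnit.SECONDS)  // time-based: expire after write
            .expireAfterAccess(180, TimeUnit.SECONDS) // time-based: expire after access
            .softValues()                             // reference-based: values reclaimable under memory pressure
            .recordStats()                            // enable usage statistics
            .build();

    public static void main(String[] args) {
        cache.put("k", "v");
        cache.getIfPresent("k"); // recorded as a hit
        cache.getIfPresent("x"); // recorded as a miss
        CacheStats stats = cache.stats();
        System.out.println("hits=" + stats.hitCount() + " misses=" + stats.missCount());
    }
}
```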
4. An efficient cache eviction algorithm
The job of a cache eviction algorithm is to identify, within limited resources, which data is likely to be reused soon, so as to raise the cache hit rate. Commonly used eviction algorithms include LRU, LFU, and FIFO.
FIFO: first in, first out. The earliest data to enter is evicted first.
LRU: least recently used. The data unused for the longest time is evicted first.
LFU: least frequently used. The data used least often over a period is evicted first.
The LRU (Least Recently Used) algorithm assumes that recently accessed data is more likely to be accessed again in the future.
LRU is usually implemented with a linked list: when data is added or accessed, it moves to the head of the list, so the head holds hot data and the tail cold data; when the cache is full, the tail entry is evicted.
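As a minimal illustration of LRU (not how Caffeine implements it), LinkedHashMap in access order gives exactly the linked-list behavior described above: touched entries move to the tail and removeEldestEntry evicts the coldest entry when the cache is full (LruCache is an illustrative name):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: with accessOrder = true, LinkedHashMap moves accessed
// entries to the end of its internal list, so the eldest entry is the one
// unused for the longest time.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    LruCache(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true enables LRU ordering
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict the coldest entry once over capacity
    }
}
```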
The LFU (Least Frequently Used) algorithm evicts data according to its historical access frequency. Its core idea: "if data has been accessed many times in the past, it will be accessed again in the future."
Implementing LFU requires extra storage for each element's access count, which wastes memory.
Caffeine adopts W-TinyLFU, an algorithm that combines the advantages of LRU and LFU; it is characterized by a high hit rate and a low memory footprint.
5. Other notes
Caffeine stores its data in a ConcurrentHashMap. Since Caffeine targets JDK 8, where ConcurrentHashMap gained red-black trees, it keeps good read performance even under heavy hash collisions.
V. Implementing a two-level cache (Caffeine + Redis) based on Spring Cache
As mentioned earlier, even with a Redis cache there is some network-transfer cost, so an in-process cache is worth considering. But remember:
the in-process cache is a scarcer resource than the Redis cache, so Caffeine is not suited to business scenarios with large data volumes and low cache hit rates, such as per-user caching.
The current project is deployed on multiple nodes, and the first-level cache lives inside each application, so when data is updated or deleted, all nodes must be notified to clear their caches.
There are many ways to achieve this (ZooKeeper, MQ, and so on), but since the Redis cache is already in use and Redis itself supports publish/subscribe, we can use a Redis channel to notify the other nodes without pulling in another component.
When a key is updated or deleted, publishing it on the channel tells the other nodes to delete that key from their local first-level caches.
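As a rough sketch of this invalidation pattern (with an in-process subscriber list standing in for the Redis channel; CacheSyncDemo and Node are illustrative names, and the real code would publish via something like RedisTemplate's convertAndSend rather than iterating locally):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

class CacheSyncDemo {
    // Stand-in for the Redis channel: every subscribed node sees each published key.
    static final List<Node> channel = new CopyOnWriteArrayList<>();

    static class Node {
        final Map<String, String> l1 = new ConcurrentHashMap<>(); // local first-level cache

        Node() {
            channel.add(this); // "subscribe" to the channel
        }

        void put(String key, String value) {
            l1.put(key, value);
        }

        // On update/delete, publish the key so every node (including this one)
        // evicts its local first-level copy.
        void evictEverywhere(String key) {
            for (Node n : channel) {
                n.l1.remove(key); // real code: publish the key on the Redis topic
            }
        }
    }
}
```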
The full project code is not pasted here; only how to reference the starter package is shown.
1. Maven dependency

<dependency>
    <groupId>com.jincou</groupId>
    <artifactId>redis-caffeine-cache-starter</artifactId>
    <version>1.0.0</version>
</dependency>

2. application.yml

Add the two-level cache configuration:
# Two-level cache configuration
# Note: Caffeine is not suitable for business scenarios with large data
# volumes and very low cache hit rates, such as per-user caching. Choose carefully.
l2cache:
  config:
    # Whether to store null values (default true); prevents cache penetration
    allowNullValues: true
    # Combined cache configuration
    composite:
      # Whether the first-level cache is enabled for everything (default false)
      l1AllOpen: false
      # Whether the first-level cache is enabled manually (default false)
      l1Manual: true
      # Manually configured key set (single-key dimension) for the first-level cache
      l1ManualKeySet:
        - userCache:user01
        - userCache:user02
        - userCache:user03
      # Manually configured cache names (cacheName dimension) for the first-level cache
      l1ManualCacheNameSet:
        - userCache
        - goodsCache
    # First-level cache
    caffeine:
      # Whether to auto-refresh expired caches (true/false)
      autoRefreshExpireCache: false
      # Size of the cache-refresh scheduling thread pool
      refreshPoolSize: 2
      # Cache refresh frequency (seconds)
      refreshPeriod: 10
      # Expire after write (seconds)
      expireAfterWrite: 180
      # Expire after access (seconds)
      expireAfterAccess: 180
      # Initial capacity
      initialCapacity: 1000
      # Maximum number of cached objects; once exceeded, previously cached entries expire
      maximumSize: 3000
    # Second-level cache
    redis:
      # Global expiration time in milliseconds (by default, entries do not expire)
      defaultExpiration: 300000
      # Expiration time per cacheName in milliseconds; takes precedence over defaultExpiration
      expires: {userCache: 300000, goodsCache: 50000}
      # Topic name used to notify other nodes on cache updates
      # (default: cache:redis:caffeine:topic)
      topic: cache:redis:caffeine:topic

3. Add @EnableCaching

/**
 * Startup class
 */
@EnableCaching
@SpringBootApplication
public class CacheApplication {
    public static void main(String[] args) {
        SpringApplication.run(CacheApplication.class, args);
    }
}

4. Add @Cacheable to the methods that need caching

/**
 * Test service
 */
@Service
public class CaffeineCacheService {

    private final Logger logger = LoggerFactory.getLogger(CaffeineCacheService.class);

    /**
     * Simulates the db
     */
    private static final Map<String, UserDTO> userMap = new HashMap<>();

    static {
        userMap.put("user01", new UserDTO("1", "Zhang San"));
        userMap.put("user02", new UserDTO("2", "Li Si"));
        userMap.put("user03", new UserDTO("3", "Wang Wu"));
        userMap.put("user04", new UserDTO("4", "Zhao Liu"));
    }

    /**
     * Get or load a cache entry.
     */
    @Cacheable(key = "'cache_user_id_' + #userId", value = "userCache")
    public UserDTO queryUser(String userId) {
        UserDTO userDTO = userMap.get(userId);
        try {
            Thread.sleep(1000); // simulate the time taken to load data
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        logger.info("load data: {}", userDTO);
        return userDTO;
    }

    /**
     * Get or load a cache entry.
     *
     * Note: since the first-level cache is implemented on top of Caffeine,
     * Caffeine's own synchronization mechanism is used.
     * sync=true means cache entries are loaded synchronously in concurrent scenarios.
     * With sync=true, entries are obtained or loaded via get(Object key, Callable
     * valueLoader); the valueLoader (the logic that loads the entry) is cached, so
     * when CaffeineCache refreshes expired entries on schedule, expired entries are
     * reloaded.
     * With sync=false, entries are obtained via get(Object key); since there is no
     * valueLoader, when CaffeineCache refreshes expired entries on schedule, expired
     * entries are simply evicted.
     */
    @Cacheable(value = "userCache", key = "#userId", sync = true)
    public List<UserDTO> queryUserSyncList(String userId) {
        UserDTO userDTO = userMap.get(userId);
        List<UserDTO> list = new ArrayList<>();
        list.add(userDTO);
        logger.info("load data: {}", list);
        return list;
    }

    /**
     * Update the cache.
     */
    @CachePut(value = "userCache", key = "#userId")
    public UserDTO putUser(String userId, UserDTO userDTO) {
        return userDTO;
    }

    /**
     * Evict the cache.
     */
    @CacheEvict(value = "userCache", key = "#userId")
    public String evictUserSync(String userId) {
        return userId;
    }
}

That is the full content of "How to implement a Caffeine + Redis two-level cache based on Spring Cache". Thank you for reading! I hope sharing it has helped you.