How to support a distributed MyBatis second-level cache through Redis in Spring Boot 2.0
Recently, the team lead asked for a MyBatis second-level cache to be added to the project. Since the project is a distributed microservice deployed on multiple nodes, and the company's caching middleware is Redis, it was natural to use Redis as the distributed cache backend, which avoids problems such as inconsistent cache data that arise when the native MyBatis second-level cache is used directly. Below is a brief walkthrough of a Redis-based MyBatis second-level cache implementation, the concepts involved, and a few pitfalls encountered along the way.
1. First-level cache
The first-level cache works at the SqlSession level, and MyBatis enables it by default. For the same SqlSession object, when the same Mapper method is called multiple times with the same parameters, the SQL is executed only once; the result is cached after the first query, and as long as the cache is not flushed or expired, subsequent calls fetch the data from the cache instead of hitting the database. Caches are isolated between different SqlSession instances.
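As a minimal sketch of this behavior (UserMapper, User and selectAll are placeholder names for an ordinary mapper and entity), the second call below is answered from the first-level cache and issues no additional SQL:

import java.util.List;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

public class FirstLevelCacheDemo {

    // UserMapper, User and selectAll() are hypothetical names used only for illustration.
    static void demo(SqlSessionFactory sqlSessionFactory) {
        try (SqlSession session = sqlSessionFactory.openSession()) {
            UserMapper mapper = session.getMapper(UserMapper.class);
            List<User> first = mapper.selectAll();   // executes SQL and caches the result in this SqlSession
            List<User> second = mapper.selectAll();  // same statement and parameters: served from the first-level cache
        }
        // A new SqlSession has its own first-level cache and will hit the database again.
    }
}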
In real projects, however, the first-level cache has serious limitations. Projects are usually built with Spring and MyBatis integrated: transaction management lives in the service layer, and each query goes through its own SqlSession (created and injected automatically via the MapperScannerConfigurer). The SqlSession is closed after each query and its cached data is discarded, so after Spring integration the first-level cache is essentially meaningless unless a transaction is open.
2. Second-level cache
The second-level cache is a Mapper-level cache, and MyBatis does not enable it by default. Its scope is the mapper namespace: two mappers with the same namespace share the same cache region. It works across SqlSession instances, i.e. multiple SqlSessions can share one mapper's cache. The default implementation stores data locally in a HashMap-based PerpetualCache, but custom third-party stores such as Ehcache, Redis or Memcached are also supported, which makes distributed caching possible. When the cache is stored locally in a HashMap, the key is built from a hashCode, the sqlId and the SQL statement (the query parameters appear to participate as well; the selectAll used in this demo does not make that obvious), and the key is similar for the other third-party stores.
Note: after the second-level cache is enabled
All select statements in the mapping file will be cached.
All insert, update and delete statements in the mapping file will flush the cache.
By default the cache uses an LRU (least recently used) algorithm for eviction.
The cache is not flushed on any time schedule by default (there is no flush interval).
The cache will store up to 1024 references to lists or objects (whatever the query methods return).
The cache is treated as a read/write cache, meaning the retrieved objects are not shared and can be safely modified by the caller without interfering with potential modifications by other callers or threads.
Implementation steps:
1. Set the global cacheEnabled switch, which defaults to true (in practice it also works if left unset).
Create a mybatis-config.xml configuration file.
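A minimal mybatis-config.xml for this step could look like the following (cacheEnabled defaults to true, so this merely makes the intent explicit):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration
        PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
    <settings>
        <!-- global switch for the second-level cache; defaults to true -->
        <setting name="cacheEnabled" value="true"/>
    </settings>
</configuration>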
Load this configuration when MyBatis's SqlSessionFactory is configured:
factory.setConfigLocation(new ClassPathResource("mybatis-config.xml"));
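In Spring Java config, this loading could be wired roughly as follows (a sketch; the bean method name is an assumption, setConfigLocation is the relevant call):

import javax.sql.DataSource;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;

@Configuration
public class MybatisConfig {

    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSource dataSource) throws Exception {
        SqlSessionFactoryBean factory = new SqlSessionFactoryBean();
        factory.setDataSource(dataSource);
        // load the cacheEnabled switch from mybatis-config.xml
        factory.setConfigLocation(new ClassPathResource("mybatis-config.xml"));
        return factory.getObject();
    }
}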
Note: enabling the cache switch through mybatis-config.xml was verified to toggle the cache on and off correctly, but setting the property mybatis.configuration.cache-enabled=true did not take effect; the reason remains to be explored.
2. Enable the cache tag in mapper.xml
This is the key step for enabling the second-level cache. The configuration below uses MyBatis's built-in local cache and applies to every query in the mapper; if a particular statement should not be cached, set useCache="false" on it.
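A sketch of what that configuration looks like in mapper.xml (the namespace, statement id and result type are placeholders):

<mapper namespace="com.xxx.xxx.mapper.UserMapper">

    <!-- default local second-level cache (PerpetualCache backed by a HashMap) -->
    <cache/>

    <!-- an individual statement can opt out of the cache -->
    <select id="selectByUserId" resultType="com.xxx.xxx.model.User" useCache="false">
        SELECT * FROM table WHERE user_id = #{userId}
    </select>
</mapper>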
To use a custom third-party store, implement the org.apache.ibatis.cache.Cache interface and reference the implementation in the cache tag. As for obtaining Spring's ApplicationContext, it is enough to note here that the context can be reached by implementing the ApplicationContextAware interface.
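A minimal sketch of such a context holder, assuming it is registered as a Spring bean so the cache class below can call SpringContextHolder.getBean(...):

import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

@Component
public class SpringContextHolder implements ApplicationContextAware {

    private static ApplicationContext applicationContext;

    @Override
    public void setApplicationContext(ApplicationContext context) throws BeansException {
        // keep a static reference so non-Spring-managed objects can look up beans
        applicationContext = context;
    }

    @SuppressWarnings("unchecked")
    public static <T> T getBean(String name) {
        return (T) applicationContext.getBean(name);
    }
}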
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

import lombok.extern.slf4j.Slf4j;
import org.apache.ibatis.cache.Cache;
import org.springframework.dao.DataAccessException;
import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.core.RedisCallback;

@Slf4j
public class MybatisRedisCache implements Cache {

    // RedisTemplate instance wrapper class
    private RedisUtilHandler redisUtilHandler;

    private final String id;

    private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();

    public MybatisRedisCache(final String id) {
        if (id == null) {
            throw new IllegalArgumentException("Cache instances require an ID");
        }
        this.id = id;
    }

    // Get the redis operation class through the Spring context.
    // My understanding: MyBatis loads part of its configuration (including Cache instances)
    // at the interceptor layer, earlier than some beans of the IoC container are created,
    // so the proxy object cannot be injected by Spring directly; instead the bean is
    // fetched lazily from the context when it is first needed.
    private RedisUtilHandler getRedisHandler() {
        if (redisUtilHandler == null) {
            redisUtilHandler = SpringContextHolder.getBean("redisUtilHandler");
        }
        return redisUtilHandler;
    }

    @Override
    public String getId() {
        return this.id;
    }

    @Override
    public void putObject(Object key, Object value) {
        try {
            RedisUtilHandler handler = getRedisHandler();
            handler.setCache(key.toString(), value, 1, TimeUnit.DAYS);
        } catch (Exception e) {
            log.error("putObject Exception: {}", e);
        }
    }

    @Override
    public Object getObject(Object key) {
        Object result = null;
        try {
            RedisUtilHandler handler = getRedisHandler();
            result = handler.getCache(key.toString(), Object.class);
        } catch (Exception e) {
            log.error("getObject Exception: key###{}", key, e);
        }
        return result;
    }

    @Override
    public Object removeObject(Object key) {
        Object result = null;
        try {
            RedisUtilHandler handler = getRedisHandler();
            handler.delete(key.toString());
        } catch (Exception e) {
            log.error("removeObject Exception: {}", e);
        }
        return result;
    }

    @Override
    public void clear() {
        try {
            RedisUtilHandler handler = getRedisHandler();
            handler.flushCache(id);
        } catch (Exception e) {
            log.error("clear Exception: {}", e);
        }
    }

    @Override
    public int getSize() {
        RedisUtilHandler handler = getRedisHandler();
        Long size = (Long) handler.getInstance().execute(new RedisCallback<Long>() {
            @Override
            public Long doInRedis(RedisConnection connection) throws DataAccessException {
                return connection.dbSize();
            }
        });
        return size.intValue();
    }

    @Override
    public ReadWriteLock getReadWriteLock() {
        return this.readWriteLock;
    }
}

The mapper.xml then points the cache at the custom implementation and its queries are cached (statement ids and result types here are illustrative placeholders):

<cache type="com.xxx.xxx.configs.MybatisRedisCache"/>

<select id="selectAll" resultType="com.xxx.xxx.model.User">
    SELECT * FROM table
</select>

<select id="selectByUserId" resultType="com.xxx.xxx.model.User">
    SELECT * FROM table WHERE user_id = #{userId}
</select>
3. Make the model entity classes serializable
public class User implements Serializable {
    private static final long serialVersionUID = -6596381461353742505L;
    // ...
}
This article uses Redis as the storage medium, and the serialization of keys and values is specified when Redis is configured, so implementing Serializable on the entity classes is optional here (some even say it is not required for the local cache either).
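For reference, a sketch of how those key/value serializers are usually set when configuring the RedisTemplate (the RedisUtilHandler wrapper used above is assumed to delegate to such a template):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisConfig {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        // keys are stored as plain strings, values as JSON, so cached entities
        // do not need to rely on Java serialization
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}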
Sample execution results:
Pitfalls:
In step 2, I used the cache tag to enable the second-level cache. Alternatively, the equivalent MyBatis annotation can be placed on the mapper interface to enable it, as follows:
@CacheNamespace(implementation = com.xxx.xxx.configs.MybatisRedisCache.class)
However, the annotation and the XML tag cannot work at the same time: when the annotation is used, caching only takes effect for SQL bound with @Select annotations on the mapper interface methods; likewise, the XML tag only caches queries defined in mapper.xml. If both exist at the same time, only one of them takes effect, which you can verify yourself.
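For example, a purely annotation-driven mapper would look roughly like this (interface, method and entity names are placeholders):

import java.util.List;
import org.apache.ibatis.annotations.CacheNamespace;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Select;

@Mapper
@CacheNamespace(implementation = com.xxx.xxx.configs.MybatisRedisCache.class)
public interface UserMapper {

    // cached, because the statement is bound with @Select in the same namespace as @CacheNamespace
    @Select("SELECT * FROM table")
    List<User> selectAll();
}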
Defect analysis:
MyBatis's built-in cache does not naturally support distribution, so a third-party cache has to be integrated; this article plugs in Redis as a custom cache to solve that problem.
Because the second-level cache works at the mapper level and is isolated by namespace, join queries can suffer from dirty reads.
This happens when a join query is executed: the tables participating in the join may be cached under one or more namespaces, and what gets cached is a snapshot of the joined data at query time. If the data of any of those tables is later updated under its own namespace, the second-level cache holding the join result is unaware of the change, which leads to dirty reads.
Possible workarounds:
1. For join queries, keep all associated tables under the same namespace (see the cache-ref sketch after this list). This is hard to guarantee in practice.
2. Shorten the cache validity time. Since this is a Redis-backed third-party cache, the expiration time can be set freely; keep it as short as possible without hurting business performance. In truth, though, this only treats the symptom rather than the root cause.
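For workaround 1, MyBatis's cache-ref element lets one mapper reuse another mapper's cache namespace, so statements against either table flush the same cache region (mapper names below are placeholders):

<!-- OrderMapper.xml: share UserMapper's cache region instead of defining its own -->
<mapper namespace="com.xxx.xxx.mapper.OrderMapper">
    <cache-ref namespace="com.xxx.xxx.mapper.UserMapper"/>
    ...
</mapper>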
At this point, the MyBatis second-level cache implementation is reasonably complete. Given the shortcomings above, do we really need the second-level cache in our projects? As others have pointed out, MyBatis disables it by default, so it deserves careful consideration. If there is a genuine need, such as a large single table with a high query frequency whose data is updated rarely enough that the cache expiry can outpace the change frequency, the second-level cache is still a good choice. For other, more complex scenarios, it is better to build your own business-level cache or go straight to Spring Cache.
Appendix: description of the cache parameters configurable in the mapper:
eviction (eviction policy): defaults to LRU.
LRU (least recently used): removes objects that have not been used for the longest time.
FIFO (first in, first out): removes objects in the order they entered the cache.
SOFT (soft references): removes objects based on the garbage collector state and the rules of soft references.
WEAK (weak references): removes objects more aggressively, based on the garbage collector state and the rules of weak references.
flushInterval (refresh interval): can be set to any positive integer, representing a period in milliseconds. By default it is not set, i.e. there is no refresh interval and the cache is only flushed when statements are invoked.
size (number of references): can be set to any positive integer; keep in mind the size of the objects being cached and the memory available in your runtime environment. The default is 1024.
readOnly (read-only): can be set to true or false. A read-only cache returns the same cached object instance to all callers, so those objects must not be modified; this offers a significant performance advantage. A read/write cache returns a copy of the cached object (via serialization), which is slower but safe, and therefore the default is false.
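Putting these attributes together, a fully parameterized cache element looks like this sketch:

<!-- FIFO eviction, flush every 60 seconds, at most 512 cached references, read/write copies -->
<cache eviction="FIFO" flushInterval="60000" size="512" readOnly="false"/>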