2025-01-19 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report
This article walks through the steps for integrating Ehcache 3 with Spring Boot: dependency setup, configuration, cache preloading, update and query operations, and keeping the cache consistent with the database.
Preface
A legacy project in our department needed its Java version upgraded, and its cache-related operations were no longer supported by the original framework. After investigation, the common Java caching options came down to Ehcache and Redis. A single Redis value is capped at 512 MB, and values over 1 MB already hurt access performance. Our business systems need to cache list queries, which inevitably means filtering over large amounts of cached data. Ehcache supports tiered memory-plus-disk caching, so cache capacity is not a concern, and the framework therefore integrates Ehcache 3 in its preliminary version. The design structure is shown in the figure below.
Maven dependencies

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>org.ehcache</groupId>
    <artifactId>ehcache</artifactId>
</dependency>
```

Personalized configuration

```yaml
# Cache configuration
frame:
  cache:
    ehcache:
      heap: 1000
      offheap: 100
      disk: 500
      diskDir: tempfiles/cache/
```

```java
@Component
@ConfigurationProperties("frame.cache.ehcache")
public class EhcacheConfiguration {
    /**
     * ehcache heap size:
     * number of entries cached in JVM memory
     */
    private int heap;
    /**
     * ehcache offheap size:
     * off-heap memory size, in MB
     */
    private int offheap;
    /**
     * disk persistence directory
     */
    private String diskDir;
    /**
     * ehcache disk size:
     * size persisted to disk, in MB; only valid when diskDir is set
     */
    private int disk;

    public EhcacheConfiguration() {
        heap = 1000;
        offheap = 100;
        disk = 500;
        diskDir = "tempfiles/cache/";
    }
    // getters and setters omitted
}
```

Injecting the configuration in code
Because Spring Boot's cache auto-configuration prefers Redis by default, the Ehcache CacheManager bean has to be declared manually. In addition, Ehcache values must implement the Serializable interface (a plain Object will not do), so a cache base class is declared here and every cached value object must extend it.
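To see why the Serializable requirement matters, here is a minimal JDK-only sketch: Ehcache's off-heap and disk tiers serialize values, so any value class must survive a serialization round trip. The `UserInfo` class is hypothetical, standing in for a real cached entity that extends the article's `BaseSystemObject`.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Marker base class, as in the article: all cached values extend it.
class BaseSystemObject implements Serializable {
}

// Hypothetical cached entity used only for this illustration.
class UserInfo extends BaseSystemObject {
    String name;

    UserInfo(String name) {
        this.name = name;
    }
}

public class SerializableCheck {
    // Round-trip an object through Java serialization, as the off-heap/disk tiers do.
    public static Object roundTrip(Object value) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(value);
            }
            try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                return ois.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("value is not serializable", e);
        }
    }

    public static void main(String[] args) {
        UserInfo copy = (UserInfo) roundTrip(new UserInfo("alice"));
        System.out.println(copy.name);
    }
}
```

A value type that merely declares fields of non-serializable types would fail this round trip at runtime, which is exactly the failure mode the shared base class is meant to prevent.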
```java
public class BaseSystemObject implements Serializable {
}
```

```java
@Configuration
@EnableCaching
public class EhcacheConfig {
    @Autowired
    private EhcacheConfiguration ehcacheConfiguration;
    @Autowired
    private ApplicationContext context;

    @Bean(name = "ehCacheManager")
    public CacheManager getCacheManager() {
        // resource pools: heap, off-heap, and disk persistence
        ResourcePoolsBuilder resourcePoolsBuilder = ResourcePoolsBuilder.newResourcePoolsBuilder()
                // on-heap cache size, in entries
                .heap(ehcacheConfiguration.getHeap(), EntryUnit.ENTRIES)
                // off-heap cache size
                .offheap(ehcacheConfiguration.getOffheap(), MemoryUnit.MB)
                // file cache size
                .disk(ehcacheConfiguration.getDisk(), MemoryUnit.MB);
        // build the cache configuration; entries never expire
        ExpiryPolicy expiryPolicy = ExpiryPolicyBuilder.noExpiration();
        CacheConfiguration config = CacheConfigurationBuilder
                .newCacheConfigurationBuilder(String.class, BaseSystemObject.class, resourcePoolsBuilder)
                .withExpiry(expiryPolicy)
                .build();
        CacheManagerBuilder cacheManagerBuilder = CacheManagerBuilder.newCacheManagerBuilder()
                .with(CacheManagerBuilder.persistence(ehcacheConfiguration.getDiskDir()));
        return cacheManagerBuilder.build(true);
    }
}
```

Cache operations

Cache preloading
The framework uses a cache double-write strategy: the database and the cache are written together. That means the database data has to be loaded into the cache when the system starts.
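The double-write idea can be sketched with plain JDK collections: one map stands in for the database table and another for the Ehcache cache (both names are illustrative, not the article's classes). Every save hits the database first and then mirrors into the cache, and startup preloading copies the whole table into the cache so reads never miss.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the double-write strategy; HashMaps stand in for
// the database table and the Ehcache cache.
public class DoubleWriteSketch {
    private final Map<String, String> database = new HashMap<>();
    private final Map<String, String> cache = new HashMap<>();

    // On startup, preload every database row into the cache.
    public void preload() {
        cache.putAll(database);
    }

    // Double write: persist to the database first, then mirror into the cache.
    public void save(String key, String value) {
        database.put(key, value);
        cache.put(key, value);
    }

    // Reads are served from the cache alone.
    public String read(String key) {
        return cache.get(key);
    }
}
```

Because reads only ever touch the cache, the preload step is what makes the strategy safe after a restart: without it, data written before the restart would be invisible.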
A custom annotation is declared for single-table caches, and a custom interface is defined for personalized caches.
```java
@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
public @interface HPCache {
}

public interface IHPCacheInitService {
    String getCacheName();
    void initCache();
}
```
Cache initialization runs synchronously during system startup: entity classes carrying the annotation are scanned, and all beans implementing the interface are collected.
```java
@Async
public void initCache(Class runtimeClass, List<String> extraPackageNameList) {
    List<Class<?>> cacheEntityList = new ArrayList<>();
    if (!runtimeClass.getPackage().getName().equals(Application.class.getPackage().getName())) {
        cacheEntityList.addAll(ScanUtil.getAllClassByPackageName_Annotation(runtimeClass.getPackage(), HPCache.class));
    }
    for (String packageName : extraPackageNameList) {
        cacheEntityList.addAll(ScanUtil.getAllClassByPackageName_Annotation(packageName, HPCache.class));
    }
    for (Class clazz : cacheEntityList) {
        cacheManagerBuilder = cacheManagerBuilder.withCache(clazz.getName(), config);
    }
    // custom caches
    Map<String, IHPCacheInitService> res = context.getBeansOfType(IHPCacheInitService.class);
    for (Map.Entry<String, IHPCacheInitService> en : res.entrySet()) {
        IHPCacheInitService service = en.getValue();
        cacheManagerBuilder = cacheManagerBuilder.withCache(service.getCacheName(), config);
    }
}
```

Update operations
Obtain the Ehcache manager bean manually and call its put, replace, and remove methods.
```java
private CacheManager cacheManager = (CacheManager) SpringBootBeanUtil.getBean("ehCacheManager");

public void executeUpdateOperation(String cacheName, String key, BaseSystemObject value) {
    Cache cache = cacheManager.getCache(cacheName, String.class, BaseSystemObject.class);
    if (cache.containsKey(key)) {
        cache.replace(key, value);
    } else {
        cache.put(key, value);
    }
}

public void executeDeleteOperation(String cacheName, String key) {
    Cache cache = cacheManager.getCache(cacheName, String.class, BaseSystemObject.class);
    cache.remove(key);
}
```

Query operations
Single-table caches are stored as primary key → object; personalized caches are stored as key → object. A single record is fetched with the getCache method, while a list query has to pull the entire cache and filter it by condition.
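Since Ehcache has no query language, a list query boils down to iterating every cached entry and applying a predicate. Here is a standalone JDK-only sketch of that pattern; a HashMap stands in for the Ehcache cache, and the `Order` class is hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of a cached list query: iterate the whole cache and filter in memory.
public class ListQuerySketch {
    // Hypothetical cached entity used only for this illustration.
    public static class Order {
        public final String id;
        public final String status;

        public Order(String id, String status) {
            this.id = id;
            this.status = status;
        }
    }

    private final Map<String, Order> cache = new HashMap<>();

    public void put(String key, Order value) {
        cache.put(key, value);
    }

    // Keep every cached entry matching the condition.
    public List<Order> query(Predicate<Order> condition) {
        List<Order> result = new ArrayList<>();
        for (Order o : cache.values()) {
            if (condition.test(o)) {
                result.add(o);
            }
        }
        return result;
    }
}
```

This full-scan cost is the trade-off the preface alludes to: it is acceptable precisely because the tiered heap/off-heap/disk pools let the whole data set live in the cache.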
```java
public Object getCache(String cacheName, String key) {
    Cache cache = cacheManager.getCache(cacheName, String.class, BaseSystemObject.class);
    return cache.get(key);
}

public List getAllCache(String cacheName) {
    List result = new ArrayList();
    Cache cache = cacheManager.getCache(cacheName, String.class, BaseSystemObject.class);
    Iterator iter = cache.iterator();
    while (iter.hasNext()) {
        Cache.Entry entry = (Cache.Entry) iter.next();
        result.add(entry.getValue());
    }
    return result;
}
```

Keeping the cache consistent with the database
Database and cache operations run in the order "database first, cache second". With a database transaction open, a single operation on a single record is safe. For combined operations, however, if the database rolls back after some cache writes have already happened, the cache does not roll back and the data diverges. For example, with the execution order dbop1 → cacheop1 → dbop2 → cacheop2, if dbop2 throws, cacheop1 has already modified the cache.
The scheme chosen here is to run all cache operations only after every database operation has completed. Its weakness is that a failed cache operation still causes the inconsistency described above, but in practice a pure in-memory cache operation fails only rarely, so we are optimistic about it; a manual cache reset function is provided as a fallback, which makes this an acceptable compromise. An implementation of the scheme follows.
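The core of the scheme can be sketched without Spring or Redis: cache operations issued inside a transaction are buffered in a queue (a ThreadLocal deque here, where the article uses a Redis list keyed by thread id) and replayed only after the database commit succeeds; on rollback the queue is simply discarded and the cache is never touched. All class and method names below are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of "database first, cache after commit": buffer cache writes
// per thread and apply them only once the database transaction succeeds.
public class DeferredCacheOps {
    private final Map<String, String> cache = new HashMap<>();
    private final ThreadLocal<Deque<String[]>> pending = ThreadLocal.withInitial(ArrayDeque::new);
    private final ThreadLocal<Boolean> inTransaction = ThreadLocal.withInitial(() -> false);

    // Mark the start of a cache "transaction" (the article's aspect does this).
    public void begin() {
        inTransaction.set(true);
    }

    // Inside a transaction the write is queued; outside it applies immediately.
    public void put(String key, String value) {
        if (inTransaction.get()) {
            pending.get().add(new String[]{key, value});
        } else {
            cache.put(key, value);
        }
    }

    // Called after the database commit succeeds: replay the queued writes.
    public void commit() {
        for (String[] op : pending.get()) {
            cache.put(op[0], op[1]);
        }
        pending.get().clear();
        inTransaction.set(false);
    }

    // Called on database rollback: discard queued writes, cache untouched.
    public void rollback() {
        pending.get().clear();
        inTransaction.set(false);
    }

    public String get(String key) {
        return cache.get(key);
    }
}
```

The article's Redis-backed queue adds two things this sketch omits: the buffered operations survive a process crash, and an expiry on the Redis key prevents abandoned queues from leaking.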
Declare a custom cache transaction annotation
```java
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
public @interface CacheTransactional {
}
```
Declare an aspect: before a method annotated with CacheTransactional executes, write a marker to Redis; after the method body finishes, execute the buffered cache operations in one batch.
Pending cache operations are serialized into a queue keyed by thread id and stored in Redis, with a unified method to replay them.
```java
public class CacheExecuteModel implements Serializable {
    private String obejctClazzName;
    private String cacheName;
    private String key;
    private BaseSystemObject value;
    private String executeType;
}
```

```java
private CacheManager cacheManager = (CacheManager) SpringBootBeanUtil.getBean("ehCacheManager");

@Autowired
private RedisUtil redisUtil;

public void putCacheIntoTransition() {
    String threadID = Thread.currentThread().getName();
    System.out.println("init threadid:" + threadID);
    CacheExecuteModel cacheExecuteModel = new CacheExecuteModel();
    cacheExecuteModel.setExecuteType("option");
    redisUtil.redisTemplateSetForCollection(threadID, cacheExecuteModel, GlobalEnum.RedisDBNum.Cache.get_value());
    redisUtil.setExpire(threadID, 5, TimeUnit.MINUTES, GlobalEnum.RedisDBNum.Cache.get_value());
}

public void putCache(String cacheName, String key, BaseSystemObject value) {
    if (checkCacheOptinionInTransition()) {
        String threadID = Thread.currentThread().getName();
        CacheExecuteModel cacheExecuteModel = new CacheExecuteModel("update", cacheName, key, value.getClass().getName(), value);
        redisUtil.redisTemplateSetForCollection(threadID, cacheExecuteModel, GlobalEnum.RedisDBNum.Cache.get_value());
        redisUtil.setExpire(threadID, 5, TimeUnit.MINUTES, GlobalEnum.RedisDBNum.Cache.get_value());
    } else {
        executeUpdateOperation(cacheName, key, value);
    }
}

public void deleteCache(String cacheName, String key) {
    if (checkCacheOptinionInTransition()) {
        String threadID = Thread.currentThread().getName();
        CacheExecuteModel cacheExecuteModel = new CacheExecuteModel("delete", cacheName, key);
        redisUtil.redisTemplateSetForCollection(threadID, cacheExecuteModel, GlobalEnum.RedisDBNum.Cache.get_value());
        redisUtil.setExpire(threadID, 5, TimeUnit.MINUTES, GlobalEnum.RedisDBNum.Cache.get_value());
    } else {
        executeDeleteOperation(cacheName, key);
    }
}

public void executeOperation() {
    String threadID = Thread.currentThread().getName();
    if (checkCacheOptinionInTransition()) {
        List executeList = redisUtil.redisTemplateGetForCollectionAll(threadID, GlobalEnum.RedisDBNum.Cache.get_value());
        for (LinkedHashMap obj : executeList) {
            String executeType = ConvertOp.convert2String(obj.get("executeType"));
            if (executeType.contains("option")) {
                continue;
            }
            String obejctClazzName = ConvertOp.convert2String(obj.get("obejctClazzName"));
            String cacheName = ConvertOp.convert2String(obj.get("cacheName"));
            String key = ConvertOp.convert2String(obj.get("key"));
            LinkedHashMap valueMap = (LinkedHashMap) obj.get("value");
            String valueMapJson = JSON.toJSONString(valueMap);
            try {
                Object valueInstance = JSON.parseObject(valueMapJson, Class.forName(obejctClazzName));
                if (executeType.equals("update")) {
                    executeUpdateOperation(cacheName, key, (BaseSystemObject) valueInstance);
                } else if (executeType.equals("delete")) {
                    executeDeleteOperation(cacheName, key);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        redisUtil.redisTemplateRemove(threadID, GlobalEnum.RedisDBNum.Cache.get_value());
    }
}

public boolean checkCacheOptinionInTransition() {
    String threadID = Thread.currentThread().getName();
    System.out.println("check threadid:" + threadID);
    return redisUtil.isValid(threadID, GlobalEnum.RedisDBNum.Cache.get_value());
}
```

This concludes the walkthrough of integrating Ehcache 3 with Spring Boot. Pairing the theory above with hands-on practice is the best way to make it stick, so give it a try.
© 2024 shulou.com SLNews company. All rights reserved.