2025-04-01 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article introduces how to implement a Redis multi-level cache in Java, walking through a practical example. The steps are simple, fast, and practical; I hope it helps you solve this problem.
I. Multi-level cache
1. Traditional caching scheme
After a request arrives at Tomcat, it first queries Redis for the cached value; on a miss, it falls back to MySQL.
2. Multi-level caching scheme
Tomcat can handle far fewer concurrent requests than Redis, so Tomcat becomes the bottleneck.
A multi-level scheme adds a cache at each step of request handling, reducing the pressure on Tomcat and improving overall service performance.
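The read path described above can be sketched as an ordered chain of fallbacks. This is a minimal, self-contained illustration: the Maps standing in for Redis and MySQL, and all names, are illustrative assumptions, not part of the article's code.

```java
import java.util.HashMap;
import java.util.Map;

// Tiered read path: JVM-local cache first, then Redis, then MySQL.
// Each miss falls through to the next tier and backfills on the way out.
public class TieredCacheDemo {
    static Map<String, String> localCache = new HashMap<>(); // JVM process cache
    static Map<String, String> redis = new HashMap<>();      // stands in for Redis
    static Map<String, String> mysql = new HashMap<>();      // stands in for MySQL

    static String get(String key) {
        String v = localCache.get(key);
        if (v != null) return v;                 // tier 1 hit: no network cost
        v = redis.get(key);
        if (v == null) {                         // tier 2 miss: fall back to the DB
            v = mysql.get(key);
            if (v != null) redis.put(key, v);    // backfill the shared cache
        }
        if (v != null) localCache.put(key, v);   // backfill the local cache
        return v;
    }

    public static void main(String[] args) {
        mysql.put("item:1", "laptop");
        System.out.println(get("item:1")); // misses both caches, reads the DB
        System.out.println(get("item:1")); // now served from the local cache
    }
}
```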
II. JVM local cache
A local cache stores data in memory, so reads are fast; it can greatly reduce database accesses and relieve pressure on the database.
Distributed cache, such as Redis
-advantages: large storage capacity, good reliability, can be shared across the cluster
-disadvantages: accessing the cache incurs network overhead
-scenario: data that needs to be shared across the cluster, is large in volume, and requires high reliability
Process-local cache, such as HashMap or GuavaCache
-advantages: reads local memory, no network overhead, faster
-disadvantages: limited storage capacity, low reliability (e.g. lost after a restart), cannot be shared across the cluster
-scenario: high performance requirements and a small amount of cached data
1. Practical case
Caffeine is a high-performance, Java 8-based local caching library that provides near-optimal hit rates.
It is currently what Spring's internal cache uses.
Maven dependency:

<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>3.0.5</version>
</dependency>

package com.erick.cache;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

import java.time.Duration;

public final class CacheUtil {

    private static final int expireSeconds = 2;
    public static Cache<String, String> cacheWithExpireSeconds;

    private static final int maxPairs = 1;
    public static Cache<String, String> cacheWithMaxPairs;

    static {
        /* Time-based policy: entries expire expireSeconds after they are written */
        cacheWithExpireSeconds = Caffeine.newBuilder()
                .expireAfterWrite(Duration.ofSeconds(expireSeconds))
                .build();

        /* Size-based policy: evict once maximumSize is reached
         * 1. Eviction is not immediate; it happens shortly afterwards
         * 2. Previously stored entries are evicted first */
        cacheWithMaxPairs = Caffeine.newBuilder()
                .maximumSize(maxPairs)
                .build();
    }

    /* Get data through the cache:
     * 1. If the key is cached, return it directly
     * 2. Otherwise query the database, cache the result, and return it */
    public static String getKeyWithExpire(String key) {
        return cacheWithExpireSeconds.get(key, k -> getResultFromDB());
    }

    public static String getKeyWithMaxPair(String key) {
        return cacheWithMaxPairs.get(key, k -> getResultFromDB());
    }

    private static String getResultFromDB() {
        System.out.println("database query");
        return "db result";
    }
}

package com.erick.cache;

import java.util.concurrent.TimeUnit;

public class Test {

    @org.junit.Test
    public void test01() throws InterruptedException {
        CacheUtil.cacheWithExpireSeconds.put("name", "erick");
        System.out.println(CacheUtil.getKeyWithExpire("name")); // hit: "erick"
        TimeUnit.SECONDS.sleep(3);
        System.out.println(CacheUtil.getKeyWithExpire("name")); // expired: reloaded from the DB
    }

    @org.junit.Test
    public void test02() throws InterruptedException {
        CacheUtil.cacheWithMaxPairs.put("name", "erick");
        CacheUtil.cacheWithMaxPairs.put("age", "12");
        System.out.println(CacheUtil.getKeyWithMaxPair("name"));
        System.out.println(CacheUtil.getKeyWithMaxPair("age"));
        TimeUnit.SECONDS.sleep(2);
        System.out.println(CacheUtil.getKeyWithMaxPair("name")); // earlier entry evicted (maximumSize reached): reloaded from the DB
        System.out.println(CacheUtil.getKeyWithMaxPair("age"));
    }
}

III. Cache consistency
1. Common scenarios
1.1 Set a validity period
Set a validity period on the cached entry; it is deleted automatically when it expires and repopulated on the next query.
Advantages: simple and convenient
Disadvantages: poor timeliness; the cache may be inconsistent with the database until it expires
Scenario: business data that is updated infrequently and has low timeliness requirements
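The validity-period idea can be sketched with an entry that carries an expiry timestamp and is reloaded on the next query after it lapses. A minimal sketch; the class name, TTL value, and `loadFromDb` stand-in are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// TTL-based consistency: each entry records when it expires; an expired
// (or missing) entry is reloaded from the database on the next read.
public class TtlCache {
    static class Entry {
        final String value;
        final long expiresAt;
        Entry(String value, long ttlMillis) {
            this.value = value;
            this.expiresAt = System.currentTimeMillis() + ttlMillis;
        }
    }

    private final Map<String, Entry> store = new HashMap<>();
    private final long ttlMillis;

    TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    String get(String key) {
        Entry e = store.get(key);
        if (e == null || System.currentTimeMillis() >= e.expiresAt) {
            // expired or missing: reload; note the cache may have served
            // stale data up to this point (the "poor timeliness" drawback)
            String fresh = loadFromDb(key);
            store.put(key, new Entry(fresh, ttlMillis));
            return fresh;
        }
        return e.value;
    }

    String loadFromDb(String key) { return "db:" + key; } // stands in for MySQL
}
```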
1.2 Synchronous double write
Modify the cache directly in the same code path that modifies the database.
Advantages: strong consistency between cache and database
Disadvantages: code intrusion, high coupling
Scenario: cached data with high consistency and timeliness requirements
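Synchronous double write can be sketched as one update method that touches both stores. The Map stand-ins for MySQL and Redis, and all names, are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Synchronous double write: the business code that updates the database
// also updates the cache in the same call, which is why this approach is
// strongly consistent but intrusive and tightly coupled.
public class DoubleWriteService {
    static Map<String, String> database = new HashMap<>(); // stands in for MySQL
    static Map<String, String> cache = new HashMap<>();    // stands in for Redis

    // In practice both writes would sit inside one transaction.
    static void updateProduct(String id, String value) {
        database.put(id, value); // 1. write the database
        cache.put(id, value);    // 2. write the cache in the same code path
    }
}
```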
1.3 Asynchronous Notification
Send an event notification when the database is modified; the relevant services listen for it and update their cached data.
Advantages: low coupling; can notify multiple cache services at once
Disadvantages: weaker timeliness, so there may be a window of cache inconsistency
Scenario: moderate timeliness requirements, with multiple services that need to be kept in sync
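The notification pattern can be sketched with a tiny in-process event bus: the writer only publishes "key changed" after the database write, and each subscribed cache service refreshes itself. The listener list and all names are illustrative assumptions; in practice the bus would be a message queue or Canal reading the binlog.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Asynchronous notification: the writer is decoupled from the caches and
// merely publishes an event; any number of listeners can react to it.
public class EventNotifyDemo {
    static Map<String, String> database = new HashMap<>(); // stands in for MySQL
    static Map<String, String> cache = new HashMap<>();    // one subscribed cache
    static List<Consumer<String>> listeners = new ArrayList<>();

    static void update(String id, String value) {
        database.put(id, value);
        for (Consumer<String> l : listeners) l.accept(id); // publish "id changed"
    }

    public static void main(String[] args) {
        // a cache service subscribes: on change, re-read from the database
        listeners.add(id -> cache.put(id, database.get(id)));
        update("p1", "v1");
        System.out.println(cache.get("p1"));
    }
}
```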
2. Asynchronous notification based on Canal
Canal is an open-source project from Alibaba, developed in Java.
It provides incremental data subscription and consumption based on parsing the database's incremental logs.
It builds on the idea of MySQL master-slave replication.
2.1 MySQL master-slave replication
2.2 How Canal works
Canal emulates the MySQL slave interaction protocol: it disguises itself as a MySQL slave and sends a dump request to the MySQL master.
The MySQL master receives the dump request and starts pushing its binary log to the slave (i.e. Canal).
Canal parses the binary log (originally a raw byte stream) into structured objects.
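The three steps above can be illustrated with a self-contained simulation: a "master" appends row changes to a log, and a Canal-like client replays that log into the cache. This is a simplified sketch under stated assumptions; the List-based binlog and all names are illustrative, while real Canal speaks the MySQL dump protocol and parses binary log events.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simulated binlog replay: the master records every change; the
// canal-like consumer tracks its replay position and applies each
// new event to the cache, keeping it in sync with the database.
public class BinlogReplayDemo {
    static class RowChange {
        final String key, value;
        RowChange(String key, String value) { this.key = key; this.value = value; }
    }

    static List<RowChange> binlog = new ArrayList<>(); // stands in for the binary log
    static Map<String, String> cache = new HashMap<>();
    static int position = 0; // how far the consumer has replayed

    static void masterWrite(String key, String value) {
        binlog.add(new RowChange(key, value)); // master logs every change
    }

    static void canalReplay() {
        while (position < binlog.size()) {     // consume newly pushed events
            RowChange rc = binlog.get(position++);
            cache.put(rc.key, rc.value);       // apply the change to the cache
        }
    }
}
```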
That concludes this introduction to implementing a Redis multi-level cache in Java. Thank you for reading.