In this issue, the editor brings you an overview of how to keep the data in Redis and MySQL consistent. The article is rich in content and analyzes the problem from a professional point of view; I hope you will get something out of reading it.
1. The first scheme: the delayed double delete strategy
Perform a redis.del(key) operation both before and after writing to the database, and set a reasonable sleep interval between the two deletes.
The pseudocode is as follows:

public void write(String key, Object data) throws InterruptedException {
    redis.delKey(key);     // 1) delete the cache first
    db.updateData(data);   // 2) write to the database
    Thread.sleep(500);     // 3) sleep for 500 milliseconds
    redis.delKey(key);     // 4) delete the cache again
}
2. The specific steps are:
1) Delete the cache first
2) Write to the database
3) Sleep for 500 milliseconds
4) Delete the cache again
So how is this 500 milliseconds determined? How long should the thread sleep?
You need to evaluate how long the read-data business logic of your own project takes. The purpose of the sleep is to make sure the read request finishes first, so that the second delete can remove any dirty cache data the concurrent read request may have written back.
Of course, this strategy should also account for the time Redis and the database spend on master-slave synchronization. The final sleep time when writing data: add a few hundred milliseconds on top of the time taken by the read-data business logic; for example, sleep for 1 second.
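To make that estimate concrete, here is a minimal sketch; both durations are placeholder assumptions that you would replace with measurements from your own project:

long readLogicMs = 200;     // measured duration of the read-data business logic (assumed value)
long syncMarginMs = 300;    // extra margin for Redis/MySQL master-slave synchronization (assumed value)
Thread.sleep(readLogicMs + syncMarginMs);   // about 500 ms here; with a bigger margin, about 1 second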
3. Set a cache expiration time
In theory, setting an expiration time on cached entries is the fallback that guarantees eventual consistency. All write operations go to the database; once a cached entry expires, subsequent read requests naturally read the new value from the database and backfill the cache.
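As an illustration of that read-and-backfill path, here is a minimal cache-aside sketch written against the Jedis client; the 600-second TTL and the Db interface are assumptions made for this example, not part of the original scheme:

import redis.clients.jedis.Jedis;

public class CacheAsideRead {
    // stand-in for the real database access layer (hypothetical)
    interface Db { String loadData(String key); }

    private final Jedis jedis = new Jedis("127.0.0.1", 6379);
    private final Db db;

    public CacheAsideRead(Db db) { this.db = db; }

    public String read(String key) {
        String value = jedis.get(key);      // try the cache first
        if (value == null) {                // miss: the entry expired or was deleted
            value = db.loadData(key);       // read the fresh value from the database
            jedis.setex(key, 600, value);   // backfill with a 600-second expiration
        }
        return value;
    }
}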
4. The disadvantages of the scheme
Even combining the double delete strategy with a cache expiration time, the worst case is that the data stays inconsistent until the expiration time is reached, and the sleep increases the latency of write requests.
2. The second scheme: update the cache asynchronously (a synchronization mechanism based on subscribing to the binlog)
1. The overall technical idea:
MySQL binlog incremental subscription and consumption + message queue + incremental data updates applied to Redis
1) Read Redis: hot data is basically all in Redis.
2) Write MySQL: all inserts, updates, and deletes operate on MySQL.
3) Update Redis data: MySQL's binlog of data operations is used to update Redis.
2. Redis update
1) Data operations are mainly divided into two kinds:
One is full (write all data to Redis at once)
One is incremental (real-time updates)
What we are talking about here is the incremental kind, meaning the change data from MySQL's update, insert, and delete operations.
2) After reading the binlog, parse it, and use a message queue to push update messages to the Redis cache on each node.
In this way, whenever a write, update, delete, or other operation happens in MySQL, the related binlog messages can be pushed to Redis, and Redis then updates itself according to the records in the binlog.
In fact, this mechanism is very similar to MySQL's master-slave replication mechanism, because MySQL's master and slave also achieve data consistency through the binlog.
Here you can use canal (an open-source framework from Alibaba) to subscribe to MySQL's binlog; canal imitates the replication request of a MySQL slave, so that updating Redis from the binlog achieves the same effect as replication.
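Below is a rough sketch of such a canal consumer, based on canal's Java client; for brevity it skips the message queue and applies changes straight to Redis, and the server address, the destination name "example", and the "table:primaryKey" key scheme are all assumptions made for this example:

import java.net.InetSocketAddress;

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;

import redis.clients.jedis.Jedis;

public class BinlogToRedis {
    public static void main(String[] args) throws Exception {
        // canal imitates a MySQL slave; "example" is canal's default destination name
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111), "example", "", "");
        Jedis jedis = new Jedis("127.0.0.1", 6379);

        connector.connect();
        connector.subscribe(".*\\..*");   // subscribe to all databases and tables

        while (true) {
            Message message = connector.getWithoutAck(100);   // fetch up to 100 binlog entries
            long batchId = message.getId();
            if (batchId == -1 || message.getEntries().isEmpty()) {
                Thread.sleep(1000);   // nothing new in the binlog yet
                continue;
            }
            for (CanalEntry.Entry entry : message.getEntries()) {
                if (entry.getEntryType() != CanalEntry.EntryType.ROWDATA) continue;
                CanalEntry.RowChange rowChange =
                        CanalEntry.RowChange.parseFrom(entry.getStoreValue());
                for (CanalEntry.RowData rowData : rowChange.getRowDatasList()) {
                    // assumed key scheme "table:primaryKey", first column taken as the primary key
                    CanalEntry.Column pkColumn =
                            rowChange.getEventType() == CanalEntry.EventType.DELETE
                                    ? rowData.getBeforeColumns(0)
                                    : rowData.getAfterColumns(0);
                    // invalidate the key so the next read backfills fresh data from MySQL
                    jedis.del(entry.getHeader().getTableName() + ":" + pkColumn.getValue());
                }
            }
            connector.ack(batchId);   // confirm the batch so canal advances its consumption position
        }
    }
}

An alternative is to write the new column values into Redis directly instead of deleting the key; deletion is shown here because it composes naturally with the expiration-and-backfill scheme described above.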
This is the editor's summary of how Redis and MySQL can keep their data consistent. If you happen to have similar doubts, you might as well refer to the above analysis, and if you want to learn more, you are welcome to follow the industry information channel.