This article summarizes practical methods for keeping a Redis cache consistent with MySQL data. The steps are laid out in order and should serve as a useful reference; I hope you get something out of it.
Background
In high-concurrency business scenarios, the database is usually the weakest link under concurrent access. Redis is therefore placed in front of it as a buffer, so that requests hit Redis first instead of going straight to a database such as MySQL.
In this kind of scenario, data is read through the Redis cache: the application checks Redis first and, on a miss, loads the value from MySQL and writes it back into the cache.
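To make the read path concrete, here is a minimal cache-aside sketch in Java (an illustration, not code from the article). It assumes the Jedis client; the UserDao interface and the "user:<id>" key naming are hypothetical stand-ins for your own data access layer and key scheme.

import redis.clients.jedis.Jedis;

public class UserCacheReader {
    // Hypothetical MySQL access layer that returns a user serialized as JSON.
    public interface UserDao { String loadAsJson(long id); }

    private final Jedis jedis = new Jedis("localhost", 6379);
    private final UserDao userDao;

    public UserCacheReader(UserDao userDao) {
        this.userDao = userDao;
    }

    public String getUser(long id) {
        String key = "user:" + id;
        String cached = jedis.get(key);           // 1. read Redis first
        if (cached != null) {
            return cached;                        // cache hit: no database access
        }
        String fromDb = userDao.loadAsJson(id);   // 2. cache miss: read MySQL
        if (fromDb != null) {
            jedis.set(key, fromDb);               // 3. backfill the cache for later reads
        }
        return fromDb;
    }
}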
Reading through Redis this way rarely causes problems. The trouble starts with updates: when both the database and the cache have to change, it is easy to end up with inconsistency between the cache (Redis) and the database (MySQL).
Whether you write to MySQL first and then delete the Redis cache, or delete the cache first and then write to the database, inconsistency can occur. Two examples:
1. If the Redis cache is deleted first, another thread may read before the MySQL write completes, find the cache empty, load the old value from the database, and write it back to the cache. The cache now holds dirty data.
2. If the database is written first and the writing thread crashes before it deletes the cache, the stale cache entry survives and the data is again inconsistent.
Because reads and writes run concurrently and their ordering cannot be guaranteed, the cache and the database can drift apart.
So how do we deal with this? Two solutions follow, from simpler to more involved; choose based on your business requirements and the technical cost you can accept.
Cache and database consistency solutions
1. The first solution: delayed double delete strategy
Perform redis.del(key) both before and after writing to the database, and set a reasonable sleep interval between them.
The pseudocode is as follows:
public void write(String key, Object data) throws InterruptedException {
    redis.delKey(key);      // 1. delete the cache first
    db.updateData(data);    // 2. write the new value to MySQL
    Thread.sleep(500);      // 3. wait for in-flight reads to finish
    redis.delKey(key);      // 4. delete the cache again to clear any dirty backfill
}
2. The specific steps are:
1) Delete the cache first
2) Then write to the database
3) Sleep for 500 milliseconds
4) Delete the cache again
So how is this 500 milliseconds determined? How long should the thread sleep?
You need to measure how long the read-side business logic of your project takes. The goal is to make sure that any read request in flight has finished, so the second delete removes whatever dirty cache entry such a read may have written back.
The strategy should also allow for the latency of Redis and of MySQL master-slave replication. The final sleep before the second delete is therefore the read-path duration plus a few hundred milliseconds, for example one second.
3. Set a cache expiration time
In theory, setting an expiration time on cache entries guarantees eventual consistency: all writes go to the database, and once an entry's TTL elapses, later reads naturally load the new value from the database and backfill the cache.
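As a small sketch of this (assuming the Jedis client; the 600-second TTL is an illustrative value, not one from the article), setex writes the value and its expiration in one call, so a stale entry can survive at most that long before a later read repopulates it from MySQL.

import redis.clients.jedis.Jedis;

public class ExpiringCacheWriter {
    private static final int TTL_SECONDS = 600;   // assumed upper bound on staleness (10 minutes)

    private final Jedis jedis = new Jedis("localhost", 6379);

    // Backfill a value just read from MySQL and let Redis expire it automatically.
    public void backfill(String key, String valueFromDb) {
        jedis.setex(key, TTL_SECONDS, valueFromDb);
    }
}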
4. Drawbacks of this solution
Even with double delete combined with a cache TTL, the worst case is that the data stays inconsistent until the TTL expires, and the sleep adds latency to every write request.
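One common way to soften the write-latency cost, not spelled out in the article, is to schedule the delayed second delete on a background executor instead of sleeping on the write thread. A hedged sketch, where the Db interface is a hypothetical stand-in for your MySQL access layer:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class AsyncDoubleDelete {
    // Hypothetical MySQL access layer.
    public interface Db { void updateData(Object data); }

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final JedisPool pool = new JedisPool("localhost", 6379);
    private final Db db;

    public AsyncDoubleDelete(Db db) {
        this.db = db;
    }

    public void write(String key, Object data) {
        try (Jedis jedis = pool.getResource()) {
            jedis.del(key);                       // first delete
        }
        db.updateData(data);                      // write MySQL
        // The second delete fires 500 ms later without blocking the caller.
        scheduler.schedule(() -> {
            try (Jedis jedis = pool.getResource()) {
                jedis.del(key);
            }
        }, 500, TimeUnit.MILLISECONDS);
    }
}

The trade-off is that if the process dies before the scheduled delete runs, the dirty entry lives until its TTL expires, which is exactly why the expiration time above is still worth setting.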
2. The second solution: update the cache asynchronously (based on subscribing to the binlog)
1. The overall technical idea:
MySQL binlog incremental subscription and consumption + message queue + incremental updates pushed to Redis
1) Read path: the hot data lives in Redis, so reads go to Redis.
2) Write path: all inserts, deletes, and updates go to MySQL.
3) Cache update: MySQL's binlog records those changes and is used to update Redis.
2. Updating Redis
1) Data synchronization falls into two parts:
A full sync (write all data to Redis at once)
An incremental sync (real-time updates)
Here we focus on the incremental part, meaning the data changed by MySQL update, insert, and delete statements.
2) After reading and parsing the binlog, use a message queue to push the changes and update the Redis cache on each node.
This way, as soon as a write, update, or delete happens in MySQL, the corresponding binlog message is pushed to the Redis side, and Redis is updated according to the binlog records.
In fact, this mechanism is very similar to MySQL's master-slave replication, which also keeps data consistent through the binlog.
Here you can use canal (an open-source framework from Alibaba) to subscribe to MySQL's binlog; canal imitates the replication requests of a MySQL slave, so updating Redis from the binlog works much like feeding a replica.
For delivering the messages you can also use a third-party queue such as Kafka or RabbitMQ to push the updates to Redis.
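A hedged sketch of the consuming side, assuming the parsed binlog changes arrive on a Kafka topic named mysql-binlog where the record key is the cache key and the record value is the new JSON value, with a null value signalling a delete; the topic name and message layout are assumptions, not part of the article.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import redis.clients.jedis.Jedis;

public class BinlogToRedis {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "redis-cache-sync");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Jedis jedis = new Jedis("localhost", 6379)) {
            consumer.subscribe(Collections.singletonList("mysql-binlog"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    String cacheKey = record.key();
                    String newValue = record.value();
                    if (newValue == null) {
                        jedis.del(cacheKey);           // row deleted in MySQL: drop the cache entry
                    } else {
                        jedis.set(cacheKey, newValue); // insert or update: overwrite the cache entry
                    }
                }
            }
        }
    }
}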
That covers the main approaches to keeping a Redis cache consistent with MySQL. I hope it is a useful reference.