This article looks at how to cache MySQL data in Redis. It is kept concise and easy to follow, and hopefully you will take something useful away from the details below.
Use Redis for real-time reads and writes, and a queue processor to write the data to MySQL asynchronously at regular intervals.
At the same time, conflicts must be avoided: when Redis starts, load all of the table's key values from MySQL into Redis. When new data is written to Redis, increment the Redis primary key and read it back; if the corresponding MySQL update later fails, the cache entry must be cleared and the Redis primary key resynchronized promptly.
In this approach Redis handles the real-time reads and writes, while the MySQL writes are processed asynchronously through a queue, which relieves pressure on MySQL. However, it really only suits high-concurrency scenarios, and a highly available Redis cluster architecture is comparatively complex, so it is generally not recommended.
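To make the idea concrete, here is a minimal write-behind sketch assuming redis-py and PyMySQL; the comments table, the comment:<id> key scheme, and the queue name are all invented for illustration and are not from the original article.

```python
# A minimal write-behind sketch, assuming redis-py and PyMySQL are installed.
# The table "comments", the "comment:<id>" keys, and the queue name are hypothetical.
import json
import redis
import pymysql

r = redis.Redis(host="127.0.0.1", port=6379, db=0)

def save_comment(content):
    """Write to Redis immediately and enqueue an async job for MySQL."""
    new_id = r.incr("comment:next_id")            # Redis keeps the primary key counter
    r.hset(f"comment:{new_id}", mapping={"id": new_id, "content": content})
    r.rpush("queue:comment_writes", json.dumps({"id": new_id, "content": content}))
    return new_id

def flush_queue_to_mysql():
    """Queue processor: drain pending writes into MySQL; clear the cache on failure."""
    conn = pymysql.connect(host="127.0.0.1", user="app", password="secret", db="test")
    try:
        while True:
            raw = r.lpop("queue:comment_writes")
            if raw is None:
                break
            job = json.loads(raw)
            try:
                with conn.cursor() as cur:
                    cur.execute("INSERT INTO comments (id, content) VALUES (%s, %s)",
                                (job["id"], job["content"]))
                conn.commit()
            except pymysql.MySQLError:
                conn.rollback()
                r.delete(f"comment:{job['id']}")  # MySQL failed: drop the stale cache entry
    finally:
        conn.close()
```

The queue processor would typically run on a schedule or in a loop; the failure branch mirrors the point above about clearing the cache when the MySQL write does not go through.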
How to synchronize Redis with the MySQL database
[Option 1]
http://www.zhihu.com/question/23401553?sort=created
When the program updates, inserts, or deletes rows in MySQL, it deletes the corresponding Redis data.
On reads, the program queries Redis first; if the key is not there, it queries MySQL and saves the result back to Redis.
For synchronization between Redis and MySQL, the code-level logic is roughly:
Read: read Redis -> miss -> read MySQL -> write the MySQL result back to Redis
Write: write MySQL -> on success, write Redis (capture all MySQL modify, insert, and delete events and apply the corresponding operations to Redis)
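As a rough sketch of Option 1's read and write paths, assuming redis-py and PyMySQL; the users table, the user:<id> keys, and the 1-hour TTL are assumptions for illustration only.

```python
# Read path: cache first, fall back to MySQL, write back to Redis.
# Write path: update MySQL first; only on success write the new value to Redis.
import json
import redis
import pymysql

r = redis.Redis(host="127.0.0.1", port=6379, db=0)

def get_conn():
    return pymysql.connect(host="127.0.0.1", user="app", password="secret",
                           db="test", cursorclass=pymysql.cursors.DictCursor)

def read_user(user_id):
    """Read: try Redis first; on a miss, read MySQL and write the row back to Redis."""
    cached = r.get(f"user:{user_id}")
    if cached is not None:
        return json.loads(cached)
    conn = get_conn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, name FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone()
    finally:
        conn.close()
    if row is not None:
        r.set(f"user:{user_id}", json.dumps(row), ex=3600)
    return row

def update_user(user_id, name):
    """Write: update MySQL first; only if that succeeds, write the new value to Redis."""
    conn = get_conn()
    try:
        with conn.cursor() as cur:
            cur.execute("UPDATE users SET name = %s WHERE id = %s", (name, user_id))
        conn.commit()
    except pymysql.MySQLError:
        conn.rollback()
        raise
    finally:
        conn.close()
    r.set(f"user:{user_id}", json.dumps({"id": user_id, "name": name}), ex=3600)
```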
[Option 2]
http://www.linuxidc.com/Linux/2015-01/380.htm
Parse the MySQL binlog in real time and apply the changes to Redis.
MySQL-to-Redis data synchronization scheme
Both MySQL and Redis have their own data synchronization mechanisms. For example, MySQL's commonly used Master/Slave replication is implemented by having the slave parse the master's binlog. This replication is actually asynchronous, but when the servers sit on the same private network the delay is almost negligible.
So in theory we could parse MySQL's binlog files in the same way and insert the data into Redis. However, that requires a very deep understanding of the binlog format and of MySQL itself, and because binlog comes in several formats (Statement/Row/Mixed), the amount of work involved in parsing it for synchronization is large.
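Although the article goes a different route below, for orientation here is a rough sketch of what row-based binlog streaming can look like, assuming the third-party python-mysql-replication package (not mentioned in the original), binlog_format=ROW on the server, and an illustrative comments table.

```python
# Illustrative only: stream row events from the binlog and mirror them into Redis.
# Connection details, server_id, and the "comments" table are assumptions.
import json
import redis
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
    DeleteRowsEvent, UpdateRowsEvent, WriteRowsEvent,
)

r = redis.Redis(host="127.0.0.1", port=6379, db=0)

stream = BinLogStreamReader(
    connection_settings={"host": "127.0.0.1", "port": 3306,
                         "user": "repl", "passwd": "secret"},
    server_id=100,                 # must be unique among replication clients
    only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
    only_tables=["comments"],
    blocking=True,
    resume_stream=True,
)

for event in stream:
    for row in event.rows:
        if isinstance(event, DeleteRowsEvent):
            r.delete(f"comment:{row['values']['id']}")
        else:
            values = row["after_values"] if isinstance(event, UpdateRowsEvent) else row["values"]
            r.set(f"comment:{values['id']}", json.dumps(values, default=str))
```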
So here we take a cheaper route: borrow the mature MySQL UDF mechanism to push the MySQL data into Gearman first, and then synchronize it to Redis through a PHP Gearman worker we write ourselves. This adds more moving parts than parsing the binlog directly, but it is cheaper to implement and easier to operate.
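The original describes a PHP Gearman worker; purely as an illustration, here is a rough Python equivalent assuming the python-gearman client package. The job name syncToRedis and the JSON payload shape are invented for this sketch, not taken from the article.

```python
# Illustration only: a worker that receives changed rows (pushed into Gearman by
# the MySQL UDF) and mirrors them into Redis. Job name and payload are assumptions.
import json
import gearman
import redis

r = redis.Redis(host="127.0.0.1", port=6379, db=0)

def sync_to_redis(gearman_worker, gearman_job):
    """Receive one changed row and write it to Redis."""
    row = json.loads(gearman_job.data)          # e.g. {"id": 1, "content": "..."}
    r.set(f"comment:{row['id']}", json.dumps(row))
    return "ok"

worker = gearman.GearmanWorker(["127.0.0.1:4730"])
worker.register_task("syncToRedis", sync_to_redis)
worker.work()                                    # block and process jobs forever
```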
[Option 3]
Use MySQL's UDF mechanism (for details see MySQL :: MySQL 5.1 Reference Manual :: 22.3 Adding New Functions to MySQL), then use triggers to call the function after UPDATE and INSERT on the table: the trigger fires on every change and hands the affected row to the UDF, which writes it to Redis.
[http://www.zhihu.com/question/27738066]
1. First of all, be clear about whether caching is necessary at all and where the bottleneck in the current architecture actually lies. If the bottleneck really is database access, then continue.
2. Be clear about the difference between memcached and Redis and which one to use. The former is, after all, just a cache: it cannot persist data (entries are evicted by LRU), but it does support distributed deployment. The latter supports not only caching but also persisting data to disk; however, Redis has to implement distribution itself (this looks to be integrated in the ** version), for example with its own consistent hashing. Since I don't know your application scenario, I can't say whether you must use memcached or Redis; perhaps something like MongoDB would even work better, for example for storing logs.
3. Cache data that is large in volume but changes infrequently, such as comments.
4. Your idea is right: before hitting the DB, read the cache first; if it hits, return directly; if not, read the DB, write the result into the cache layer, and then return it.
5. Consider whether you need master-slave replication, read-write splitting, distributed deployment, and later horizontal scaling.
6. If you want something easy to maintain and extend once and for all, then, as you said, the existing code architecture needs optimizing: if replacing a database component requires changing a lot of code, that indicates problems with the current architecture. You can use an existing framework such as SpringMVC to decouple the application layer, the business layer, and the database layer. Do that first, and add the cache afterwards.
7. Turn cache reads and related operations into service components that serve the business layer, and have the business layer serve the application layer.
8. Keep the original database component and refactor it into a service component as well, so that the business layer can later flexibly call either the cache or the database.
9. Don't switch everything over to the cache at once, and don't touch the core business first. Replace edge business with the cache component first, then migrate step by step to the core business.
10. Refreshing the cache: take memcached as an example. For add, modify, and delete operations a lazy-load strategy is generally used. On insert, only the database is written; memcached is not updated immediately, and the value is only loaded into memcached when it is next read. Modify and delete operations likewise update the database first, then mark the corresponding data in memcached as invalid and wait for the next read to reload it (a sketch of this pattern follows the list).
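As a rough illustration of point 10, here is a minimal lazy-load sketch assuming pymemcache and PyMySQL; the articles table, the article:<id> keys, and the 10-minute expiry are assumptions, not from the original.

```python
# Lazy load: only a read populates the cache; writes update MySQL and just invalidate.
import json
import pymysql
from pymemcache.client.base import Client

mc = Client(("127.0.0.1", 11211))

def get_conn():
    return pymysql.connect(host="127.0.0.1", user="app", password="secret",
                           db="test", cursorclass=pymysql.cursors.DictCursor)

def read_article(article_id):
    """Read: populate the cache only when the value is actually requested."""
    key = f"article:{article_id}"
    cached = mc.get(key)
    if cached is not None:
        return json.loads(cached)
    conn = get_conn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, title, body FROM articles WHERE id = %s", (article_id,))
            row = cur.fetchone()
    finally:
        conn.close()
    if row is not None:
        mc.set(key, json.dumps(row), expire=600)
    return row

def modify_article(article_id, title, body):
    """Modify: update the database, then mark the cached entry invalid (delete it)."""
    conn = get_conn()
    try:
        with conn.cursor() as cur:
            cur.execute("UPDATE articles SET title = %s, body = %s WHERE id = %s",
                        (title, body, article_id))
        conn.commit()
    finally:
        conn.close()
    mc.delete(f"article:{article_id}")  # invalidated; reloaded on the next read
```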
That covers how to cache MySQL data in Redis; hopefully you have picked up something useful from it.