2025-02-24 Update From: SLTechnology News&Howtos
This article looks at how to keep a MySQL database and a Redis cache in sync. The methods introduced are simple and practical; let's walk through them.
1. Scheme 1 (UDF)
Whenever we write data to the MySQL database, we synchronize the corresponding data to Redis; query operations are then served from Redis.
The process is roughly as follows: set a trigger on the MySQL table to be watched. When a client (for example, a Node server) writes data to MySQL, the trigger fires and calls a MySQL UDF (user-defined function) that writes the data to Redis, achieving the synchronization effect.
Scheme analysis:
This scheme suits scenarios with many reads, few writes, and no concurrent writes, because MySQL triggers themselves reduce efficiency. If a table is manipulated frequently, this scheme is not appropriate.
The setup involves three pieces: the MySQL table, the UDF code that writes to Redis, and the trigger that calls the UDF.
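As a minimal sketch of those three pieces, the SQL below defines an example table and a trigger that calls a hypothetical UDF `redis_set(key, value)`; the table name, key scheme, and UDF name are all illustrative and assume the UDF has already been compiled and installed on the server.

```sql
-- Illustrative example: a users table whose inserts are pushed to Redis.
CREATE TABLE users (
    id   INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(64) NOT NULL
);

DELIMITER $$
CREATE TRIGGER users_after_insert
AFTER INSERT ON users
FOR EACH ROW
BEGIN
    -- redis_set(key, value) is an assumed, pre-installed UDF that
    -- writes the key/value pair into Redis.
    SET @r = redis_set(CONCAT('user:', NEW.id), NEW.name);
END$$
DELIMITER ;
```

A similar AFTER UPDATE / AFTER DELETE trigger pair would be needed to keep the cache consistent for modifications and removals.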
2. Scheme 2 (parsing the binlog)
Before introducing scheme 2, let's review the principle of MySQL replication, as shown in the following figure:
The master server writes its data changes to the binlog. An I/O thread on the slave reads the master's binlog and writes it to the slave's own relay log; a SQL thread then replays the events from the relay log, synchronizing the data into the slave's own database.
Scheme 2 works as follows:
The whole MySQL replication process above can be summed up in one sentence: the slave reads the master's binlog and synchronizes the data into its own database. Scheme 2 conceptually keeps MySQL as the master but replaces the slave with Redis (as shown in the following figure): when data is written to MySQL, we parse MySQL's binlog and write the parsed data to Redis, achieving the synchronization effect.
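The "slave" side of this idea can be sketched as follows. A plain dict stands in for a Redis client here (with redis-py you would call `set()`/`delete()` with the same key scheme), and the event shape is illustrative, not any particular parser's actual format:

```python
# Apply parsed binlog row events to a cache, keyed as '<table>:<pk>'.
# The dict `cache` stands in for a Redis client in this sketch.

def apply_event(cache, event):
    """Apply one parsed row event to the cache."""
    key = f"{event['table']}:{event['row']['id']}"
    if event['type'] in ('insert', 'update'):
        cache[key] = event['row']      # write-through on insert/update
    elif event['type'] == 'delete':
        cache.pop(key, None)           # drop the cached copy on delete

cache = {}
apply_event(cache, {'type': 'insert', 'table': 'users',
                    'row': {'id': 1, 'name': 'alice'}})
apply_event(cache, {'type': 'update', 'table': 'users',
                    'row': {'id': 1, 'name': 'bob'}})
print(cache['users:1']['name'])  # bob
```

In a real deployment the events would come from a binlog parser rather than being constructed by hand.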
For example, consider the following cloud-database setup:
There is a master-slave relationship between the cloud database and a local database. The cloud database acts as the master and mainly handles writes; the local database acts as the slave, reads the binlog from the master, parses it, and synchronously writes the data to Redis; clients then read the data from Redis.
The difficulty of this solution is parsing MySQL's binlog, which requires a very deep understanding of the binlog format and of MySQL itself. Moreover, because the binlog comes in several formats (Statement/Row/Mixed), the workload of parsing it for synchronization is very large.
Canal open source technology
Canal is an open-source project from Alibaba, written in pure Java. It provides incremental data subscription and consumption based on parsing the database's incremental log. Currently it mainly supports MySQL (and also MariaDB).
An open-source reference implementation: https://github.com/liukelin/canal_mysql_nosql_sync
How it works (imitating MySQL replication): canal simulates the MySQL slave interaction protocol and disguises itself as a MySQL slave, sending the dump protocol to the MySQL master; the master receives the dump request and starts pushing the binary log to the slave (that is, to canal); canal then parses the binary log objects (originally a byte stream). Architecture:
A server represents one running canal instance, corresponding to one JVM.
An instance corresponds to one data queue (1 server corresponds to 1..n instances).
Instance module:
EventParser: data-source access; simulates the slave protocol to interact with the master and parses the protocol
EventSink: the linker between parser and store; filters, processes, and distributes data
EventStore: data storage
MetaManager: incremental subscription & consumption information manager
The general parsing flow is as follows: parse reads MySQL's binlog and pushes the data into the sink; the sink filters, processes, and distributes the data; the store reads the parsed data from the sink and stores it; finally, we write the data from the store into Redis. Of these, parse and sink are encapsulated by the framework; the step we implement ourselves is reading data out of the store.
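The division of labour between sink and store can be mocked up like this. Canal's real EventParser/EventSink/EventStore are Java components; this Python sketch only mirrors the flow, with a dict standing in for Redis and all names illustrative:

```python
# Mock of the sink -> store -> Redis flow: the sink filters and routes
# events, the store queues them until we sync them into a Redis-like map.

class Sink:
    def __init__(self, store, tables):
        self.store, self.tables = store, set(tables)
    def accept(self, event):
        if event['table'] in self.tables:   # filtering step
            self.store.put(event)           # distribution step

class Store:
    def __init__(self):
        self.queue = []
    def put(self, event):
        self.queue.append(event)
    def sync_to(self, redis_like):
        # Drain queued events into the cache, keyed '<table>:<pk>'.
        while self.queue:
            e = self.queue.pop(0)
            redis_like[f"{e['table']}:{e['row']['id']}"] = e['row']

store = Store()
sink = Sink(store, tables=['users'])
sink.accept({'table': 'users',  'row': {'id': 1, 'name': 'alice'}})
sink.accept({'table': 'orders', 'row': {'id': 9}})  # filtered out
fake_redis = {}
store.sync_to(fake_redis)
print(sorted(fake_redis))  # ['users:1']
```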
More details about Canal can be found by searching online.
The following is the running topology diagram
The synchronization of MySQL tables adopts the chain-of-responsibility pattern: each table corresponds to one Filter. For example, the class design used in zvsync is as follows:
The following are the classes used in zvsync. Whenever a table is added or removed, you simply add or remove the corresponding class.
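The per-table chain-of-responsibility idea can be sketched as below. The class and table names are illustrative, not zvsync's actual design, and a dict again stands in for Redis:

```python
# Chain-of-responsibility sketch: each table has its own Filter link;
# an event walks the chain until one filter claims and caches it.

class TableFilter:
    def __init__(self, table, nxt=None):
        self.table, self.nxt = table, nxt
    def handle(self, event, cache):
        if event['table'] == self.table:
            cache[f"{self.table}:{event['row']['id']}"] = event['row']
            return True
        if self.nxt:                        # pass along the chain
            return self.nxt.handle(event, cache)
        return False                        # no filter claimed the event

# Adding or removing a table is just adding or removing a link.
chain = TableFilter('users', TableFilter('orders'))
cache = {}
chain.handle({'table': 'orders', 'row': {'id': 7, 'total': 30}}, cache)
print('orders:7' in cache)  # True
```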
At this point, you should have a deeper understanding of how to synchronize MySQL with a Redis cache. Try it out in practice, and keep learning!
© 2024 shulou.com SLNews company. All rights reserved.