
How to synchronize RDS binlog data with Kafka Connect


Many novices are not very clear about how to use Kafka Connect to synchronize RDS binlog data. To help you solve this problem, the following explains the process in detail; anyone with this need can follow along, and I hope you gain something from it.

Here is how to use Kafka Connect on E-MapReduce to synchronize RDS binlog data.

1. Background

In our business development, we often encounter the following scenario:

Business updates are written to the database

These updates need to be delivered to downstream dependencies for processing in real time.

So the traditional processing architecture might look like this:

This article demonstrates how to synchronize the RDS binlog to a Kafka cluster in real time on E-MapReduce.

2. Environment preparation

The experiment uses a VPC network environment, and the following examples are by default created within that VPC.

2.1 Prepare a test RDS database

Create an RDS instance running MySQL 5.7. How to create an RDS instance will not be covered in detail here; please refer to the RDS documentation. The created instance is shown in the figure.

Note: the RDS instance and the E-MapReduce Kafka cluster should be in the same VPC; otherwise, network connectivity between the two VPCs needs to be established first.

3. Kafka Connect

3.1 Connector

Kafka Connect is a tool for streaming data between Kafka and other data systems. With it you can build Kafka-based data pipelines that bridge upstream and downstream data sources. All we need to do is run, on the Kafka Connect service, a Connector that implements how to read data from, or write data to, a particular data source. Confluent provides many ready-made Connector implementations for download. Here, however, we use the MySQL Connector plugin provided by Debezium.

Download the plugin and copy all of the extracted jar files into Kafka's lib directory. Note: these jars need to be copied to every machine in the Kafka cluster.
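A minimal installation sketch, assuming the plugin archive has already been downloaded to the current directory; the Debezium version and the Kafka lib path below are placeholders to adjust for your cluster:

# Unpack the Debezium MySQL connector plugin (version is an example).
tar zxf debezium-connector-mysql-0.9.5.Final-plugin.tar.gz

# Copy every extracted jar into Kafka's lib directory.
# Repeat this step on every machine in the Kafka cluster.
cp debezium-connector-mysql/*.jar /usr/lib/kafka-current/libs/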

Then restart the Kafka Connect component from the service list of the Kafka cluster.
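After the restart, you can confirm that the Connect worker picked up the plugin through the Kafka Connect REST API; port 8083 is the Connect default and may differ on your cluster:

# List the connector plugins known to the worker and look for the MySQL one.
curl -s http://localhost:8083/connector-plugins | grep -i mysql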

3.2 Create a connector

Log in to the Kafka cluster, then configure and create a connector with the following command:
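A typical registration call against the Kafka Connect REST API is sketched below; every hostname, credential, database name, and topic here is a placeholder for your own environment, and the configuration keys follow the Debezium 0.9.x MySQL connector:

# Register a Debezium MySQL connector with the Connect worker.
curl -s -X POST -H 'Content-Type: application/json' http://localhost:8083/connectors -d '{
  "name": "rds-binlog-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "rm-example.mysql.rds.aliyuncs.com",
    "database.port": "3306",
    "database.user": "binlog_user",
    "database.password": "binlog_password",
    "database.server.id": "123456",
    "database.server.name": "rds-demo",
    "database.whitelist": "testdb",
    "database.history.kafka.bootstrap.servers": "emr-worker-1:9092",
    "database.history.kafka.topic": "rds-demo-history"
  }
}'

Note that database.server.name becomes the prefix of every change topic the connector produces, which matters for the verification step later.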

3.3 Points for consideration

What should server_id be? You can find it in RDS by executing "SELECT @@server_id;".

A connection failure may occur when creating the connector; make sure the RDS whitelist has authorized access from the Kafka cluster's machines.
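If the connector was created but no data flows, its state can also be inspected through the REST API; the connector name below matches the sketch above:

# Show whether the connector and its task are RUNNING or FAILED.
curl -s http://localhost:8083/connectors/rds-binlog-connector/status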

4. Test

4.1 Create a table

Insert several rows of data, for example as sketched below.
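A hypothetical test, reusing the placeholder endpoint, user, and database from the connector sketch above (the table name and columns are made up for illustration):

# Create a small table and insert a few rows on the RDS instance.
mysql -h rm-example.mysql.rds.aliyuncs.com -u binlog_user -p testdb <<'SQL'
CREATE TABLE IF NOT EXISTS demo_user (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(64)
);
INSERT INTO demo_user (name) VALUES ('alice'), ('bob'), ('carol');
SQL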

The resulting change events are shown in the figure.
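You can also verify the events directly, assuming the placeholder names above: Debezium writes changes to a topic named <database.server.name>.<database>.<table>, which the console consumer shipped with Kafka can read:

# Consume the change topic from the beginning to see the insert events.
kafka-console-consumer.sh --bootstrap-server emr-worker-1:9092 \
  --topic rds-demo.testdb.demo_user --from-beginning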

Did the content above help you? If you want to learn more related knowledge or read more related articles, please follow the industry information channel. Thank you for your support.
