Installation and use of canal + MySQL + Kafka in a macOS environment

2025-04-04 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

To achieve real-time data synchronization, this article sets up a Canal + MySQL + Kafka pipeline in a macOS environment.

Use Canal + MySQL + Kafka to transfer data:

MySQL is the data source.

Canal impersonates a MySQL slave to pull binlog data from MySQL; it acts as a pipe.

Kafka receives the data Canal captures; a consumer program then reads from the Kafka topic and processes the data.

1. MySQL installation

I won't elaborate on this. Do not install the latest version of MySQL: in my own testing, MySQL 8.0 did not work well with Canal.

So MySQL 5.7 is installed instead.

Installation command (plain brew install mysql installs version 8.0; the versioned formula, if your Homebrew still provides it, installs 5.7):

brew install mysql@5.7

2. Java installation

Both Canal and Kafka need a Java environment.

Don't use the latest version of Java; Java 8 (i.e. JDK 8) is recommended.

It is best to download it from the official Oracle website.

Installation walkthrough:

https://blog.csdn.net/oumuv/article/details/84064169

3. Canal installation

Make sure the MySQL and Java 8 environments are in place before installing Canal.

Download address: https://github.com/alibaba/canal/releases

Download canal.deployer-1.1.4.tar.gz

Create a new canal directory under your home directory and simply extract the archive there.

The official release is already compiled, so there is nothing to build.

You need to add a MySQL account for Canal (canal@'%' and canal@'localhost').
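Canal's quick-start guide creates this account roughly as follows (a sketch; adjust the host and password to your own setup):

```sql
-- Create the canal user and grant the replication privileges Canal needs
-- in order to act as a MySQL slave.
CREATE USER 'canal'@'%' IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;
```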

The canal/bin directory contains several scripts: startup.sh starts the service and stop.sh stops it.

The canal/logs directory holds the log files.

The canal/conf directory holds the configuration files.

Instance configuration file

canal/conf/example/instance.properties

This is the configuration file for the instance. It determines which database the instance connects to, down to the level of individual tables.

The more important parameters in instance.properties:

# instance
canal.instance.master.address = 127.0.0.1:3306

# db
canal.instance.dbUsername = canal
canal.instance.dbPassword = canal
canal.instance.defaultDatabaseName = test
canal.instance.connectionCharset = UTF-8

# table regex
canal.instance.filter.regex = test.ttt

# mq topic
canal.mq.topic = test12345

The configuration above means: through Canal, connect to the ttt table of the test database at 127.0.0.1 and publish its changes to a Kafka topic named test12345.
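Canal treats canal.instance.filter.regex as a comma-separated list of regexes matched against schema.table. A minimal Python sketch of that matching behavior (an illustration only, not Canal's actual filter implementation):

```python
import re

def table_matches(filter_regex: str, schema: str, table: str) -> bool:
    """Return True if schema.table matches any pattern in the filter.

    Note that in "test.ttt" the dot is a regex wildcard, so it would
    also match e.g. "testxttt"; escape it as "test\\.ttt" to be strict.
    """
    name = f"{schema}.{table}"
    return any(re.fullmatch(p.strip(), name) for p in filter_regex.split(","))

print(table_matches("test.ttt", "test", "ttt"))        # True
print(table_matches("test\\..*", "test", "anything"))  # True
print(table_matches("test.ttt", "other", "ttt"))       # False
```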

To verify that Canal is connected to MySQL, just check whether MySQL's process list shows a replication connection (because Canal mimics a slave).
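For example, in a mysql client (the Binlog Dump row is what any connected replica, and therefore Canal, looks like):

```sql
SHOW PROCESSLIST;
-- Look for a row from the canal user whose Command column is
-- "Binlog Dump"; that is Canal's replication connection.
```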

There is also a global configuration file, canal/conf/canal.properties, which specifies the Kafka, ZooKeeper, and so on to connect to.

The important parameters for connecting Canal to Kafka:

canal.id = 1                          # example value; if there is more than one canal in a cluster, this value must not conflict
canal.ip = 172.17.61.113              # canal's IP
canal.port = 11111
canal.zkServers = 172.17.61.113:2181  # zookeeper ip:port
canal.serverMode = kafka
canal.destinations = example

# mq
canal.mq.servers = 172.17.61.113:9092 # kafka ip:port

The Kafka-related parameters above can only be filled in after Kafka is installed (mine was already installed, which is why they are listed first).

4. Kafka installation

ZooKeeper is required to run Kafka, but it is bundled with the Kafka package, so you don't need to install it separately (though you can).

The macOS installation command is as follows:

brew install kafka

Just wait until the installation is complete; because running Kafka depends on ZooKeeper, ZooKeeper is included when you install Kafka.

Kafka installation directory: /usr/local/Cellar/kafka/2.0.0/bin

Kafka configuration directory: /usr/local/etc/kafka

Kafka service configuration file: /usr/local/etc/kafka/server.properties

ZooKeeper configuration file: /usr/local/etc/kafka/zookeeper.properties

# important parameters to modify in server.properties

listeners=PLAINTEXT://172.17.61.113:9092

advertised.listeners=PLAINTEXT://172.17.61.113:9092

I didn't change the ZooKeeper configuration file at all during installation.

Canal then connects to this Kafka through the canal.properties settings above.

Basic Kafka commands (you can find their exact location with find / -name zookeeper-server-start):

First, start ZooKeeper:

zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties

Then, start Kafka:

kafka-server-start /usr/local/etc/kafka/server.properties

Create a topic named "test" with a single partition and only one replica:

kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

(This creates a topic; for Canal we already configured the topic test12345 above.)

A topic can also be created under a ZooKeeper chroot:

kafka-topics --create --zookeeper localhost:2181/kafka --replication-factor 1 --partitions 1 --topic topic1

To view the created topics, run the list command:

kafka-topics --list --zookeeper 172.17.61.113:2181

Produce messages:

kafka-console-producer --broker-list 172.17.61.113:9092 --topic test

(The message data is sent automatically by Canal, so here we only need to understand the commands.)

Consume messages (this is where you can finally check whether mysql + canal + kafka are linked up end to end):

kafka-console-consumer --bootstrap-server 172.17.61.113:9092 --topic test12345 --from-beginning

I added a record to the ttt table of the test database in MySQL, and Kafka received the following message:

{"data": [{"id": "13", "var": "ded"}], "database": "test", "es": 1575448571000, "id": 8, "isDdl": false, "mysqlType": {"id": "int(11)", "var": "varchar(5)"}, "old": null, "pkNames": ["id"], "sql": "", "sqlType": {"id": 4, "var": 12}, "table": "ttt", "ts": 1575448571758, "type": "INSERT"}

This means the whole mysql + canal + kafka chain is working.

Now all that's left is for a program to consume the messages from the queue.
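As a sketch of such a consumer, assuming the kafka-python package and the broker/topic configured above (extract_rows just parses Canal's JSON envelope):

```python
import json

def extract_rows(value: bytes):
    """Parse one Canal JSON message into (operation, table, row dicts)."""
    msg = json.loads(value)
    return msg["type"], msg["table"], msg["data"] or []

def run_consumer():
    # Requires `pip install kafka-python`; the broker address and topic
    # below are the ones configured earlier and are assumptions about
    # your environment.
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        "test12345",
        bootstrap_servers="172.17.61.113:9092",
        auto_offset_reset="earliest",
    )
    for record in consumer:
        print(extract_rows(record.value))
```

Calling run_consumer() prints one (type, table, rows) tuple per change event, such as the INSERT shown above.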


A possible error

Server: com.alibaba.otter.canal.parse.exception.CanalParseException: can't find start position for example

This happens when a configuration change leaves the binlog position saved in meta.dat inconsistent with the position in the database, so Canal can no longer fetch data from the database.

Solution: delete meta.dat, then restart Canal.

See the following for details:

https://www.cnblogs.com/shaozhiqi/p/11534658.html
