Tunnel synchronizes PG data to kafka
Tunnel is an open source component from Hello Bike that supports synchronizing PG data to Kafka or ES.
https://github.com/hellobike/tunnel
The overall deployment of tunnel is relatively simple.
ZK and Kafka need to be deployed in advance (for this demo I run ZK and Kafka on a single node).
Node deployment relationship:
192.168.2.4: ZK, Kafka, and PG10 (listening on port 1921)
192.168.2.189: tunnel
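Before going further, a quick sanity check (assuming the default ZK/Kafka ports) that both services are listening on 192.168.2.4:
ss -tlnp | grep -E '2181|9092'    # 2181 = zookeeper, 9092 = kafka; adjust if you changed the defaults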
Ensure that logical replication is turned on in PG:
wal_level = 'logical'
max_replication_slots = 20
Note that changing these settings requires a restart of the PG process.
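After the restart, a minimal way to confirm the running values (run with psql on the PG node, e.g. as the postgres user):
psql -p 1921 -c "SHOW wal_level;"                # should return 'logical'
psql -p 1921 -c "SHOW max_replication_slots;"    # should return 20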
Then create a test database, test tables, and an account for synchronization:
CREATE DATABASE test_database;
\c test_database
CREATE TABLE test_1 (id int primary key, name char(40));
CREATE TABLE test_2 (id int primary key, name char(40));
CREATE ROLE test_rep LOGIN ENCRYPTED PASSWORD 'xxxx' REPLICATION;
GRANT CONNECT ON DATABASE test_database TO test_rep;
vim pg_hba.conf and add 2 lines of configuration:
host    all            test_rep    192.168.2.0/24    md5
host    replication    test_rep    192.168.2.0/24    md5
Then reload PG.
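A sketch of the reload and a connectivity check, assuming pg_ctl is on the PATH and $PGDATA points at the data directory of the instance on port 1921:
pg_ctl reload -D $PGDATA                       # or: psql -p 1921 -c "SELECT pg_reload_conf();"
psql -h 192.168.2.4 -p 1921 -U test_rep -d test_database -c "SELECT 1;"    # run from 192.168.2.189; should prompt for the password and return 1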
Go to the 192.168.2.189 machine to compile tunnel:
Note: Oracle JDK 1.8 needs to be installed in advance to start tunnel.
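A quick way to confirm the JDK before building:
java -version    # should report 1.8.x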
git clone https://github.com/hellobike/tunnel
cd tunnel
mvn clean package -Dmaven.test.skip=true
cd target
unzip AppTunnelService.zip
cd AppTunnelService
vim conf/test.yml, with content as follows:
tunnel_subscribe_config:
  pg_dump_path: '/usr/local/pgsql-10.10/bin/pg_dump'
  subscribes:
  - slotName: slot_for_test
    pgConnConf:
      host: 192.168.2.4
      port: 1921
      database: test_database
      user: test_rep
      password: xxxx
    rules:
    - {table: test_1, pks: ['id'], topic: test_1_logs}
    - {table: test_2, pks: ['id'], topic: test_2_logs}
    kafkaConf:
      addrs:
      - 192.168.2.4:9092
tunnel_zookeeper_address: 192.168.2.4:2181
Start it in the foreground:
java -server -classpath conf/*:lib/* com.hellobike.base.tunnel.TunnelLauncher -u false -c cfg.properties -p 7788    # exposes prometheus metrics on port 7788 (monitoring configuration is not the focus here and is simple, so skip it for now)
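Once tunnel is running, you can check on the PG side whether the replication slot from the config was created (a hedged check; the value of the plugin column depends on tunnel's implementation):
psql -p 1921 -d test_database -c "SELECT slot_name, plugin, active FROM pg_replication_slots;"    # slot_for_test should be listed and active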
Then we create some test data in the two tables of test_database on PG10 (for example with the statements sketched below), and we can see that the data has already arrived in Kafka (the figures below show the topics in kafka-manager and kafka-eagle).
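For example, test data matching the UPDATE and DELETE records shown below could be generated like this (hypothetical statements; run on the PG node as the table owner, since test_rep was only granted CONNECT):
psql -p 1921 -d test_database <<'SQL'
INSERT INTO test_1 VALUES (1111, 'dog egg');
INSERT INTO test_1 VALUES (3, 'temp row');
UPDATE test_1 SET name = 'Big Dog egg' WHERE id = 1111;
DELETE FROM test_1 WHERE id = 3;
SQL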
Formatted, the data looks like this:
An UPDATE record looks like this:
{
    "dataList": [{
        "dataType": "integer",
        "name": "id",
        "value": "1111"
    }, {
        "dataType": "character",
        "name": "name",
        "value": "Big Dog egg"
    }],
    "eventType": "UPDATE",
    "lsn": 10503246616,
    "schema": "public",
    "table": "test_1"
}
A DELETE record looks like this:
{
    "dataList": [{
        "dataType": "integer",
        "name": "id",
        "value": "3"
    }],
    "eventType": "DELETE",
    "lsn": 10503247064,
    "schema": "public",
    "table": "test_1"
}
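If kafka-manager or kafka-eagle is not at hand, a plain console consumer (assuming a stock Kafka installation on 192.168.2.4) shows the same JSON messages:
bin/kafka-console-consumer.sh --bootstrap-server 192.168.2.4:9092 --topic test_1_logs --from-beginning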