2025-01-18 Update | SLTechnology News & Howtos > Database
Shulou (Shulou.com) 05/31 Report
This article explains how to synchronize MySQL data to Greenplum using Maxwell, Kafka, and Bireme. The method is simple, fast, and practical.
I. Resource information
(Omitted here.)
II. Configure the source database, target database, and Java environment
MySQL data source
1. Database: create database testdb1;
2. User permissions: SELECT privileges and binlog replication privileges are required. The root account is used here.
3. Table to synchronize: create table tb1 (a int, b char(10), primary key(a));
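Rather than running Maxwell as root, a dedicated replication user can be created. A sketch of the server settings and grants Maxwell needs, following the Maxwell documentation (the user name and password below are illustrative):

```sql
-- my.cnf must enable row-based binlog for Maxwell, e.g.:
--   [mysqld]
--   server_id=1
--   log-bin=master
--   binlog_format=row

-- Dedicated user (name and password are examples):
CREATE USER 'maxwell'@'%' IDENTIFIED BY 'secret';
GRANT ALL ON maxwell.* TO 'maxwell'@'%';   -- Maxwell's own metadata database
GRANT SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'maxwell'@'%';
```

If a dedicated user is used, set `user` and `password` in Maxwell's config.properties accordingly instead of root.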
PostgreSQL (Greenplum) target database
1. User: create user testdb with password 'testdb';
2. Database: create database testdb with owner 'testdb';
3. Table to synchronize (connect to database testdb as user testdb): create table tb1 (a int, b char(10), primary key(a));
Java environment installation
1. Download the binary package: jdk-8u101-linux-x64.tar.gz
2. Extract it and create a soft link: tar xf jdk-8u101-linux-x64.tar.gz && ln -s /data/jdk1.8.0_101 /usr/java
3. Configure PATH and Java environment variables: vim /etc/profile.d/java.sh
export JAVA_HOME=/usr/java
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
4. Apply the settings: source /etc/profile.d/java.sh
5. Install jsvc: yum install jsvc
III. Kafka installation and startup configuration
1. Download address: mirrors.tuna.tsinghua.edu.cn/apache/kafka/
2. Kafka official document: kafka.apache.org/
3. Decompress: tar xf kafka_2.11-2.0.0.tgz && cd kafka_2.11-2.0.0
4. ZooKeeper
Start, bin/zookeeper-server-start.sh config/zookeeper.properties
Stop, bin/zookeeper-server-stop.sh
5. Kafka server
Start, bin/kafka-server-start.sh config/server.properties
Stop, bin/kafka-server-stop.sh
6. Topic
Create, bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic green
List, bin/kafka-topics.sh --list --zookeeper localhost:2181
Delete, bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic green
7. Producer (not required for this experiment; shown for learning)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic green
>aaa
>123
>
8. Consumer (not required for this experiment; shown for learning)
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic green --from-beginning
aaa
123
IV. Maxwell installation and startup configuration
1. Download address: github.com/zendesk/maxwell/releases
2. Maxwell official documentation: github.com/zendesk/maxwell
3. Decompress: tar xf maxwell-1.17.1.tar.gz && cd maxwell-1.17.1
4. Modify the configuration file: cp config.properties.example config.properties && vim config.properties
log_level=info
# kafka info
producer=kafka
kafka.bootstrap.servers=localhost:9092
kafka_topic=green
ddl_kafka_topic=green
# mysql login info
host=xx.xx.xx.xx
port=3306
user=root
password=123456
5. Start Maxwell: bin/maxwell --config config.properties
6. By default, Maxwell stores its own metadata (binlog positions, schemas) in a database named maxwell that it creates on the source MySQL instance.
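Each row change Maxwell writes to the green topic is a small JSON document. A minimal sketch (pure Python; the sample payload below is hand-written to mirror Maxwell's documented JSON shape, not captured from a live run) of what an insert into tb1 looks like and how a consumer such as Bireme decodes it:

```python
import json

# A sample message in the shape Maxwell publishes for an INSERT
# (field layout follows Maxwell's documented JSON format).
raw = '''{
  "database": "testdb1",
  "table": "tb1",
  "type": "insert",
  "ts": 1533800000,
  "xid": 1234,
  "commit": true,
  "data": {"a": 1, "b": "a"}
}'''

event = json.loads(raw)

def describe(event):
    """Render a Maxwell change event as a one-line summary."""
    return "{type} on {database}.{table}: {data}".format(**event)

print(describe(event))  # insert on testdb1.tb1: {'a': 1, 'b': 'a'}
```

Updates and deletes use the same envelope with `"type": "update"` or `"type": "delete"`, which is how Bireme knows which operation to replay on the Greenplum side.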
V. Bireme installation and startup configuration
1. Download address: github.com/HashDataInc/bireme/releases
2. Bireme official document: github.com/HashDataInc/bireme/blob/master/README_zh-cn.md
3. Decompress: tar xf bireme-1.0.0.tar.gz && cd bireme-1.0.0
4. Modify the configuration file: vim etc/config.properties
# target database where the data will sync into.
target.url = jdbc:postgresql://xxx.xxx.xxx.xxx:5432/testdb
target.user = testdb
target.passwd = testdb
# data source name list, separated by comma.
data_source = maxwell1
# data source "mysql1" type
maxwell1.type = maxwell
# kafka server which maxwell write binlog into.
maxwell1.kafka.server = 127.0.0.1:9092
# kafka topic which maxwell write binlog into.
maxwell1.kafka.topic = green
# kafka groupid used for consumer.
maxwell1.kafka.groupid = bireme
# set the IP address for bireme state server.
state.server.addr = 0.0.0.0
# set the port for bireme state server.
state.server.port = 8080
5. Modify the table-mapping configuration file: vim etc/maxwell1.properties
Note: the file name maxwell1.properties must match the data_source name (maxwell1) configured in Bireme's config.properties.
testdb1.tb1 = public.tb1
testdb2.tb1 = public.tb1
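The mapping file is plain `source_db.table = target_schema.table` pairs. A small hypothetical helper (not part of Bireme) showing how such a file can be parsed and sanity-checked before starting the service:

```python
def parse_table_map(text):
    """Parse 'db.table = schema.table' lines into a dict, skipping blanks and comments."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        source, target = (part.strip() for part in line.split("=", 1))
        if "." not in source or "." not in target:
            raise ValueError("bad mapping line: " + line)
        mapping[source] = target
    return mapping

sample = """
# table mapping
testdb1.tb1 = public.tb1
"""
print(parse_table_map(sample))  # {'testdb1.tb1': 'public.tb1'}
```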
6. Start Bireme: bin/bireme start
VI. Testing
1. MySQL source database
insert into tb1 select 1,'a';
insert into tb1 select 2,'b';
2. PostgreSQL target database
testdb=# select * from tb1;
a | b
---+------------
1 | a
2 | b
(2 rows)
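Beyond eyeballing the two result sets, sync correctness can be checked by comparing the rows fetched from both sides. A hypothetical pure-Python helper (the actual fetching from MySQL and Greenplum is assumed to happen elsewhere, e.g. via the databases' own clients):

```python
def diff_rows(source_rows, target_rows):
    """Compare two row collections; return (missing_in_target, extra_in_target)."""
    src, tgt = set(source_rows), set(target_rows)
    return sorted(src - tgt), sorted(tgt - src)

# Rows as fetched from tb1 on each side (values from the test above).
source = [(1, "a"), (2, "b")]
target = [(1, "a"), (2, "b")]

missing, extra = diff_rows(source, target)
print("in sync" if not missing and not extra else (missing, extra))  # in sync
```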
At this point, you should have a deeper understanding of how MySQL data is synchronized to Greenplum. Try it out yourself!