
Oracle 12c: synchronizing to a heterogeneous database with GoldenGate and Kafka middleware

2025-01-16 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 report --

A few days ago this requirement went live in the test environment; now the production environment needs it as well. The environment is as follows:

a. Data source: SSP schema, table ssp.m_system_user; Oracle DB 12.1.0.2.0; OGG Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO

b. Data target: MySQL, DLS schema, table DLS_SYSTEM_USER

c. Kafka cluster: 10.1.1.247; OGG Version 12.3.0.1.0 OGGCORE_OGGADP.12.3.0.1.0GA_PLATFORMS_170828.1608

Since Oracle 12c uses a multitenant architecture, several points need to be considered when synchronizing with OGG:

A CDB contains multiple PDBs.

The extract can only run in integrated capture mode; classic capture is not supported against a multitenant database.

Because integrated extract needs access to the log mining server, which can only be reached from cdb$root.

The source side connects with a common user (e.g. c##ogg), so that it can access the redo logs and all PDBs of the database.

In GGSCI or in parameter files, use pdb.schema.table to reference a specific table or sequence.

Alternatively, specify the PDB once with the SOURCECATALOG parameter in the parameter file; the parameters that follow then only need schema.table.
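As an illustration of the two naming styles in a parameter file (PDB and table names from this article):

```
-- style 1: three-part names, PDB given explicitly
TABLE SALESPDB.ssp.m_system_user;

-- style 2: set the PDB once, then use two-part names
SOURCECATALOG SALESPDB
TABLE ssp.m_system_user;
```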

On the target side there must be one replicat process per PDB; that is, a single replicat can deliver to only one PDB, not to several.

The source-side OGG user must be granted privileges, e.g. exec dbms_goldengate_auth.grant_admin_privilege('C##GGADMIN', container=>'all'); it is also recommended to give the OGG user DBA: grant dba to c##ogg container=all.

In addition to enabling archiving, force logging, and minimal supplemental logging, the source DB also needs another switch turned on: alter system set enable_goldengate_replication=true
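A minimal sketch of the source-side preparation implied by the notes above, run as SYSDBA from cdb$root (the scope=both clause is an assumption; enabling archivelog mode requires a restart and is omitted here):

```sql
-- replication switch and logging prerequisites
alter system set enable_goldengate_replication=true scope=both;
alter database force logging;
alter database add supplemental log data;

-- common user for OGG, visible in all PDBs
create user c##ggadmin identified by ggadmin container=all;
grant dba to c##ggadmin container=all;
exec dbms_goldengate_auth.grant_admin_privilege('C##GGADMIN', container=>'all');
```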

Specific implementation steps

1. Add additional logs to the tables to be synchronized

dblogin userid ogg@SALESPDB, password OGG_PROD

add trandata ssp.m_system_user
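To confirm that supplemental logging was actually enabled for the table, it can be checked from the same GGSCI session (the exact output wording varies by version):

```
GGSCI> info trandata ssp.m_system_user
```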

2. Add extraction process

add extract EXT_KAF4, integrated tranlog, begin now    -- integrated mode is the 12c difference

add exttrail ./dirdat/k4, extract EXT_KAF4, megabytes 200

GGSCI (salesdb as ogg@salesdb/SALESPDB) 17 > dblogin useridalias ggroot

Successfully logged into database CDB$ROOT.

-- the extract process must be registered in the CDB (from cdb$root)

register extract EXT_KAF4 database container (SALESPDB)

edit params EXT_KAF4

EXTRACT EXT_KAF4
USERID c##ggadmin, PASSWORD ggadmin
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
EXTTRAIL ./dirdat/k4, FORMAT RELEASE 12.1
SOURCECATALOG SALESPDB
TABLE ssp.m_system_user;

3. Add delivery process:

add extract PMP_KAF4, exttrailsource ./dirdat/k4

add rmttrail ./dirdat/b4, extract PMP_KAF4, megabytes 200

edit params PMP_KAF4

EXTRACT PMP_KAF4
USERID c##ggadmin, PASSWORD ggadmin
PASSTHRU
RMTHOST 10.1.1.247, MGRPORT 9178
RMTTRAIL ./dirdat/b4, FORMAT RELEASE 12.1
SOURCECATALOG SALESPDB
TABLE ssp.m_system_user;

4. Add initialization process

add extract ek_04, sourceistable    -- added on the source side

EXTRACT ek_04
USERID c##ggadmin, PASSWORD ggadmin
RMTHOST 10.1.1.247, MGRPORT 9178
RMTFILE ./dirdat/b5, maxfiles 999, megabytes 500, format release 12.1
SOURCECATALOG SALESPDB
TABLE ssp.m_system_user;

5. Generate the def file:

edit param salesdb4

USERID c##ggadmin, PASSWORD ggadmin
DEFSFILE /home/oracle/ogg/ggs12/dirdef/salesdb4.def, format release 12.1
SOURCECATALOG SALESPDB
TABLE ssp.m_system_user;

Execute the following command under OGG_HOME to generate the def file

defgen paramfile ./dirprm/salesdb4.prm

Transfer the generated def file to $OGG_HOME/dirdef on the target (Kafka) side.

-- MySQL database address: 10.1.11.24

-- Kafka addresses: 10.1.1.246, 10.1.1.247; topic: DLS_MERCHANT

1. Add the initialization process (parameter file in dirprm):

add replicat rp_06, specialrun

EDIT PARAMS rp_06

SPECIALRUN
END RUNTIME
SETENV (NLS_LANG="AMERICAN_AMERICA.ZHS16GBK")
TARGETDB LIBFILE libggjava.so SET property=./dirprm/kafka_k05.props
SOURCEDEFS ./dirdef/salesdb4.def
EXTFILE ./dirdat/b5
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP SALESPDB.SSP.M_SYSTEM_USER, TARGET DLS.DLS_SYSTEM_USER;

2. Add replication process:

add replicat rep_04, exttrail ./dirdat/b4

edit params rep_04

REPLICAT rep_04
SETENV (NLS_LANG="AMERICAN_AMERICA.ZHS16GBK")
HANDLECOLLISIONS
TARGETDB LIBFILE libggjava.so SET property=./dirprm/kafka_k05.props
SOURCEDEFS ./dirdef/salesdb4.def
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP SALESPDB.SSP.M_SYSTEM_USER, TARGET DLS.DLS_SYSTEM_USER;

3. Parameter configuration:

cd /home/appgroup/ogg/ggs12/dirprm

(the Kafka producer settings live in the custom_kafka_producer.properties file in this directory)

vi kafka_k05.props

gg.handlerlist=kafkahandler
gg.handler.kafkahandler.type=kafka
gg.handler.kafkahandler.KafkaProducerConfigFile=custom_kafka_producer.properties
# The topic name (a fixed topic is used here; template variables such as ${tableName} are also possible)
gg.handler.kafkahandler.topicMappingTemplate=DLS_MERCHANT
# The following selects the message key using the concatenated primary keys
# gg.handler.kafkahandler.keyMappingTemplate=
# gg.handler.kafkahandler.format=avro_op
gg.handler.kafkahandler.format=json
gg.handler.kafkahandler.format.insertOpKey=I
gg.handler.kafkahandler.format.updateOpKey=U
gg.handler.kafkahandler.format.deleteOpKey=D
gg.handler.kafkahandler.format.truncateOpKey=T
gg.handler.kafkahandler.format.prettyPrint=false
gg.handler.kafkahandler.format.jsonDelimiter=CDATA[]
gg.handler.kafkahandler.format.includePrimaryKeys=true
gg.handler.kafkahandler.SchemaTopicName=DLS_MERCHANT
gg.handler.kafkahandler.BlockingSend=false
gg.handler.kafkahandler.includeTokens=false
gg.handler.kafkahandler.mode=op

goldengate.userexit.timestamp=utc
goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE

gg.log=log4j
gg.log.level=INFO
gg.report.time=30sec

# Sample gg.classpath for Apache Kafka
gg.classpath=dirprm/:/opt/cloudera/parcels/KAFKA/lib/kafka/libs/*
# Sample gg.classpath for HDP
# gg.classpath=/etc/kafka/conf:/usr/hdp/current/kafka-broker/libs/*

javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=ggjava/ggjava.jar
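With the props above, each change record arrives on the DLS_MERCHANT topic as a JSON message whose op_type is I/U/D/T and which includes the primary keys. A minimal consumer-side sketch in plain Python (no Kafka client; the field names follow the OGG JSON formatter's defaults, which should be verified against your actual messages):

```python
import json

# Example record shaped like the OGG JSON formatter's default output
# (field names are an assumption to verify against real messages).
SAMPLE = '''{"table":"SALESPDB.SSP.M_SYSTEM_USER",
 "op_type":"U","op_ts":"2019-06-01 12:00:00.000000",
 "before":{"USER_ID":1,"USER_NAME":"old"},
 "after":{"USER_ID":1,"USER_NAME":"new"}}'''

def apply_op(record: dict) -> str:
    """Translate one OGG change record into a rough action description."""
    op = record["op_type"]
    if op == "I":
        return f"INSERT into {record['table']}: {record['after']}"
    if op == "U":
        return f"UPDATE {record['table']}: {record['before']} -> {record['after']}"
    if op == "D":
        return f"DELETE from {record['table']}: {record['before']}"
    if op == "T":
        return f"TRUNCATE {record['table']}"
    raise ValueError(f"unknown op_type {op!r}")

record = json.loads(SAMPLE)
print(apply_op(record))
```

In a real pipeline the same routing would sit inside the Kafka consumer loop that applies changes to the MySQL target.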

Start each process:

1. On the source side, start the extract, delivery (pump), and initialization processes.

2. On the target side, start the initialization process, run the initial load, and then start the replication process.

start rp_06

./replicat paramfile ./dirprm/rp_06.prm reportfile ./dirrpt/rp_06.rpt -p INITIALDATALOAD

start rep_04
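Collecting the start order above into GGSCI commands (process names from this article; rp_06 can alternatively be run from the shell with the replicat binary):

```
-- source side
GGSCI> start extract EXT_KAF4
GGSCI> start extract PMP_KAF4
GGSCI> start extract ek_04      -- one-time initial load
-- target side
GGSCI> start replicat rp_06     -- applies the initial load
GGSCI> start replicat rep_04    -- ongoing replication
```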
