
[OGG] A Compilation of OGG Basic Knowledge

1. Introduction to GoldenGate

GoldenGate is log-based structured data replication software. It captures, transforms, and delivers large volumes of transactional data in real time, synchronizing data between the source and target databases with sub-second latency.

GoldenGate supports a variety of topologies, including one-to-one, one-to-many, many-to-one, cascading, and bidirectional replication.

GoldenGate basic architecture

Oracle GoldenGate is mainly composed of the following components

● Extract

● Data pump

● Trails

● Collector

● Replicat

● Manager

The Oracle GoldenGate data replication process is as follows:

The extraction process (Extract Process) reads the Online Redo Log or Archive Log of the source database, parses it, and extracts only the data change information, such as DML operations (inserts, deletes, updates). The extracted information is converted into GoldenGate's intermediate format and stored in queue files (trail files). A transfer process then delivers the trail files to the target system over TCP/IP.

On the target side, a process called Server Collector receives the data change information transmitted from the source, caches it in GoldenGate queue files (trail files), and waits for the target-side replication process to read it.

The GoldenGate replication process (Replicat process) reads the data change information from the trail files, constructs the corresponding SQL statements, and executes them against the target database through its native interface. After a successful commit it updates its own checkpoint to record the position replication has reached, which completes the data replication cycle.

Oracle GoldenGate (OGG) can capture, transform, and transmit transactional change data in real time across diverse and complex IT architectures. Its data processing and exchange are transaction-based, and it supports heterogeneous platforms such as DB2, SQL Server, and others.

GoldenGate supports two main types of solutions for different business needs:

● High availability and disaster recovery solution

● Real-time data integration solution

Among them, the high availability and disaster recovery solution is mainly used to eliminate unplanned and planned downtime. It consists of the following three sub-scenarios:

1. Disaster recovery and emergency backup

2. Eliminate planned downtime

3. Dual business center (also known as active-active)

The real-time data integration solution mainly supplies real-time data to DSS or OLTP databases to achieve data integration and consolidation. It includes the following two sub-schemes:

1. Real-time supply of data warehouse

2. Real-time reporting

Flexible topologies make flexible solutions possible for users.

The following figure is a typical logical structure diagram of a GoldenGate configuration:

① Manager

As the name implies, the Manager process is GoldenGate's control process; it manages the Extract, Data Pump, and Replicat processes.

Before the Extract, Data Pump, and Replicat processes start, the Manager process must be started on both the source side and the target side,

and it must remain running throughout the GoldenGate run.

⒈ monitors and starts other processes of GoldenGate

⒉ manages trail files and reporting

On Windows systems, the Manager process is started as a service, and on Unix systems it is a process

② Extract

The Extract process runs on the source database side. It is GoldenGate's capture mechanism. You can configure the Extract process to do the following:

⒈ initial data loading: for initial data loading, the Extract process extracts data directly from the source object

⒉ synchronous change capture: keep the source data synchronized with other datasets. After the initial data synchronization is completed, the Extract process captures changes in the source data, such as DML changes, DDL changes, etc.

③ Replicat

The Replicat process is a process running on the target system, which is responsible for reading the data extracted by the Extract process (changed transactions or DDL changes) and applying it to the target database.

Just like the Extract process, you can configure the Replicat process to do the following:

⒈ initialization data loading: for initialization data loading, the Replicat process applies data to the target object or routes them to a high-speed Bulk-load tool

⒉ data synchronization, which applies committed transactions captured by the Extract process to the target database

④ Collector

Collector is a background process running on the target side

Receive the database changes transmitted from the TCP/IP network and write to the Trail file

Dynamic collector: a collector started automatically by the Manager process is called a dynamic collector; users cannot interact with a dynamic collector.

Static collector: a collector can also be configured to be run manually; this is called a static collector.

⑤ Trails

In order to continuously extract and replicate database changes, GoldenGate temporarily stores the captured data changes in a series of files on disk, called Trail files.

These files can reside on the source database server, on the target database server, or on an intermediate system, depending on the configuration you choose.

The trail on the database source side is called the Local Trail or Extract Trail; the one on the target side is called the Remote Trail.

⑥ Data Pumps

Data Pump is an auxiliary Extract mechanism configured on the source side

Data Pump is an optional component. If Data Pump is not configured, the Extract main process sends the data to the Remote Trail file on the target side.

If Data Pump is configured, the local Trail file written by the Extract main process will be sent by Data Pump to the Remote Trail file on the target side over the network

The benefits of using Data Pump are:

⒈ if the destination side or network fails, the Extract process on the source side will not terminate unexpectedly

⒉ data needs to be filtered or transformed at different stages

⒊ multiple source databases need to be replicated (consolidated) into a central data center

⒋ data needs to be replicated to multiple target databases

⑦ Data source

When processing transactional change data, the Extract process can obtain it directly from the transaction log of the database (Oracle, DB2, SQL Server, MySQL, etc.),

or obtain it through a GoldenGate VAM. With a VAM, the database vendor provides the components the Extract process needs to extract the data changes.

⑧ Groups

To distinguish between multiple Extract and Replicat processes on a system, we can define process groups

For example, to replicate different datasets in parallel, we can create two Replicat groups

A process group consists of a process (Extract process or Replicat process), a corresponding parameter file, a Checkpoint file, and other related files

If the process in the processing group is a Replicat process, then the processing group also contains a Checkpoint table

Introduction to GoldenGate

Oracle GoldenGate is log-based structured data replication and backup software. It obtains incremental data changes by parsing the online logs or archived logs of the source database, then applies these changes to the target database, thereby synchronizing the source and target databases. Oracle GoldenGate can replicate large volumes of data in real time at sub-second latency across heterogeneous IT infrastructures (covering almost all common operating system and database platforms), so it can be used for emergency standby systems, online reporting, real-time data warehouse feeding, transaction tracking, data synchronization, centralization/distribution, disaster recovery, database upgrade and migration, dual business centers, and other scenarios. At the same time, Oracle GoldenGate supports a variety of flexible topologies, such as one-to-one, broadcast (one-to-many), aggregation (many-to-one), bidirectional, point-to-point, and cascading.

GoldenGate technical architecture

Like traditional logical replication, Oracle GoldenGate works by extracting the redo log or archive log on the source side, delivering it to the target side through TCP/IP, and finally parsing and applying it on the target side, so that the target side stays synchronized with the source side. The following is the technical architecture of Oracle GoldenGate:

Manager process

The Manager process is GoldenGate's control process and runs on both the source and target sides. Its main functions are: starting, monitoring, and restarting other GoldenGate processes; reporting errors and events; allocating data storage space; and issuing threshold reports. There is one and only one Manager process on each of the source and target sides, and its state is either running or stopped. On Windows the Manager process is started as a service, while on Linux/Unix it runs as a system process.

Extract process

Extract runs on the source side of the database and is responsible for capturing data from source tables or logs. The role of Extract can be divided into the following phases:

Initial data loading phase: during initial data loading, the Extract process extracts data directly from the source tables.

Synchronous change capture phase: after the initial data synchronization is completed, the Extract process is responsible for capturing changes in the source data (DML and DDL)

Note that GoldenGate does not support DDL operations on all databases.

The Extract process captures changes to all configured objects that need to be synchronized, but only sends committed transactions to the remote trail files for synchronization. When a transaction commits, all log records related to that transaction are written to the trail file in transaction order. The Extract process uses its built-in checkpoint mechanism to periodically record its read and write positions; this ensures that if the Extract process terminates or the operating system crashes, GoldenGate can, after Extract restarts, return to its previous state and resume from the last breakpoint. These two mechanisms together guarantee data integrity.

Multiple Extract processes can work on different objects at the same time. For example, one Extract process can extract and transmit transactional data to the target side while another Extract process extracts data for reporting tables. Alternatively, two Extract processes can write to two trail files and transmit them in parallel to two Replicat processes to reduce data synchronization latency.

During initial load or batch synchronization, GoldenGate generates extract files rather than trail files to store the data. By default only one extract file is generated, but multiple extract files can be generated through configuration if the operating system has a single-file size limit or for other reasons. Extract files do not record checkpoints.

The status of the Extract process can be Stopped (stopped normally), Starting (starting), Running (running), or Abended (short for Abnormal End, indicating abnormal termination).


Pump process

The Pump process runs on the source side of the database. Its function is to send the local trail files generated on the source side to the target side in data blocks over the TCP/IP protocol; this is usually the recommended approach. The Pump process is essentially a special form of the Extract process. If local trail files are not used, the Extract process delivers the data directly to the target side and generates the remote trail files there after extraction.

Corresponding to the Pump process on the target side is the Server Collector process, which does not need our attention: in practice we never have to configure it, so it is transparent to us. It runs on the target side, and its task is to reassemble the data delivered by Extract/Pump into remote trail files.

Note: trail files are generated on the target side regardless of whether the Pump process is used.

The Pump process can be configured for online or batch operation. It can perform data filtering, mapping, and transformation, or it can be configured in "pass-through mode," in which the data is transferred to the target side directly in the desired format without additional processing. Pass-through mode improves the efficiency of the data pump because object definitions do not need to be retrieved.

In most cases, oracle recommends data pump for the following reasons:

1. Protection against failures of the target side or the network: if trail files are configured only on the target side, the content extracted by the extract process is held in memory on the source side and sent to the target as it is produced. When the network or the target fails, the extract process cannot send the data in time, runs out of memory, and terminates abnormally. If a data pump process is configured on the source side, the captured data is written to trail files on local disk, preventing the abnormal termination. When the fault is repaired and connectivity between the source and target is restored, the data pump process sends the source-side trail files to the target.

2. Support for complex data filtering or transformation: when using data filtering or transformation, you can configure one data pump process to perform the first stage of the transformation on the source or target side, and use another data pump process or the Replicat group for the second stage.

3. Effective planning of storage resources: when synchronizing from multiple data sources to one data center, the extracted data can be kept on the source side while the trail files are kept on the target side, saving storage space.

4. Eliminating the single point of failure when transmitting data from one source to multiple targets: when sending data from one data source to multiple targets, a separate data pump process can be configured for each target. If one target or its network fails, the other targets are unaffected and can continue to synchronize data.

Replicat process

The Replicat process is what we usually call the apply process. It runs on the target side and is the last stop in data delivery: it reads the contents of the target-side trail files, parses them into DML or DDL statements, and applies them to the target database.

Like the Extract process, Replicat has its internal checkpoint mechanism to ensure that it can be recovered from the last recorded location after restart without the risk of data loss.

The status of the Replicat process can be Stopped (stopped normally), Starting (starting), Running (running), or Abended (short for Abnormal End, indicating abnormal termination).

Trail file

To deliver database transaction information from the source side to the target side more efficiently and safely, GoldenGate introduces the concept of trail files. As mentioned earlier, after Extract has extracted the data, GoldenGate converts the extracted transaction information into files in its proprietary format, and Pump then delivers these trail files from the source side to the target side, so such files exist on both sides. The purpose of trail files is to avoid a single point of failure: they persist the transaction information, and the checkpoint mechanism records their read and write positions, so that if a failure occurs the data can be retransmitted from the position recorded by the checkpoint. Of course, Extract can also send data directly to the target side over TCP/IP to generate remote trail files, but as mentioned earlier this approach risks data loss, so I will not repeat it here.

A trail file defaults to 10 MB and is named with a two-character prefix followed by a six-digit sequence number from 000000 to 999999, for example ./dirdat/tr000001. Trail files are stored by default in the dirdat subdirectory of the GoldenGate installation. You can create different trail files for different applications or objects. At any one time, only one Extract process writes to a given trail file.

GoldenGate versions after 10.0 store records containing trail file information in the trail file header; versions prior to 10.0 do not store this information. Each data record in a trail file contains a header area and a data area: the header area contains transaction information, and the data area contains the actual extracted data.
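For illustration, a trail only needs its two-character prefix to be named when it is added in GGSCI; GoldenGate generates the numbered files itself. A minimal sketch (the prefix tr, the group name extn, and the 10 MB size are illustrative):

GGSCI > add exttrail ./dirdat/tr, extract extn, megabytes 10
-- GoldenGate then writes ./dirdat/tr000000, ./dirdat/tr000001, ... rolling to the next file at 10 MB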

How does the process write trail files

To reduce system load, extracted data is written to trail files in large byte blocks. At the same time, to improve compatibility, the data stored in trail files is kept in a common data format (one that can be converted quickly and accurately between heterogeneous databases). Of course, data can also be stored in different formats according to the needs of different applications.

By default, the extract process writes to the trail file in append mode. If the extract process terminates abnormally, the trail file is marked as requiring recovery; when extract restarts, it appends the data after the checkpoint to the trail file. Prior to GoldenGate 10.0, the extract process used overwrite mode instead: after an abnormal termination, it would overwrite the existing contents of the trail file starting from the last completely written transaction.

The author's understanding here is not very thorough; the original English text follows, and readers are welcome to offer suggestions.

By default, Extract operates in append mode, where if there is a process failure, a recovery marker is written to the trail and Extract appends recovery data to the file so that a history of all prior data is retained for recovery purposes.

In append mode, the Extract initialization determines the identity of the last complete transaction that was written to the trail at startup time. With that information, Extract ends recovery when the commit record for that transaction is encountered in the data source; then it begins new data capture with the next committed transaction that qualifies for extraction and begins appending the new data to the trail. A data pump or Replicat starts reading again from that recovery point.

Overwrite mode is another version of Extract recovery that was used in versions of GoldenGate prior to version 10.0. In these versions, Extract overwrites the existing transaction data in the trail after the last write-checkpoint position, instead of appending the new data. The first transaction that is written is the first one that qualifies for extraction after the last read checkpoint position in the data source.
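For reference, this recovery behavior can be selected explicitly in the Extract parameter file with the RECOVERYOPTIONS parameter; a sketch (APPENDMODE is already the default in GoldenGate 10.0 and later, and OVERWRITEMODE is kept only for backward compatibility):

RECOVERYOPTIONS APPENDMODE
-- RECOVERYOPTIONS OVERWRITEMODE    -- emulates the pre-10.0 behavior described above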

Checkpoint

Checkpoints are used to relocate the starting point of extraction or replication after a failure (such as a system outage or a network fault). In advanced synchronization configurations, multiple extract or replicat processes can read the same set of trail files, each with its own configured checkpoints.

The extract process records checkpoints in both the data source and the trail file; Replicat records checkpoints only in the trail file.

In batch mode, neither the extract nor the replicat process records checkpoints; if a batch fails, the entire batch is simply redone.

Checkpoint information is stored by default in dirchk, a subdirectory of the GoldenGate installation. On the target side, in addition to the checkpoint file, we can also configure an additional checkpoint table to store the replicat checkpoint information.
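A minimal sketch of setting up such a checkpoint table on the target side (the schema ogg and the table name ggs_checkpoint are illustrative): register the name in the GLOBALS file, then create the table from GGSCI:

-- in ./GLOBALS on the target side:
CHECKPOINTTABLE ogg.ggs_checkpoint

GGSCI > dblogin userid ogg, password ogg
GGSCI > add checkpointtable ogg.ggs_checkpoint

Replicat groups added afterwards then record their checkpoints in this table as well as in the dirchk checkpoint file (the implementation section below deliberately uses the nodbcheckpoint option instead, skipping the table).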

Group

We can distinguish the roles of different processes by grouping the extract and replicat processes. For example, when we need to replicate different datasets in parallel, we can create two or more replication process groups.

A process group contains the process, its parameter file, its checkpoint file, and other process-related files. For a replicat process, if a checkpoint table is configured, the group also includes the checkpoint table.


GGSCI

GGSCI is an acronym for GoldenGate Software Command Interface, which provides a wealth of commands to perform various operations on Goldengate, such as creating, modifying, monitoring GoldenGate processes, and so on.

Commit Sequence Number

As mentioned several times above, GoldenGate guarantees data integrity on a transaction-by-transaction basis. So how does GoldenGate identify a transaction? This is where the Commit Sequence Number (CSN) comes in. The CSN is stored in the transaction logs and in the trail files and is used for data extraction and replication. The CSN is recorded in the trail file as the mark of the start of a transaction, and it can be viewed with the @GETENV column-conversion function or the logdump tool. The form the CSN takes differs by database platform.
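As an illustration of reading the CSN, a Replicat MAP can stamp it into an audit column with the @GETENV function; a sketch in which the tables src.orders and tgt.orders and the column csn_applied are hypothetical:

MAP src.orders, TARGET tgt.orders, COLMAP (USEDEFAULTS, csn_applied = @GETENV ("TRANSACTION", "CSN"));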

GoldenGate's support for different databases

* Can be used only as the target side, not the source side; the exception is that GoldenGate can extract data from MySQL source tables as part of a GoldenGate direct load. As the author is not familiar with MySQL, this is only a literal translation; the original text is as follows:

The exception being that GoldenGate can extract records from MySQL source tables as part of a GoldenGate direct load.

** An API tool for transaction data management in GoldenGate

*** Only mirror replication is supported; data manipulation, filtering, and column mapping are not supported.

References: "Oracle GoldenGate Administrator Guide"

and Chapter 1 of GoldenGate Practice in the Enterprise IT Operation and Maintenance series.

II. GoldenGate installation and implementation

2.1 Create the GoldenGate software installation directory

Create the file system /u01/gg on the database server as the installation directory for GoldenGate.

2.2 The GoldenGate administrative user

The oracle OS user can be used to install and maintain the GoldenGate software. The owner of the GoldenGate installation directory must be the GoldenGate administrative user. In this implementation, use the oracle user as the GoldenGate administrative user and add the following environment variables for the oracle user (do the following on both the production side and the disaster recovery side):

export GG_HOME=/u01/gg

export LD_LIBRARY_PATH=$GG_HOME:$ORACLE_HOME/lib:/usr/bin:/lib

export PATH=$GG_HOME:$PATH
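To make these variables persist across logins, they can be appended to the oracle user's shell profile. A minimal sketch, assuming bash and a ~/.bash_profile on both sides:

# run as the oracle user; appends the GoldenGate variables to the login profile
cat >> ~/.bash_profile <<'EOF'
export GG_HOME=/u01/gg
export LD_LIBRARY_PATH=$GG_HOME:$ORACLE_HOME/lib:/usr/bin:/lib
export PATH=$GG_HOME:$PATH
EOF
source ~/.bash_profile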

2.3 install GoldenGate software

Switch to the oracle user, place the GoldenGate software package in the GoldenGate installation directory /u01/gg, and decompress it there (do the following on both the production side and the disaster recovery side):

tar -zxvf *.gz

Go to the GoldenGate installation directory and run the GGSCI command to enter the GG interface (do the following on both the production side and the disaster recovery side):

cd /u01/gg

./ggsci

Create a subdirectory under the GGSCI interface (do the following on both the production side and the disaster recovery side):

GGSCI > create subdirs

At this point, the GoldenGate software has been installed.

2.4 set database archiving mode

View the archive mode of the database:

SQL > archive log list

If it is in non-archive mode, you need to enable the archive mode:

shutdown immediate

startup mount

alter database archivelog;

alter database open;

2.5 Enable supplemental logging for the database

Enable supplemental logging and switch the log file (to make sure the Online redo log and the Archive log are consistent):

alter database add supplemental log data;

alter database add supplemental log data (primary key, unique, foreign key) columns;

alter system switch logfile;

2.6 Enable database force logging mode

alter database force logging;
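The settings from sections 2.4 to 2.6 can be verified in a single query; a minimal sketch using the standard v$database columns:

-- expected after 2.4-2.6: ARCHIVELOG, YES, YES
SQL > select log_mode, supplemental_log_data_min, force_logging from v$database;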

2.7 Create the GoldenGate administrative user

The following operations should be carried out on both the production side and the disaster recovery side:

-- create tablespace

SQL > create tablespace ogg datafile '$ORACLE_BASE/oradata/test/ogg01.dbf' size 300m;

-- create the user

SQL > create user ogg identified by ogg default tablespace ogg;

-- grant role privileges

SQL > grant resource, connect, dba to ogg;
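On Oracle 11.2.0.4 and later, the OGG-specific privileges can alternatively be granted through the dbms_goldengate_auth package rather than a blanket dba role; a sketch, assuming the ogg user created above:

SQL > exec dbms_goldengate_auth.grant_admin_privilege('OGG')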

2.8 Edit the GLOBALS parameter file

Change to the GoldenGate installation directory and execute the command:

cd /u01/gg

./ggsci

GGSCI > EDIT PARAMS ./GLOBALS

Add the following to the file:

GGSCHEMA ogg    -- the database user designated for DDL replication

Use the default key to generate ciphertext:

GGSCI > encrypt password ogg encryptkey default

Encrypted password: AACAAAAAAAAAAADAHBLDCCIIOIRFNEPB

Record this ciphertext, which will be used in the configuration of the following process parameters.

2.9 Management process (MGR) parameter configuration

PORT 7839

DYNAMICPORTLIST 7840-7860

-- AUTOSTART ER *

-- AUTORESTART EXTRACT *, RETRIES 5, WAITMINUTES 3

PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 2

USERID ogg, PASSWORD AACAAAAAAAAAAADAHBLDCCIIOIRFNEPB, ENCRYPTKEY default

PURGEDDLHISTORY MINKEEPDAYS 11, MAXKEEPDAYS 14

PURGEMARKERHISTORY MINKEEPDAYS 11, MAXKEEPDAYS 14

2.10 Extraction process EXTN parameter configuration

EXTRACT extn

SETENV (NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252)

USERID ogg, PASSWORD AACAAAAAAAAAAADAHBLDCCIIOIRFNEPB, ENCRYPTKEY default

REPORTCOUNT EVERY 1 MINUTES, RATE

DISCARDFILE ./dirrpt/discard_extn.dsc, APPEND, MEGABYTES 1024

DBOPTIONS ALLOWUNUSEDCOLUMN

WARNLONGTRANS 2h, CHECKINTERVAL 3m

EXTTRAIL ./dirdat/na

TRANLOGOPTIONS EXCLUDEUSER OGG

TRANLOGOPTIONS ALTARCHIVEDLOGFORMAT t_%s_%r.dbf

FETCHOPTIONS NOUSESNAPSHOT

TRANLOGOPTIONS CONVERTUCS2CLOBS

TRANLOGOPTIONS ALTARCHIVELOGDEST PRIMARY INSTANCE test /oradata/arch

-- TRANLOGOPTIONS RAWDEVICEOFFSET 0

DYNAMICRESOLUTION

DDL INCLUDE ALL

DDLOPTIONS ADDTRANDATA, NOCROSSRENAME, REPORT

TABLE QQQ.*;

TABLE CUI.*;

2.11 Transport process DPEN parameter configuration

EXTRACT dpen

RMTHOST 192.168.4.171, MGRPORT 7839, COMPRESS

PASSTHRU

NUMFILES 50000

RMTTRAIL ./dirdat/na

TABLE QQQ.*;

TABLE CUI.*;

2.12 Create the DDL objects for OGG

$ cd /u01/gg

$ sqlplus "/ as sysdba"

SQL > @marker_setup.sql

Enter GoldenGate schema name: ogg

SQL > alter system set recyclebin=off;

SQL > @ddl_setup.sql

Enter GoldenGate schema name: ogg

SQL > @role_setup.sql

Grant this role to each user assigned to the Extract, Replicat, GGSCI, and Manager processes, by using the following SQL command:

SQL > GRANT GGS_GGSUSER_ROLE TO <user>

where <user> is the user assigned to the GoldenGate processes.

Note the hint here: you need to manually grant GGS_GGSUSER_ROLE to the database user used by extract (that is, the user specified with userid in the parameter file). You can execute similar SQL in sqlplus:

SQL > GRANT GGS_GGSUSER_ROLE TO ogg;

Note: the ogg here is the user used by extract. If you have multiple extracts using different database users, repeat the above grant of GGS_GGSUSER_ROLE for each of them.

Run the following script to make the trigger effective:

SQL > @ddl_enable.sql

Note: before enabling extraction on the production side, disable the DDL capture trigger by calling ddl_disable.sql.

2.13 data initialization

During initialization the source database does not need to be shut down. The initialization process is divided into three parts:

The production side starts the extraction process.

Export data from the production side

Import data from disaster recovery end

Add the extraction process, the transmission process and the corresponding queue file on the production side, and execute the command as follows:

-- create process EXTN

GGSCI > add extract extn, tranlog, begin now

GGSCI > add exttrail ./dirdat/na, extract extn, megabytes 500

-- create process DPEN

GGSCI > add extract dpen, exttrailsource ./dirdat/na

GGSCI > add rmttrail ./dirdat/na, extract dpen, megabytes 500

Start the management process on the production side:

GGSCI > start mgr

Enable DDL capture trigger:

$ cd /u01/gg

$ sqlplus "/ as sysdba"

SQL > @ddl_enable.sql

Start the extraction process on the production side:

GGSCI > start EXTN

In the database, get the current SCN number and record the SCN number:

SQL > select to_char(dbms_flashback.get_system_change_number) from dual;

603809

In the database, create the required directories for the data pump and grant permissions:

SQL > CREATE OR REPLACE DIRECTORY DATA_PUMP AS '/u01';

SQL > grant read, write on DIRECTORY DATA_PUMP to ogg;

Use data pump to export data on the production side:

expdp ogg/ogg schemas='QQQ' directory=DATA_PUMP dumpfile=QQQ_bak_%U flashback_scn=603809 logfile=expdp_QQQ.log filesize=4096m

expdp ogg/ogg schemas='CUI' directory=DATA_PUMP dumpfile=CUI_bak_%U flashback_scn=603809 logfile=expdp_CUI.log filesize=4096m

expdp ogg/ogg schemas='test1' directory=DATA_PUMP dumpfile=test1_bak_%U flashback_scn=603809 logfile=expdp_test1.log filesize=4096m

Transfer the exported file to the disaster recovery end, and use the data pump to import the data:

impdp ogg/ogg DIRECTORY=DATA_PUMP DUMPFILE=QQQ_bak_%U logfile=impdp_QQQ.log

impdp ogg/ogg DIRECTORY=DATA_PUMP DUMPFILE=CUI_bak_%U logfile=impdp_CUI.log

2.14 Disaster recovery side management process MGR parameter configuration

PORT 7839

DYNAMICPORTLIST 7840-7860

-- AUTOSTART ER *

-- AUTORESTART EXTRACT *, RETRIES 5, WAITMINUTES 3

PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 2

USERID ogg, PASSWORD AACAAAAAAAAAAADAHBLDCCIIOIRFNEPB, ENCRYPTKEY default

2.15 Editing the GLOBALS parameter file

Change to the GoldenGate installation directory and execute the command:

cd /u01/gg

./ggsci

GGSCI > EDIT PARAMS ./GLOBALS

Add the following to the file:

GGSCHEMA ogg    -- the database user designated for DDL replication

2.16 Disaster recovery side replication process REPN parameter configuration

REPLICAT repn

SETENV (NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252)

USERID ogg, PASSWORD AACAAAAAAAAAAADAHBLDCCIIOIRFNEPB, ENCRYPTKEY default

SQLEXEC "ALTER SESSION SET CONSTRAINTS=DEFERRED"

REPORT AT 01:59

REPORTCOUNT EVERY 30 MINUTES, RATE

REPERROR DEFAULT, ABEND

ASSUMETARGETDEFS

DISCARDFILE ./dirrpt/repna.dsc, APPEND, MEGABYTES 1024

DISCARDROLLOVER AT 02:30

ALLOWNOOPUPDATES

REPERROR (1403, discard)

DDL INCLUDE MAPPED

DDLOPTIONS REPORT

MAPEXCLUDE QQQ.T0417

MAP QQQ.*, TARGET QQQ.*;

MAP CUI.*, TARGET CUI.*;

2.17 create replication process repn

Execute the following command to create the replication process repn:

GGSCI > add replicat repn, exttrail ./dirdat/na, nodbcheckpoint

2.18 start the transmission process on the production side and the replication process on the disaster recovery side

GGSCI > start dpen

GGSCI > start replicat repn, aftercsn 603809

2.19 Test scenario

(1) create a table on the production side database.

(2) modify the data of this table on the production side database.

(3) on the production side database, delete this table.

III. GoldenGate basic operation and maintenance commands

(1) View process status

GGSCI > info all

-- check the overall operation of GG, such as process Lag latency and checkpoint latency.

GGSCI > info <process name>

-- check the health of a specific process, such as which archive log or online redo log the extraction process is reading, which queue file the transfer process is transmitting, and which queue file the replication process is reading.

GGSCI > info <process name>, showch

-- view the details of a running process.

(2) View the process report

GGSCI > view report <process name>

-- when a process reports an error, get the error message from the process report.

(3) on the operating system, check the usage of the GoldenGate installation directory

$ df -h

-- check whether the ogg directory is full.
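To see how much of that space is taken by the queue files themselves, a quick check (the path is this article's installation directory; adjust to your environment) is:

$ du -sh /u01/gg/dirdat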

IV. Use of Logdump tool

V. Basic GoldenGate performance optimization

Batchsql

Insert abend

Limit memory usage

Splitting by granularity

VI. Goldengate version upgrade

VII. Two-way goldengate replication

VIII. Failback between production repository and disaster recovery database

IX. Data conversion, filtering, and selection between heterogeneous databases

X. Common troubleshooting

Failure (1)

Error message:

OGG-00446 Could not find archived log for sequence 53586 thread 1 under alternative destinations. SQL. Last alternative log tried /arch_cx/1_53586_776148274.arc, error retrieving redo file name for sequence 53586, archived = 1, use_alternate = 0. Not able to establish initial position for sequence 53586, rba 44286992.

Treatment method:

Recover the missing archive logs from backup. If the required archive logs still cannot be found, the only option is to re-run data initialization.

Failure (2)

Error message:

OGG-01154 Oracle GoldenGate Delivery for Oracle, repn.prm: SQL error 1691 mapping DATA_USER.DMH_WJXXB to DATA_USER.DMH_WJXXB OCI Error ORA-01691: unable to extend lob segment DATA_USER.SYS_LOB0000083691C00014$$ by 16384 in tablespace DATA_USER_LOB_U128M_1 (status = 1691), SQL.

Treatment method:

A tablespace in the database is full and needs to be extended.

Failure (3)

Error message:

OGG-00664 OCI Error during OCIServerAttach (status = 12541-ORA-12541: TNS:no listener).

Treatment method:

Start the database listener.

Failure (4)

Error message:

OGG-00665 OCI Error describe for query (status = 3135-ORA-03135: connection lost contact Process ID: 8859 Session ID: 131 Serial number: 31), SQL.

Treatment method:

The database was shut down before the OGG processes were stopped, causing the OGG process to fail. If you see this error, close the OGG processes immediately, keep an eye on the database archive logs to make sure none go missing, and once the database has started successfully, start the OGG processes right away.

Failure (5)

Error message:

OGG-01161 Bad column index (4) specified for table QQQ.TIANSHI, max columns = 4.

Treatment method:

Compare the structure of this table between the production side and the disaster recovery side. If the table on the disaster recovery side lacks a column, log in to the disaster recovery database, add the column, and then start the replication process.

Failure (6)

Error message:

ERROR OGG-00199 Table QQQ.T0417 does not exist in target database.

Treatment method:

Check whether the source-side extraction process parameters include the DDL replication parameters, and re-initialize the data for this table.

GoldenGate Operation and Maintenance Manual

Description of OGG common monitoring commands

The easiest way to monitor GoldenGate instances is through the GGSCI command line. Determine whether GoldenGate is working properly by entering a series of commands on the command line and viewing the return information. The information returned from the command line includes an overall overview, process running status, checkpoint information, parameter file configuration, latency, and so on.

In addition to logging in to the GGSCI interface directly on the host, you can also log in to each GoldenGate instance through the GoldenGate Director Web interface and run GGSCI commands there. If a customer deploys many GoldenGate instances, logging in to each instance's GGSCI interface separately is inconvenient; it is recommended to log in to each instance through the GoldenGate Director Web interface and run the command-line commands from there.

Start the GoldenGate process

1) first log in to the source system as the system user who started the GoldenGate process (usually oracle).

2) enter the GoldenGate installation directory and execute ./ggsci to enter command-line mode.

3) start the source management process GGSCI > start mgr

4) similarly, log in to the target-side GoldenGate installation directory, execute ./ggsci, and then execute GGSCI > start mgr to start the management process.

5) execute GGSCI > start er * on the source side to start all processes

6) also log in to the backup side and execute GGSCI > start er * to start all processes

7) use GGSCI > info er * or GGSCI > info <process name> to check whether the process status is Running (indicating that it has started). Note that some processes take a few minutes to come up; repeat the command to watch their startup status.

Description: on both the source and the target, the mgr process must be started before starting the extract/replicat processes.

The general usage of the start command is: start <process name>

Such as:

GGSCI > start extdm    -- start the process named extdm

You can also use wildcards, such as:

GGSCI > start er *    -- start all extract and replicat processes

GGSCI > start extract *d*    -- start all extract processes whose names contain the letter 'd'

GGSCI > start replicat rep*    -- start all replicat processes whose names begin with "rep"

Stop the GoldenGate process

Follow these steps to stop the GoldenGate process:

1) Log in to the source host as the system user who starts the GoldenGate processes (usually oracle), enter the GoldenGate installation directory, and execute ./ggsci to enter the command-line management interface

2) (this step applies only to the main extract process that reads the logs; the data pump process and the replicat process do not need it) verify that the logs required for the GoldenGate extraction process to restart still exist. Execute the following command for each main extXX process:

GGSCI > info extXX, showch

......

Read Checkpoint # 1

......

Recovery Checkpoint (position of oldest unprocessed transaction in the data source):

Thread #: 1

Sequence #: 9671

RBA: 239077904

Timestamp: 2008-05-20 11:39:07.000000

SCN: 2195.1048654191

Redo File: Not available

Current Checkpoint (position of last record read in the data source):

Thread #: 1

Sequence #: 9671

RBA: 239377476

Timestamp: 2008-05-20 11:39:10.000000

SCN: 2195.1048654339

Redo File: Not Available

Read Checkpoint # 2

......

Recovery Checkpoint (position of oldest unprocessed transaction in the data source):

Thread #: 2

Sequence #: 5287

RBA: 131154160

Timestamp: 2008-05-20 11:37:42.000000

SCN: 2195.1048640151

Redo File: /dev/rredo07

Current Checkpoint (position of last record read in the data source):

Thread #: 2

Sequence #: 5287

RBA: 138594492

Timestamp: 2008-05-20 11:39:14.000000

SCN: 2195.1048654739

Redo File: /dev/rredo07

......

First, look at the oldest log sequence number that the Recovery Checkpoint needs to read. In this example, thread 1 requires log 9671 and all subsequent archive logs, and thread 2 requires log 5287 and all subsequent archive logs. Verify that these archive logs exist in the archive log directory before proceeding to the next step; if these logs have been deleted, the archive logs must be restored before the next restart.

Note: OGG 11 and later add automatic caching of long transactions. By default, uncommitted transactions are automatically cached to the local hard disk every 4 hours, so at most about 8 hours of archive logs are needed. However, long-transaction caching works only while extract is running; nothing is cached after it stops, in which case the minimum archive-log retention is 8 hours plus the downtime. It is generally recommended to keep 12 hours of archive logs plus the downtime when restarting.

3) execute GGSCI > stop er * to stop all source-side processes, or stop each process separately with stop <process name>

4) log in to the target system as the oracle user, enter the installation directory /oraclelog1/goldengate, and execute ./ggsci to enter the command line

5) execute stop er * on the target system to stop replication

6) when the processes at both ends have been stopped, the management processes in each system can be stopped through stop mgr if necessary.

Similarly, the stop command is used in the same way as the start command. I won't repeat it here.

Note that if you are only modifying extraction or replication process parameters, you do not need to stop MGR. Do not stop the MGR process lightly, and use the wildcard er * carefully, so as not to adversely affect other replication processes.

Check the overall operation

Go to the GoldenGate installation directory, run GGSCI, and then use the info all command to view the overall status, as shown in the following figure:

Group is the process name (the MGR process does not display a name); Lag is the process delay; Status is the process state. There are four states:

STARTING: indicates that it is in the process of starting

RUNNING: indicates that the process is running normally

STOPPED: indicates that the process is shut down normally

ABENDED: indicates that the process is shut down abnormally, and further investigation is needed.

Normally, the state of all processes should be RUNNING, and the Lag should be within a reasonable range.

View parameter settings

Use view params <process name> to view the parameter settings of a process. The command also supports the wildcard character *.

View process status

Use the info command to view process information. The information you can see includes process status, checkpoint information, latency, and so on. Such as:

You can also use the info detail command to view more detailed information. Including the trail files used, parameter files, report files, warning log location, and so on. Such as:

Use the info <process name>, showch command to view detailed checkpoint information, which shows the transaction records processed by the GoldenGate process. Particularly important is the recovery checkpoint of the extract process, which represents the earliest unprocessed transaction in the source data; through the recovery checkpoint you can see which redo log file contains that transaction's redo and the sequence number of that log file. All log files with larger sequence numbers need to be retained.

View delay

GGSCI > lag <process name> shows detailed delay information. For example:

This command is more accurate than the delay information you can see with the info command.

Note that this command can only see the delay information of the last processed record.

This command supports the wildcard character *.

View statistics

GGSCI > stats <process name>, table <schema.table> shows the number of records processed by the process. The report lists in detail the operation types and record counts. For example:

GGSCI > stats edr, total lists all records processed since the process started.

GGSCI > stats edr, daily, table gg.test lists all records involving the gg.test table processed since the start of the current day.

View the run report

GGSCI > view report <process name> shows the running report. For example:

You can also go to the dirrpt/ subdirectory and view the corresponding report file. The latest report is always named <process name>.rpt; reports with a numeric suffix are historical reports, and the larger the number, the older the report. As shown in the following figure:

If there is an error while the process is running, the error code and detailed error diagnostic information are included in the report file. By finding the error code, you can help locate the cause of the error and solve the problem.

OGG common operation and maintenance tasks

Configure automatic queue (trail file) deletion

1) enter the installation directory and execute ./ggsci

2) execute edit param mgr to edit the management process parameters, and add or modify the following line:

PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 7

The first parameter is the queue location; * matches all queue files there.

The second parameter means that checkpoint requirements must be satisfied first: queue files that have not been fully processed are never deleted.

The third parameter is the minimum number of days to keep, followed by the number of days. For example, to keep the queue files /ggs/dirdat/xm for only 3 days, configure:

PURGEOLDEXTRACTS /ggs/dirdat/xm, USECHECKPOINTS, MINKEEPDAYS 3

3) stop and restart the MGR process for the modified parameters to take effect

GGSCI > stop mgr

Enter y to confirm stop

GGSCI > start mgr

Note: temporarily stopping the mgr process does not affect data replication.

Configure Extract and Replicat processes to start automatically when MGR starts

1) enter the installation directory and execute ./ggsci

2) execute edit param mgr to edit the management process parameters, and add the following line:

AUTOSTART ER *

3) stop the MGR process, modify the parameters and restart the process

GGSCI > stop mgr

GGSCI > start mgr

Note: it is generally recommended to start processes manually rather than automatically, so you can observe the status and verify that startup succeeded; likewise, modify parameters manually.

Configure MGR to automatically restart Extract and Replicat processes

GoldenGate can restart extract and replicat processes automatically, recovering from errors caused by network interruptions or temporary database hangs and restarting the related processes after the system recovers, without human intervention.

1) enter the installation directory and execute ./ggsci to enter the command-line interface

2) execute edit param mgr to edit the management process parameters, and add the following line:

AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60

The above parameters mean: attempt to restart all processes at 5-minute intervals, 3 attempts in total; reset the retry counter every 60 minutes, after which another round of 3 attempts at 5-minute intervals is allowed.

3) stop and restart the MGR process for the modified parameter file to take effect

GGSCI > stop mgr

GGSCI > start mgr

Long-term transaction management

Before stopping the extraction process, check via the following command whether there are long-running transactions, to avoid being unable to find the required archive logs at the next start:

GGSCI > info extXX, showch

......

Read Checkpoint # 1

......

Recovery Checkpoint (position of oldest unprocessed transaction in the data source):

Thread #: 1

Sequence #: 9671

RBA: 239077904

Timestamp: 2008-05-20 11:39:07.000000

SCN: 2195.1048654191

Redo File: Not available

Current Checkpoint (position of last record read in the data source):

Thread #: 1

Sequence #: 9671

RBA: 239377476

Timestamp: 2008-05-20 11:39:10.000000

SCN: 2195.1048654339

Redo File: Not Available

Read Checkpoint # 2

......

Recovery Checkpoint (position of oldest unprocessed transaction in the data source):

Thread #: 2

Sequence #: 5287

RBA: 131154160

Timestamp: 2008-05-20 11:37:42.000000

SCN: 2195.1048640151

Redo File: /dev/rredo07

Current Checkpoint (position of last record read in the data source):

Thread #: 2

Sequence #: 5287

RBA: 138594492

Timestamp: 2008-05-20 11:39:14.000000

SCN: 2195.1048654739

Redo File: /dev/rredo07

......

To facilitate the management of long transactions, GoldenGate provides commands to view them, which helps customers and application developers find the corresponding long transactions and commit or roll them back.

(1) How to view long transactions

GGSCI > send extract <process name>, showtrans [thread n] [count n]

where <process name> is the name of the process to view, such as extsz/extxm/extjx;

thread n is optional and means that only uncommitted transactions on one node (thread) are shown;

count n is also optional and means that only n records are displayed. For example, to view the 10 longest transactions on node 1 in the extsz process, use the following command:

GGSCI > send extract extsz, showtrans thread 1 count 10

The output is a list of all uncommitted transactions in descending order of age. The corresponding transactions can be found by their xid. Ask the application developers and DBAs to help determine why they are uncommitted; after the transactions are committed or rolled back in the database, the GoldenGate checkpoint moves forward automatically.

(2) Skipping or accepting long transactions with GoldenGate commands

To force a specified transaction to be skipped, or to be treated as committed, in GoldenGate, you can use the following commands (the transaction ID and thread number are taken from the showtrans output above):

GGSCI > SEND EXTRACT <process name>, SKIPTRANS <transaction ID> THREAD <n>    -- skip the transaction

GGSCI > SEND EXTRACT <process name>, FORCETRANS <transaction ID> THREAD <n>    -- treat the transaction as already committed

Note: these commands only make the GoldenGate process skip the transaction or assume that it has been committed; they do not change the transactions in the database, which still exist there. Therefore it is strongly recommended to commit or roll back the transactions in the database rather than handle them with GoldenGate.

(3) Configure long-transaction alerts

You can configure long-transaction alerts in the extract process, with parameters like the following:

EXTRACT extsz

......

WARNLONGTRANS 12h, CHECKINTERVAL 10m

EXTTRAIL /backup/goldengate/dirdat/sz

......

The above means that GoldenGate checks for long transactions every 10 minutes; if a transaction has been open for more than 12 hours, GoldenGate writes a warning message to ggserr.log in the installation root directory. You can view these alerts by looking at ggserr.log or by executing the view ggsevt command in ggsci. This configuration helps find and handle long transactions in a timely manner.

Note: in OGG 11g, extract provides the BR parameter, which caches long transactions to the local disk (the dirtmp directory by default) at regular intervals (4 hours by default). Therefore, as long as extract is running, no more than about 8 hours of archive logs are needed (in the limiting case). However, once extract is stopped, long transactions are no longer cached automatically, and the archive logs required will depend on the length of the downtime.
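A sketch of setting this bounded-recovery cache explicitly in an Extract parameter file (the directory and interval shown follow the defaults described in the note above; adjust them to your environment):

-- checkpoint long-running open transactions to disk every 4 hours
BR BRDIR ./dirtmp, BRINTERVAL 4H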

Resynchronization of the table (time window required)

If some tables have become inconsistent between the two sides for various reasons and need to be resynchronized, you can refer to the following steps.

1) confirm that there are no data changes to the tables to be resynchronized (if conditions permit, it is recommended to stop the application and lock all users except sys and goldengate to prevent data changes during the operation, or to lock the tables to be resynchronized)

2) restart the dpe process (so that its statistics are zeroed)

3) stop the rep process on the target side

Note: steps 4-6 import the source-side data to the target side through exp/imp; customers can also choose other initialization methods, such as creating a dblink on the target side pointing to the source table and initializing the target table with create table as select.

4) use exp to export the table or several table data on the source side. For example:

exp goldengate/XXXX file=nanhai.dmp tables=ctais2.SB_ZSXX grants=y

5) transfer to the destination side via ftp

6) on the destination side, import data using imp

nohup imp goldengate/XXXXX file=nanhai.dmp fromuser=ctais2 touser=ctais2 ignore=y &

7) if these tables have foreign keys, check those foreign keys on the target side and disable them (remember to maintain the corresponding disable and enable SQL scripts under dirsql)

8) start the rep process on the target side

9) use the stats mydpe command to observe the data pump statistics and check whether they include data changes for the resynchronized tables. If you confirm that there were no data changes to these tables during this period, the reinitialization succeeded. Otherwise duplicate data may have been captured in the interval and the target replicat will report an error; in that case set the error handling to reperror default, discard, wait for replicat to catch up, and then verify the records in the discard file. If everything is consistent, the reinitialization can be considered successful. Of course, you can also choose another time to re-initialize these tables.

Resynchronization of tables (no time window required)

If some tables have become inconsistent between the two sides for various reasons and need to be resynchronized, but the business runs 24 hours a day and no time window is available, you can refer to the following steps. (Because of its complexity, use this procedure with care.)

1) confirm that the ext/dpe/rep processes have no significant delay; otherwise wait for them to catch up before performing the operation.

2) stop the rep process on the target side

Note: in steps 3-5, customers can also choose other initialization methods instead of exp/imp, such as expdp/impdp, to import the source-side data to the target side.

3) obtain the current scn number on the source side. For example:

select dbms_flashback.get_system_change_number from dual;

In the following example, the SCN obtained is 1176681.

4) on the source side, use exp to export the table or tables that need to be reinitialized, specifying the SCN recorded above. For example:

exp goldengate/XXXXX file=nanhai.dmp tables=ctais2.SB_ZSXX grants=n statistics=none triggers=n compress=n FLASHBACK_SCN=1176681

5) transfer to the destination side via ftp

6) on the destination side, import data using imp

nohup imp goldengate/XXXXX file=nanhai.dmp fromuser=ctais2 touser=ctais2 ignore=y &

7) if these tables have foreign keys, check those foreign keys on the target side and disable them (remember to maintain the corresponding disable and enable SQL scripts under dirsql)

8) edit the target-side rep parameter file and add a filter condition to the map entries of the reinitialized tables, so that only records after the recorded SCN are applied to them (be careful not to modify the maps of tables outside this initialization, which would cause data loss!):

MAP source.mytab, TARGET target.mytab, FILTER (@GETENV ("TRANSACTION", "CSN") > 1176681);

9) after confirming that the parameters are correct, start the rep process on the target side

10) use info repxx or lag repxx to watch until the process catches up, then stop the process, remove the filter, and resume normal replication.

Data structure changes and application upgrades (when replicating DML only)

Adding or removing replicated tables on the source and target sides

(1) Adding replicated tables

In the GoldenGate process parameters, if all tables are matched with the wildcard *, then as long as a newly created source table satisfies the wildcard condition, GoldenGate replicates it automatically without any change to the configuration files; only supplemental log data needs to be added for the new table.

The steps are as follows:

GGSCI > dblogin userid goldengate, password XXXXXXX

GGSCI > info trandata <schema>.<table name>

If it is not enabled, you need to add it manually:

GGSCI > add trandata <schema>.<table name>

Note: (for Oracle 9i only) if the table has a primary key or the table has no more than 32 columns, enabled is displayed, indicating that it was added successfully; if there is no primary key and there are more than 32 columns, an error may be reported saying it cannot be added, and manual handling is required. In that case follow the method in Appendix 2.

If the wildcard is not used, the new replicated table needs to be added to the table list in the main Extract and Data Pump parameter files, and the table's mapping added to the map list of the target-side replicat.

Then, to add the table, first establish the table structure on the target side.

If the table has foreign keys or triggers, you need to temporarily disable the foreign keys and triggers on the target table, and maintain the corresponding scripts that disable and enable these objects under dirsql (see the sketch below).
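A sketch of generating such disable scripts from the data dictionary (run in sqlplus as the target schema owner; the spool path is illustrative, and a matching enable script can be built the same way):

SQL > set heading off feedback off pages 0
SQL > spool /u01/gg/dirsql/disable_fk.sql
SQL > select 'alter table '||table_name||' disable constraint '||constraint_name||';' from user_constraints where constraint_type = 'R';
SQL > spool off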

For all source and target processes whose parameter files were modified, you need to restart the processes for the new parameters to take effect.

(II) reduce the number of replication tables

GoldenGate replicates all tables that meet the wildcard conditions by default. If some tables are no longer needed, you can drop them on the source side and then drop them on the destination side without making any changes to the replication configuration.

If some of these tables must continue to exist but should no longer be replicated by GoldenGate, exclude them by following these steps:

1) first verify the existence of the required archive logs on the source system, and then stop the corresponding extXX process through stop extXX

2) execute stop repXX in ggsci on the destination system to stop the replication process on the destination side

3) modify the parameter file of the ext process on the source side to exclude the tables that are not copied:

GGSCI > edit param extXX

……

Tableexclude ctais2.TMP_*

Tableexclude ctais2.BAK_*

Tableexclude ctais2.MLOG$_*

Tableexclude ctais2.RUPD$_*

Tableexclude ctais2.KJ_*

Tableexclude myschema.mytable

Table ctais2.*

…… .

In the file, add a line "tableexclude schema.tablename;" before the line that defines the tables. Note that the schema name and table name must be written out in full.

Note: if you do not use wildcards, you can comment out the table line where the table is located.

4) modify the rep process parameters on the destination side, and also exclude the table:

GGSCI > edit param repXX

Add a line before map:

-- mapexclude CTAIS2.SHOULIXINXI

Mapexclude myschema.mytable

MAP ctais2.*, TARGET ctais2.*

Note: if you do not use wildcards, you can comment out the map line where the table is located.

5) start the replication process repXX on the destination system

GGSCI > start repXX

6) start the extract process extXX on the source-side system

GGSCI > start extXX

Normal replication then resumes.

Modifying the table structure (when replicating DML only)

When the structure of a replicated table changes, such as adding columns or changing column attributes (for example, length), follow these steps:

1) stop the extraction and delivery processes on the source and destination sides in the order described earlier in this article (note that the source-side extract should verify the existence of the archive logs it still needs, so that it can be restarted). The manager process does not need to be stopped.

2) modify the target table structure

3) modify the source table structure

4) if the table has a primary key and the primary key is not modified this time, you can directly start all processes on the source and destination sides to resume replication and complete the modification; otherwise, if the table has no primary key or the primary key is modified this time, continue with the following steps:

GGSCI > dblogin userid goldengate, password XXXXXX

GGSCI > delete trandata schema.mytable

GGSCI > add trandata schema.mytable

(Oracle 9i only) if the table has more than 32 columns, the above operation may report an error and must be handled manually; refer to Appendix II for how to manually delete and add additional logs for the table.

5) restart the extract and replicat processes on the source and destination sides.

Upgrading customer applications (when replicating DML only)

If an application upgrade changes tables in the source system and DDL replication is not configured, the GoldenGate synchronization processes must be modified; refer to the following steps.

1) stop the extraction and delivery processes on the source and destination sides (note that the source-side extract should verify the existence of the needed archive logs, so that it can be restarted). The manager process does not need to be stopped.

2) upgrade the source system

3) re-create on the target side the stored procedures, tables, functions and other objects created by the application upgrade. DML operations (inserts, deletes and updates) on business tables need not be performed on the target side; they will be replicated by OGG

4) manually disable the newly created triggers and foreign keys on the target side, and add these SQL statements, together with the reverse-maintenance SQL (that is, re-enabling the triggers and foreign keys), to the corresponding script files in the target OGG dirsql directory

Note: at installation time, the scripts that disable triggers and foreign keys should be placed under the target dirsql directory; the recommended file names are disableTrigger.sql and disableFK.sql. At the same time, prepare the reverse-maintenance SQL (that is, re-enabling the triggers and foreign keys; recommended names enableTrigger.sql and enableFK.sql) and place it in the dirsql directory of the target OGG as well, in case the application needs to be taken over in the future.

5) for the tables added on the source side during the upgrade process, additional logs need to be added to the new tables. The steps are as follows:

GGSCI > dblogin userid goldengate, password XXXXXXX

GGSCI > info trandata schema.mytable

If it is not enabled, you need to add it manually:

GGSCI > add trandata schema.mytable

Note: (Oracle 9i only) if the table has a primary key or has no more than 32 columns, enabled is displayed, indicating success; if there is no primary key and the table has more than 32 columns, an error may be reported saying it cannot be added, and it must be handled manually according to the method in Appendix II.

6) for tables dropped on the source side during the upgrade: GoldenGate replicates all tables that meet the wildcard conditions by default, so you can drop them directly on the destination side without making any changes to the replication configuration.

7) if a table's primary key is modified during the upgrade, continue with the following steps:

GGSCI > dblogin userid goldengate, password XXXXXX

GGSCI > delete trandata schema.mytable

GGSCI > add trandata schema.mytable

(Oracle 9i only) if the table has more than 32 columns, the above operation may report an error and must be handled manually; refer to Appendix II for how to manually delete and add additional logs for the table.

8) restart the extract and replicat processes on the source and destination sides.

Configuring DDL replication to automatically synchronize data structure changes

Whether to turn on DDL replication

Please refer to the appendix for the specific restrictions on OGG DDL replication. In view of these limitations, and because the DDL trigger has a certain impact on the performance of the source database, DDL replication is not recommended in principle on the national network. If there is a special reason to turn on DDL replication, negotiate with the Oracle engineers first.

To turn on DDL replication

The following are the steps to configure DDL replication for reference only. For more information, please refer to the official installation documentation of GoldenGate.

● (optional, but strongly recommended) collect statistics regularly to improve data dictionary access speed

OGG DDL replication accesses data dictionary information heavily; collecting statistics regularly through the database (for example, once a month) can effectively improve OGG DDL replication performance. The following is an example:

sqlplus /nolog <<EOF

connect / as sysdba

execute dbms_stats.gather_schema_stats ('SYS',cascade=>TRUE)

execute dbms_stats.gather_schema_stats ('SYSTEM',cascade=>TRUE)

exit

EOF
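If a scheduler-based approach is preferred, here is a minimal sketch using DBMS_SCHEDULER (the job name OGG_DICT_STATS and the monthly frequency are illustrative assumptions):

SQL > begin
  dbms_scheduler.create_job(
    job_name        => 'OGG_DICT_STATS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin dbms_stats.gather_schema_stats(''SYS'',cascade=>TRUE); dbms_stats.gather_schema_stats(''SYSTEM'',cascade=>TRUE); end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=MONTHLY',
    enabled         => TRUE);
end;
/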

● Establish an OGG replication user, or grant permissions to an existing user:

CREATE USER goldengate IDENTIFIED BY goldengate DEFAULT TABLESPACE ts_ogg;

GRANT CONNECT TO goldengate;

GRANT RESOURCE TO goldengate;

GRANT DBA TO goldengate;

● Specify the schema where the DDL objects reside; here they are created directly under the goldengate user:

GGSCI > EDIT PARAMS ./GLOBALS

GGSCHEMA goldengate

● Check that the recyclebin parameter of the database is turned off:

SQL > show parameter recyclebin

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
recyclebin                           string      on

If it is not off, you need to disable recyclebin:

SQL > alter system set recyclebin=off;
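Note: depending on the database version, this parameter may not be changeable for the running instance; a hedged alternative is to set it in the spfile and restart:

SQL > alter system set recyclebin=off scope=spfile;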

● Create the DDL objects for OGG:

Sqlplus "/ as sysdba"

SQL > @marker_setup.sql

Enter GoldenGate schema name: goldengate

SQL > @ddl_setup.sql

Enter GoldenGate schema name: goldengate

SQL > @role_setup.sql

Grant this role to each user assigned to the Extract, Replicat, GGSCI, and Manager processes, by using the following SQL command:

GRANT GGS_GGSUSER_ROLE TO <user>

where <user> is the user assigned to the GoldenGate processes.

Note the hint here: you must manually grant GGS_GGSUSER_ROLE to the database user used by your extract (that is, the user specified through userid in the parameter file). You can execute similar SQL under sqlplus:

GRANT GGS_GGSUSER_ROLE TO ggs1;

Here ggs1 is the user used by extract. If you have multiple extracts using different database users, repeat the above grant of GGS_GGSUSER_ROLE for each of them.

● Enable the OGG DDL capture trigger

Execute the ddl_enable.sql script in sqlplus to enable the DDL capture trigger.

Note: the DDL capture trigger and the OGG extract process are independent of each other; the trigger does not depend on the existence of the extract process. Even if the OGG extract process does not exist or is not started, once the trigger is enabled, DDL capture continues. To stop DDL capture completely, perform the later step that disables the DDL trigger.

● (optional) install tools to improve OGG DDL replication performance

To improve OGG DDL replication performance, you can add the ddl_pin script to the database startup scripts; it takes the OGG DDL user (that is, the user who owns the DDL objects, in this case goldengate) as a parameter:

SQL > @ddl_pin goldengate

● (if DDL replication is no longer needed) disable the OGG DDL capture trigger

Execute the ddl_disable.sql script in sqlplus to disable the DDL capture trigger.

Typical configuration of DDL replication

In GoldenGate, the DDL switch of the data pump and replicat processes is turned on by default; only the main extract has it off by default, so DDL configuration is generally done only in the main extract. Given the various OGG restrictions described in the appendix, if DDL replication must be turned on, it is recommended to enable it only for tables and indexes, which are the objects most closely tied to the data. The parameters are as follows:

DDL &

INCLUDE MAPPED OBJTYPE 'table' &

INCLUDE MAPPED OBJTYPE 'index'

DDLOPTIONS ADDTRANDATA, NOCROSSRENAME

In addition, add parameters to the mgr to automatically purge the DDL history tables:

Userid goldengate,password XXXXX

PURGEDDLHISTORY MINKEEPDAYS 3, MAXKEEPDAYS 7

PURGEMARKERHISTORY MINKEEPDAYS 3, MAXKEEPDAYS 7

For other objects, manual maintenance is still recommended, upgrading both ends at the same time. Note that cascading delete constraints and triggers should be disabled on the target side immediately after the objects are created there.

Exception handling plans

Network failure

If the autorestart parameter is set in the MGR parameter file, GoldenGate can restart the processes automatically without human intervention.

When the network fails, the Datapump process responsible for writing the remote queue stops automatically. The MGR process then periodically restarts the Datapump process according to the autorestart setting in mgr.prm, which in effect probes whether the network has recovered. Once the network recovers, the Datapump process restarts successfully, and GoldenGate's checkpoint mechanism ensures that it resumes replication from the log position where it was last interrupted.

Note that because the source-side extract process (Capture) continues to read the logs and write to the local queue files while the Datapump process cannot move them to the remote side, the local queue files cannot be purged automatically and will accumulate. Enough storage space is therefore needed to hold the accumulated queue files. The calculation formula is as follows:

Storage capacity ≥ queue volume generated per unit time × network failure recovery time
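For example, if trail files are generated at roughly 5 GB per hour and a network outage could take up to 10 hours to recover, at least 50 GB of local queue storage should be reserved (the figures are illustrative).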

Reference configuration for MGR to periodically restart the extract and replicat processes:

GGSCI > edit param mgr

Port 7809

AUTORESTART ER *, WAITMINUTES 3, RETRIES 5, RESETMINUTES 60

This waits 3 minutes before each restart attempt and retries each failed process up to 5 times; after 60 minutes the retry counter is reset, allowing another round of retries.

Single node failure in RAC environment

In a RAC environment, the GoldenGate software is installed in a shared directory, and the GoldenGate command-line interface can be started from any node that mounts that directory. If one node fails and the GoldenGate processes abort, you can switch directly to another node and continue running. It is recommended to do the following with the assistance of Oracle technical support:

1) Log in to the source system as an oracle user (through another intact node)

2) confirm that the file system where GoldenGate is installed is mounted to another node in the same directory

3) confirm that the GoldenGate installation directory belongs to oracle users and their groups

4) confirm that oracle users and their groups have read and write permissions to the GoldenGate installation directory

5) enter the goldengate installation directory

6) execute ./ggsci to enter the command-line interface

7) execute start mgr to start mgr

8) execute start er * to start all processes

Check whether each process starts properly; replication can then resume normally. This switchover can be automated by integrating with cluster software such as CRS or HACMP; for specific steps, refer to the national network test documentation.

Common exceptions in Extract process

For the source database, if an extract process extXX goes ABENDED, you can view its report with the view report command in ggsci; searching for ERROR quickly locates the error.

In general, the reason for an extract abend is that it cannot find the corresponding archive log. In the archive log directory you can run:

ls -lt arch_X_XXXXX.arc

Check to see if the log exists, and if not, the possible reasons are:

● the log has been compressed

GoldenGate cannot decompress files automatically; they must be decompressed manually before they can be read.

● the log has been deleted

If the log has been deleted, it must be restored before replication can continue; contact your unit's DBA to restore the archive log.

Archived logs generally need to be backed up regularly and old ones cleared. Ensure that archive logs remain in the archive directory long enough before being backed up and cleared; that is, back up and clear only archives older than several hours, not all archives. The retention time is calculated as follows:

Retention time of an archive file ≥ the time required by the extract process to process all the logs in that file

You can run the info extXX, showch command from the GGSCI command line or from the GoldenGate Director Web interface to see which log sequence number the extract process extXX is currently processing. All archives before this sequence number can be safely cleared.
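As a hedged housekeeping example, once the extract has been confirmed to be past them, older archives can be removed with RMAN (the one-day safety margin is an illustrative assumption):

RMAN > delete archivelog all completed before 'sysdate-1';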

Common exceptions in Replicat process

For the target database, if a replicat process repXX goes ABENDED, you can view its report with the view report command in ggsci; searching for ERROR quickly locates the error.

Errors in the replicat process are usually target database errors, such as:

1) temporary database downtime

2) insufficient storage space for the target tablespace

3) there is inconsistency in the target table.

Check the cause of the error in the report, eliminate it, and restart the rep process.

One thing to note: UNDO tablespaces are easy to overlook. If the DML contains a large number of update and delete operations, undo is generated very quickly on the target side and may fill the UNDO tablespace, so check UNDO tablespace usage frequently.
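A hedged example of checking free space in the undo tablespace (the tablespace name UNDOTBS1 is illustrative):

SQL > select tablespace_name, sum(bytes)/1024/1024 free_mb from dba_free_space where tablespace_name='UNDOTBS1' group by tablespace_name;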

General steps for exception handling

If there is an exception in GoldenGate replication, you can try to resolve the problem by following these steps:

1. Find the word ERROR through the ggsci > view report command, determine the cause of the error from its context, and eliminate it

2. View the alarm log information via ggsci > view ggsevt

3. Check whether the databases on both sides are running properly and whether the network is connected.

4. If you cannot determine the cause of the error, you can seek Oracle technical support. The following information is generally required when seeking technical support:

A) error description

B) the process report, located under dirrpt, named with the uppercase process name and ending in .rpt; for example, EXTSZ.rpt if the process is named extsz

C) GGS log ggserr.log, located in the GGS home directory

D) the discarded-records report, defined by the discardfile parameter of the replicat process, usually ending in .dsc

E) the current queue (trail) files, located under dirdat

Appendix: Oracle GoldenGate V11.1 data replication restrictions

Unstructured data replication such as files is not supported

GoldenGate relies on parsing database logs to obtain data changes, so it can only replicate data changes inside the database; it cannot replicate unstructured data such as files.

Oracle data type restrictions

GoldenGate supports replication of common Oracle data types.

● Data types not supported by GoldenGate:

A) ANYDATA

B) ANYDATASET

C) ANYTYPE

D) BFILE

E) BINARY_INTEGER

F) MLSLABEL

G) PLS_INTEGER

H) TIMEZONE_ABBR

I) TIMEZONE_REGION

J) URITYPE

K) UROWID

● GoldenGate has limited support for XMLType replication:

- Oracle 9i and later only

- The table must have a primary key or unique index

● GoldenGate has limited support for UDT (user-defined type) replication:

- If you have data of this type, contact technical support and provide a script.

Oracle DML operation support

GoldenGate currently supports all DML operations on regular tables, and offers limited support for DML on some special objects. For special tables or objects, refer to the following items.

● GoldenGate does not support nologging tables and similar objects.

When a table or tablespace is set to NOLOGGING, data inserted through non-logged paths such as SQL*Loader direct load or append-mode inserts is not written to the database log, so GoldenGate cannot capture these changes. It is recommended that all business tables to be replicated be set to LOGGING status and that nologging tables not be replicated. A hedged query for spotting such tables follows.
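A hedged query for finding NOLOGGING tables in a schema (the schema name CTAIS2 is illustrative):

SQL > select owner, table_name from dba_tables where logging='NO' and owner='CTAIS2';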

● GoldenGate does not support the following objects and operations:

A) REF

B) tablespaces and tables created using the COMPRESS option

C) Database Replay

● GoldenGate supports replication of sequences

● GoldenGate can support synonyms and DBLinks indirectly, by replicating the source tables they operate on.

Because operations on these objects actually take place in the underlying source database objects they point to, and are not themselves recorded in the database log, GoldenGate does not replicate operations on the synonym or DBLink itself; the operations are applied to the source table and generate logs there, so the changes can be replicated by replicating the source table.

● GoldenGate has limited support for IOT (index-organized table) replication:

- Oracle 10.2 and later only

- IOTs created using MAPPING TABLE are supported, but only data changes in the base table are extracted, not in the MAPPING TABLE.

- IOTs stored in compressed mode are not supported; for example, an IOT stored in a tablespace created with the COMPRESS option.

● GoldenGate has limited support for clustered table replication:

- Oracle 9i and later only

- Encrypted or compressed clustered tables are not supported

● GoldenGate has limited support for materialized view replication:

- Materialized views created with the WITH ROWID option are not supported

- The source table must have a primary key

- Truncate of materialized views is not supported; DELETE FROM is supported

- The target materialized view must be updatable

- Full refresh of materialized views is only supported in Oracle 10g and later

Oracle DDL replication restrictions

GoldenGate DDL replication works by capturing the SQL on the source database through a trigger and replaying it on the target side. In practice there are many limitations; SQL that executes successfully on the source may fail on the target. Common problems include:

● DDL cannot execute successfully when an object referenced in the SQL statement does not exist on the target. For example, if the source creates a DBLink, or runs create table as select * from a table over that DBLink, the destination side may not have the database or object the DBLink points to, and the SQL statement will report an error

● When the physical layouts of the two ends differ, statements involving physical locations, such as data files or tablespaces, must be rewritten on the target side to use the target's physical locations

● When a constraint is created without an explicit name, objects with different system-generated names are created at the source and target, so later modifications to these objects cannot be mapped correctly to the target side

● When replicating a table with LOB columns, the DDL operation must wait for the DML operations to complete before it is replicated

● Tables with Chinese index or column names cannot be replicated

● Chinese comments cannot be added in the definition of a table or other object

● CREATE trigger/procedure/function/package statements containing compilation errors cannot be replicated

● SQL statements ending with '/' cannot be replicated

In addition, GoldenGate DDL replication requires turning off Oracle's _RECYCLEBIN parameter (Oracle 10.1) or RECYCLEBIN parameter (Oracle 10.2 and later).

Another important point: because GoldenGate DDL replication is trigger-based, it may degrade the performance of the source database, which is another reason DDL replication is not recommended. For more information, refer to the national network OGG implementation principles.

Description: please refer to OGG's official reference manual for more details.

How to add additional logs for unkeyed tables with more than 32 columns in Oracle 9i

The essence of adding additional log operations to a database table is to execute the following SQL statement:

ALTER TABLE schema.tablename

ADD SUPPLEMENTAL LOG GROUP group_name (column1, column2, ...) ALWAYS;

Oracle GoldenGate's add trandata also executes this statement, as follows:

1) when the table has a primary key, all columns that serve as the primary key are added to the columns clause and added to the additional log group

2) if there is no primary key, a unique index will be found, and the unique index column will be added to the additional log group in the columns clause.

3) if there is no primary key and unique index, all columns are added to the additional log group.

When adding additional logs to a table without a primary key or unique index, Oracle 9i has a restriction: each additional log group cannot exceed 32 columns (an approximate number, depending on the actual column definition lengths). In that case GoldenGate's add trandata command will fail. The workaround is to split all the columns of the table into groups of no more than 32 columns and add an additional log group for each, giving each group a different name. Here is an example for a table with more than 32 columns:

ALTER TABLE SIEBEL.XYZ_SQL ADD SUPPLEMENTAL LOG GROUP GGS_XYZ_SQL_649101_1 (ACTION, ACTION_HASH, ADDRESS, BUFFER_GETS, CHILD_ADDRESS, CHILD_LATCH, CHILD_NUMBER, COMMAND_TYPE, CPU_TIME, DISK_READS, ELAPSED_TIME, EXECUTIONS, FETCHES, FIRST_LOAD_TIME, HASH_VALUE, INSTANCE_ID, INVALIDATIONS, IS_OBSOLETE, KEPT_VERSIONS, LAST_LOAD_TIME, LITERAL_HASH_VALUE, LOADED_VERSIONS, LOADS, MODULE, MODULE_HASH, OBJECT_STATUS, OPEN_VERSIONS, OPTIMIZER_COST, OPTIMIZER_MODE, OUTLINE_CATEGORY, OUTLINE_SID, PARSE_CALLS) ALWAYS;

ALTER TABLE SIEBEL.XYZ_SQL ADD SUPPLEMENTAL LOG GROUP GGS_XYZ_SQL_649101_2 (PARSING_SCHEMA_ID, PARSING_USER_ID, PERSISTENT_MEM, PLAN_HASH_VALUE, REMOTE, ROWS_PROCESSED, RUNTIME_MEM, SERIALIZABLE_ABORTS, SHARABLE_MEM, SNAP_ID, SORTS, SQLTYPE, SQL_TEXT, TYPE_CHK_HEAP, USERS_EXECUTING, USERS_OPENING) ALWAYS;

Note: after adding additional logs manually, you cannot use info trandata in ggsci to view them. Instead, query whether any tables are missing additional logs with the following statement:

SQL > select * from dba_log_groups where owner='SIEBEL' and table_name='XXX';

To verify that all the required columns are in the additional logs, query dba_log_group_columns as well; a hedged example follows.
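A hedged example query, continuing the example above:

SQL > select log_group_name, column_name, position from dba_log_group_columns where owner='SIEBEL' and table_name='XYZ_SQL' order by log_group_name, position;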

If you need to drop an additional log group, use the following form:

ALTER TABLE schema.tablename

DROP SUPPLEMENTAL LOG GROUP group_name;

Analysis of the OGG character set

The Oracle database character set, once chosen, is best left unmodified. The character set also matters for Oracle GoldenGate: if the target-side character set is inconsistent with the source side and some characters cannot be represented on the target side, OGG may not be able to guarantee data consistency.

Source database character set:

SQL > select value from v$nls_parameters where parameter='NLS_CHARACTERSET';

VALUE

AL32UTF8

On the source side, the author (Xiaoyu) sets SETENV (NLS_LANG="AMERICAN_AMERICA.ZHS16GBK") to specify the source client character set:

GGSCI (dg01) 21 > view params exiaoyu

Extract exiaoyu

SETENV (NLS_LANG= "AMERICAN_AMERICA.ZHS16GBK")

SETENV (ORACLE_SID= "xiaoyu")

Userid ogg,password ogg

Dynamicresolution

Gettruncates

Report at 2:00

Reportrollover at 3:00

Warnlongtrans 3h,checkinterval 10m

Exttrail ./dirdat/dd

Table xiaoyu.*

Table xiaoyugg.*

Looking at the report of the corresponding extract process, we find that OGG detects that the source client's NLS_LANG variable is inconsistent with the source database character set, so it chooses the source database character set rather than the one specified by SETENV in the extract parameters.

GGSCI (dg01) 52 > view report exiaoyu

** Running with the following parameters **

*

2013-06-04 04:50:27 INFO OGG-03035 Operating system character set identified as UTF-8. Locale: en_US, LC_ALL:.

Extract exiaoyu

SETENV (NLS_LANG= "AMERICAN_AMERICA.ZHS16GBK")

Set environment variable (NLS_LANG=AMERICAN_AMERICA.ZHS16GBK)

SETENV (ORACLE_SID= "xiaoyu")

Set environment variable (ORACLE_SID=xiaoyu)

Userid ogg,password *

2013-06-04 04:50:28 INFO OGG-03500 WARNING: NLS_LANG environment variable does not match database character set, or not set. Using database character set value of AL32UTF8.

[oracle@ogg 11.2]$ oggerr 3500

03500, 00000, "WARNING: NLS_LANG environment variable does not match database character set, or not set. Using database character set value of {0}"

// *{0}: nls_charset (String)

// * Cause: The NLS_LANG environment variable is not set to the same as the

//   database character set. Oracle GoldenGate is using the database

//   character set.

// * Action: None

It appears that when the source-side NLS_LANG setting is inconsistent with the Oracle database character set, OGG still chooses the database character set and ignores the extract process parameter SETENV NLS_LANG.

Next, test the target side:

SETENV (NLS_LANG= "AMERICAN_AMERICA.ZHS16GBK") is also specified here

GGSCI (ogg.single) 15 > view params rxiaoyu

Replicat rxiaoyu

SETENV (NLS_LANG= "AMERICAN_AMERICA.ZHS16GBK")

SETENV (ORACLE_SID= "xiaoyu")

Userid ogg,password ogg

Assumetargetdefs

Gettruncates

Report at 2:00

Reportrollover at 3:00

Discardfile ./dirrpt/discard_rxiaoyu.dsc, append, megabytes 100

Map xiaoyu.xiaoyu10, target xiaoyu.xiaoyu10, filter (@getenv ("transaction", "csn") > 1074454806)

Map xiaoyu.*,target xiaoyu.*

Map xiaoyugg.*,target ogg.*

Observing the replicat process on the target side, we find that OGG uses the SETENV (NLS_LANG="AMERICAN_AMERICA.ZHS16GBK") from the process parameters.

GGSCI (ogg.single) 17 > view report rxiaoyu

. . .

2013-06-05 03:14:14 WARNING OGG-03504 NLS_LANG character set ZHS16GBK on the target is different from the source database character set AL32UTF8. Replication may not be valid if the source data has an incompatible character for the target NLS_LANG character set

Here OGG prompts that the SETENV NLS_LANG variable in the replicat process must be set correctly: the source side delivers data in the AL32UTF8 character set, while the target side specified ZHS16GBK through the replicat parameter SETENV NLS_LANG; OGG uses the replicat parameter rather than the source character set.

[oracle@ogg 11.2]$ oggerr 3504

03504, 00000, "NLS_LANG character set {0} on the target is different from the source database character set {1}. Replication may not be valid if the source data has an incompatible character for the target NLS_LANG character set."

// *{0}: nls_lang_charset (String)

// *{1}: src_db_charset (String)

// * Cause: The NLS_LANG environment variable on the target is set to a

//   different character set than the character set of the source

//   database.

// * Action: Set the NLS_LANG environment variable on the target to the

//   character set of the source database that is shown in the message.

//   You can use the SETENV parameter in the Replicat parameter file to

//   set it for the Replicat session.

The 3504 warning reported by OGG reminds you that the target-side character set is inconsistent with the source side, which may cause exceptions in the replicat process. OGG recommends setting NLS_LANG in the replicat process so that the target side matches the source side.

To summarize the influence of character sets on OGG on the source and target sides: if the source and target database character sets are the same, setting SETENV NLS_LANG in the processes to the common database character set (or simply leaving the default) is sufficient; if they differ, the replicat process parameter SETENV NLS_LANG must be manually set on the target side to the source-side character set. Of course, for the data rows finally written into the database, the author believes they must still be converted once more to the character set of the target Oracle database (OGG is, after all, a synchronous replication product whose technical principles are inseparable from the Oracle database).

About Me

● This article is compiled from material on the network

● The article is updated synchronously on itpub (http://blog.itpub.net/26736162), cnblogs (http://www.cnblogs.com/lhrbest) and the personal WeChat official account (xiaomaimiaolhr)

● itpub address of this article: http://blog.itpub.net/26736162/abstract/1/

● cnblogs address of this article: http://www.cnblogs.com/lhrbest

● PDF version of this article and the author's cloud disk address: http://blog.itpub.net/26736162/viewspace-1624453/

● Database written-exam and interview question bank with answers: http://blog.itpub.net/26736162/viewspace-2134706/

● QQ group: 230161599; WeChat group: contact by private chat

● To contact me, please add QQ friend 646634621 and state the reason

● Completed in Shanghai ("Modu") from 09:00 on 2017-07-01 to 22:00 on 2017-07-31

● The content comes from the author's study notes, with some parts compiled from the Internet; please excuse any infringement or impropriety

● Copyright reserved: you are welcome to share this article, but please retain the source when reprinting
