2025-04-06 Update From: SLTechnology News&Howtos (Shulou.com)
GoldenGate Operation and Maintenance Manual
Common OGG Monitoring Commands
Description
The easiest way to monitor GoldenGate instances is through the GGSCI command line. Determine whether GoldenGate is working properly by entering a series of commands on the command line and viewing the return information. The information returned from the command line includes an overall overview, process running status, checkpoint information, parameter file configuration, latency, and so on.
In addition to logging in to the GGSCI interface directly through the host, you can also log in to each GoldenGate instance through the GoldenGate Director Web interface and run the GGSCI command. If customers deploy many GoldenGate instances, it will be inconvenient to log in to the GGSCI interface of each instance separately. It is recommended to log in to each instance through the GoldenGate Director Web interface and run command line commands.
Start the GoldenGate process
1) First log in to the source system as the system user that runs the GoldenGate processes (usually oracle).
2) Enter the GoldenGate installation directory and run ./ggsci to enter command-line mode.
3) Start the source-side management process: GGSCI > start mgr
4) Likewise, log in to the target-side GoldenGate installation directory, run ./ggsci, and execute GGSCI > start mgr to start the management process there.
5) On the source side, execute GGSCI > start er * to start all processes.
6) Likewise, log in to the backup (target) side and execute GGSCI > start er * to start all processes.
7) Use GGSCI > info er * or GGSCI > info <process name> to check whether each process status is RUNNING (i.e., started). Note that some processes take a few minutes to come up; repeat the command to watch their startup status.
Note: on both source and target, the mgr process must be started first, before the extract/replicat processes.
The general usage of the start command is: start <process name>
For example:
GGSCI > start extdm (starts the process named extdm)
Wildcards can also be used, for example:
GGSCI > start er * (starts all extract and replicat processes)
GGSCI > start extract *d* (starts all extract processes whose names contain the character 'd')
GGSCI > start replicat rep* (starts all replicat processes whose names begin with "rep")
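The wildcard selection above can be previewed offline; a minimal sketch using Python's fnmatch (the process names are hypothetical, and fnmatch also accepts `?` and `[]`, which GGSCI wildcards do not):

```python
from fnmatch import fnmatch

# Hypothetical process names as they might appear in "info all"
processes = ["extdm", "extsz", "repab", "repcd", "pumpx"]

def match(pattern):
    """Return the process names a GGSCI-style wildcard would select."""
    return [p for p in processes if fnmatch(p, pattern)]

print(match("*d*"))   # names containing 'd' -> ['extdm', 'repcd']
print(match("rep*"))  # names starting with 'rep' -> ['repab', 'repcd']
```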
Stop the GoldenGate process
Follow these steps to stop the GoldenGate process:
1) Log in to the source host as the system user that runs the GoldenGate processes (usually oracle), enter the GoldenGate installation directory, and run ./ggsci to enter the command-line management interface.
2) (This step applies only to the main extract process that reads the logs; data pump and replicat processes do not need it.) Verify that the archive logs required to restart the GoldenGate extract process still exist. For each main extXX process, execute:
GGSCI > info extXX, showch
...
Read Checkpoint #1
...
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Thread #: 1
Sequence #: 9671
RBA: 239077904
Timestamp: 2008-05-20 11:39:07.000000
SCN: 2195.1048654191
Redo File: Not Available
Current Checkpoint (position of last record read in the data source):
Thread #: 1
Sequence #: 9671
RBA: 239377476
Timestamp: 2008-05-20 11:39:10.000000
SCN: 2195.1048654339
Redo File: Not Available
Read Checkpoint #2
...
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Thread #: 2
Sequence #: 5287
RBA: 131154160
Timestamp: 2008-05-20 11:17:42.000000
SCN: 2195.1048640151
Redo File: /dev/rredo07
Current Checkpoint (position of last record read in the data source):
Thread #: 2
Sequence #: 5287
RBA: 138594492
Timestamp: 2008-05-20 11:39:14.000000
SCN: 2195.1048654739
Redo File: /dev/rredo07
...
First look at the oldest log sequence number that each Recovery Checkpoint needs to read. In the example above, thread 1 requires log sequence 9671 and all subsequent archive logs, and thread 2 requires sequence 5287 and all subsequent archive logs. Verify that these archive logs exist in the archive log directory before proceeding to the next step; if they have been deleted, they must be restored before the next restart.
Note: OGG 11 and later add automatic caching of long transactions. By default, uncommitted transactions are flushed to a local disk cache every 4 hours, so at most about 8 hours of archive logs are needed. However, long-transaction caching only happens while extract is running; nothing is cached after it stops, in which case the archive logs required cover at least 8 hours plus the downtime. It is generally recommended to keep 12 hours of archive logs plus the downtime when restarting.
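The archive-log check in step 2 can be scripted; a sketch (assuming the showch output format shown above, abbreviated here) that extracts each thread's Recovery Checkpoint sequence number, i.e. the oldest archive log still needed:

```python
import re

# Abbreviated sample of "info extXX, showch" output, from the example above
showch = """
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
  Thread #: 1
  Sequence #: 9671
Current Checkpoint (position of last record read in the data source):
  Thread #: 1
  Sequence #: 9671
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
  Thread #: 2
  Sequence #: 5287
"""

def oldest_required_logs(text):
    """Map each redo thread to the oldest archive log sequence it still needs."""
    needed = {}
    thread = None
    in_recovery = False
    for line in text.splitlines():
        if line.startswith("Recovery Checkpoint"):
            in_recovery = True
        elif line.startswith("Current Checkpoint"):
            in_recovery = False
        m = re.search(r"Thread #:\s*(\d+)", line)
        if m:
            thread = int(m.group(1))
        m = re.search(r"Sequence #:\s*(\d+)", line)
        if m and in_recovery:
            # Keep the smallest sequence seen for this thread
            needed[thread] = min(needed.get(thread, 10**9), int(m.group(1)))
    return needed

print(oldest_required_logs(showch))  # {1: 9671, 2: 5287}
```

All archive logs with these sequence numbers or higher, per thread, must still be present before restarting.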
3) Execute GGSCI > stop er * to stop all source-side processes, or stop <process name> to stop each process individually.
4) Log in to the target system as the oracle user, enter the installation directory (here /oraclelog1/goldengate), and run ./ggsci to enter the command line.
5) Execute stop er * on the target system to stop replication.
6) Once the processes on both ends have stopped, the management process on each system can be stopped with stop mgr if necessary.
The stop command takes the same forms as the start command, so they are not repeated here.
Note that if you are only modifying extract or replicat parameters, you do not need to stop MGR. Do not stop the MGR process casually, and use the wildcard er * with care, so as not to disturb other replication processes.
Check the overall operation
Go to the GoldenGate installation directory, run ggsci, and then use the info all command to view the overall status, as shown below:
[figure: sample output of info all]
Group represents the name of the process (the MGR process does not display the name); Lag represents the delay of the process; and Status represents the state of the process. There are four states:
STARTING: indicates that it is in the process of starting
RUNNING: indicates that the process is running normally
STOPPED: indicates that the process is shut down normally
ABENDED: indicates that the process is shut down abnormally, and further investigation is needed.
Normally, the state of all processes should be RUNNING, and the Lag should be within a reasonable range.
View parameter settings
Use view params to view the parameter settings for a process. The command also supports the wildcard character *.
[figure: sample output of view params]
View process status
Use the info command to view process information. The information you can see includes process status, checkpoint information, latency, and so on. Such as:
[figure: sample output of info]
You can also use the info detail command to view more detailed information. Including the trail files used, parameter files, report files, warning log location, and so on. Such as:
[figure: sample output of info detail]
Use the info <process name>, showch command to view detailed checkpoint information, which shows the transaction positions the GoldenGate process has handled. One of the more important items is the extract process's recovery checkpoint, which marks the earliest unprocessed transaction in the source data; from it you can see which redo log file that transaction is in and that log file's sequence number. All log files with that sequence number or higher need to be retained.
[figure: sample output of info showch]
View delay
GGSCI > lag <process name> shows detailed delay information. For example:
[figure: sample output of lag]
This command is more accurate than the delay information you can see with the info command.
Note that this command can only see the delay information of the last processed record.
This command supports the wildcard character *.
View statistics
GGSCI > stats <process name>, table <schema.table> shows the number of records the process has handled. The report lists the operation types and record counts in detail. For example:
[figure: sample output of stats]
GGSCI > stats edr, total lists all records processed since the process started.
GGSCI > stats edr, daily, table gg.test lists all records for the gg.test table processed since the start of the current day.
View the run report
GGSCI > view report <process name> displays the running report. For example:
[figure: sample output of view report]
You can also look in the dirrpt/ directory under the installation path and view the report files directly. The current report is always named <process>.rpt; files with a numeric suffix (<process>0.rpt, <process>1.rpt, ...) are historical reports, and the larger the number, the older the report. As shown below:
[figure: report files in the dirrpt directory]
If there is an error while the process is running, the error code and detailed error diagnostic information are included in the report file. By finding the error code, you can help locate the cause of the error and solve the problem.
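The report-file naming convention can be illustrated offline; a sketch (the file names are hypothetical, for a process named EXTDM) that orders reports newest-first:

```python
import re

# Hypothetical contents of the dirrpt/ directory for a process named EXTDM
files = ["EXTDM.rpt", "EXTDM0.rpt", "EXTDM3.rpt", "EXTDM1.rpt", "EXTDM2.rpt"]

def history_order(names):
    """Sort report files newest-first: EXTDM.rpt is current, larger suffix = older."""
    def key(name):
        m = re.match(r"EXTDM(\d*)\.rpt$", name)
        return int(m.group(1)) if m.group(1) else -1  # no suffix sorts first
    return sorted(names, key=key)

print(history_order(files))
# ['EXTDM.rpt', 'EXTDM0.rpt', 'EXTDM1.rpt', 'EXTDM2.rpt', 'EXTDM3.rpt']
```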
Guide to common operation and maintenance tasks of OGG
Configure automatic deletion queues
1) Enter the installation directory and run ./ggsci
2) Run edit param mgr to edit the management process parameters, and add or modify the following line:
PURGEOLDEXTRACTS <ogg install dir>/dirdat/*, USECHECKPOINT, MINKEEPDAYS 7
The first argument is the queue (trail) location; the wildcard * matches all queue files there.
USECHECKPOINT means checkpoint requirements are honored first: queue files that have not yet been processed will not be deleted.
MINKEEPDAYS sets the minimum number of days to keep the files. For example, to keep the queue files /ggs/dirdat/xm for only 3 days, configure:
PURGEOLDEXTRACTS /ggs/dirdat/xm, USECHECKPOINT, MINKEEPDAYS 3
3) stop the MGR process, modify the parameters and restart the process
GGSCI > stop mgr
Enter y to confirm stop
GGSCI > start mgr
Note: temporarily stopping the mgr process does not affect data replication.
Configure to automatically start Extract and Replicat processes when starting MGR
1) Enter the installation directory and run ./ggsci
2) Run edit param mgr to edit the management process parameters, and add the following line:
AUTOSTART ER *
3) stop the MGR process, modify the parameters and restart the process
GGSCI > stop mgr
GGSCI > start mgr
Note: it is generally recommended to start the processes manually rather than automatically, so that you can watch the status to verify that startup succeeded, and modify parameters by hand when needed.
Configure MGR to automatically restart Extract and Replicat processes
GoldenGate can automatically restart extract or replicat processes, which lets it recover from errors caused by network interruptions or temporary database hangs; once the system recovers, the related processes restart without human intervention.
1) enter the installation directory and execute ggsci to enter the command line interface
2) Run edit param mgr to edit the management process parameters, and add the following line:
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60
The parameters above mean: attempt to restart all processes every 5 minutes, up to 3 times; the retry counter is reset every 60 minutes, after which another round of up to 3 attempts at 5-minute intervals is made.
3) stop the MGR process, modify the parameters and restart the process to make the modified parameter file effective
GGSCI > stop mgr
GGSCI > start mgr
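Putting the Manager settings from the three sections above together, a mgr.prm might look like the following sketch (paths and retention values are illustrative; include AUTOSTART only if you choose automatic startup despite the recommendation above):

```
PORT 7809
AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60
PURGEOLDEXTRACTS /ggs/dirdat/*, USECHECKPOINT, MINKEEPDAYS 7
```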
Long-term transaction management
Before stopping the extract process, check for long (uncommitted) transactions with the command below, to avoid being unable to find the required archive logs at the next start:
GGSCI > info extXX, showch
...
Read Checkpoint #1
...
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Thread #: 1
Sequence #: 9671
RBA: 239077904
Timestamp: 2008-05-20 11:39:07.000000
SCN: 2195.1048654191
Redo File: Not Available
Current Checkpoint (position of last record read in the data source):
Thread #: 1
Sequence #: 9671
RBA: 239377476
Timestamp: 2008-05-20 11:39:10.000000
SCN: 2195.1048654339
Redo File: Not Available
Read Checkpoint #2
...
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Thread #: 2
Sequence #: 5287
RBA: 131154160
Timestamp: 2008-05-20 11:17:42.000000
SCN: 2195.1048640151
Redo File: /dev/rredo07
Current Checkpoint (position of last record read in the data source):
Thread #: 2
Sequence #: 5287
RBA: 138594492
Timestamp: 2008-05-20 11:39:14.000000
SCN: 2195.1048654739
Redo File: /dev/rredo07
...
To facilitate the management of long transactions, GoldenGate provides some commands to view these long transactions, which can help customers and application developers find the corresponding long transactions and submit or rollback them in GoldenGate.
(1) Viewing long transactions
GGSCI > send extract <process name>, showtrans [thread n] [count n]
Here <process name> is the name of the extract process to inspect, such as extsz, extxm, or extjx.
thread n is optional and limits the view to uncommitted transactions on one node.
count n is also optional and limits output to n records. For example, to view the 10 longest-running transactions on node 1 in the extsz process:
GGSCI > send extract extsz, showtrans thread 1 count 10
The output is a list of all uncommitted transactions in descending order of age. The corresponding transactions can be located by xid; ask the application developers and DBA to help find out why they are uncommitted. Once they are committed or rolled back in the database, GoldenGate's checkpoint will automatically roll forward.
(2) Skipping or force-accepting long transactions with GoldenGate commands
To make GoldenGate skip a specified transaction, or treat it as already committed, use the following commands (the transaction ID comes from the showtrans output):
GGSCI > SEND EXTRACT <process name>, SKIPTRANS <xid> THREAD <n> (skip the transaction)
GGSCI > SEND EXTRACT <process name>, FORCETRANS <xid> THREAD <n> (treat the transaction as committed)
Note: these commands only make the GoldenGate process skip the transaction or assume it has committed; they do not change the transaction in the database, where it still exists. It is therefore strongly recommended to commit or roll back the transaction in the database rather than handling it through GoldenGate.
(3) Configuring a long-transaction alert
A long-transaction alert can be configured in the extract process parameters, for example:
extract extsz
...
warnlongtrans 12h, checkinterval 10m
exttrail /backup/goldengate/dirdat/sz
...
This configuration means GoldenGate checks for long transactions every 10 minutes; if a transaction has been open for more than 12 hours, GoldenGate writes an alert to ggserr.log in the installation directory. You can view these alerts by reading ggserr.log or by running view ggsevt in ggsci, which helps find and handle long transactions in time.
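The check that warnlongtrans performs can be sketched offline (thresholds from the example above; the transaction IDs and start times are hypothetical):

```python
from datetime import datetime, timedelta

WARN_AFTER = timedelta(hours=12)   # warnlongtrans 12h

# Hypothetical open (uncommitted) transactions: xid -> start time
now = datetime(2008, 5, 20, 11, 39, 0)
open_txns = {
    "5.14.2048": now - timedelta(hours=13),    # open for 13h: long transaction
    "7.2.9001":  now - timedelta(minutes=30),  # open for 30m: fine
}

def long_transactions(txns, at):
    """Return xids open longer than the warning threshold, oldest first."""
    hits = [(start, xid) for xid, start in txns.items() if at - start > WARN_AFTER]
    return [xid for start, xid in sorted(hits)]

print(long_transactions(open_txns, now))  # ['5.14.2048']
```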
Note: in OGG 11g, extract provides the BR parameter, which caches long transactions to local disk (by default the dirtmp directory) at a regular interval (default 4 hours). As long as extract keeps running, the archive logs needed therefore go back no more than about 8 hours (worst case). If extract is stopped, however, long transactions are no longer cached, and the archive logs required will grow with the downtime.
Resynchronization of the table (time window required)
If some tables have become inconsistent between the two sides for various reasons and need to be resynchronized, refer to the following steps.
1) Confirm that no data changes are occurring on the tables to be resynchronized (if possible, stop the application and lock all users except sys and goldengate, or lock the tables concerned, to prevent changes during the operation).
2) Restart the dpe process (so that its statistics are reset to zero).
3) Stop the rep process on the target side.
Note: steps 4-6 import the source data to the target via exp/imp; other initialization methods can also be used, such as creating a dblink to the source on the target side and initializing the target table with create table as select.
4) Use exp to export the table or tables on the source side. For example:
exp goldengate/XXXX file=nanhai.dmp tables=ctais2.SB_ZSXX grants=y
5) Transfer the dump file to the target side via ftp.
6) On the target side, import the data using imp:
nohup imp goldengate/XXXXX file=nanhai.dmp fromuser=ctais2 touser=ctais2 ignore=y &
7) If these tables have foreign keys, check them on the target side and disable them (remember to maintain the disable and enable scripts under dirsql).
8) Start the rep process on the target side.
9) Use the stats mydpe command to check the data pump statistics for any captured changes on the resynchronized tables. If there were none during this period, the reinitialization succeeded. Otherwise duplicate data may have been captured in between and the target replicat will report an error; in that case set the error handling to reperror default,discard, wait for replicat to catch up, and then verify the records in the discard file. If everything is consistent, the reinitialization is considered successful. Alternatively, these tables can be reinitialized again at another time.
Resynchronization of tables (no time window required)
If some tables have become inconsistent between the two sides for various reasons and need to be resynchronized, but the business runs 24 hours a day and no maintenance window is available, refer to the following steps. Because of their complexity, use them with care.
1) Confirm that the ext/dpe/rep processes have no significant lag; otherwise wait for them to catch up before proceeding.
2) Stop the rep process on the target side.
Note: in steps 3-5, exp/imp is used to import the source data to the target; other initialization methods, such as expdp/impdp, can also be used.
3) Obtain the current SCN on the source side. For example:
select dbms_flashback.get_system_change_number from dual;
The following example assumes the obtained SCN is 1176681.
4) Use exp to export the table or tables to be reinitialized on the source side, specifying the SCN just recorded. For example:
exp <user>/<password> tables=ctais2.SB_ZSXX grants=n statistics=none triggers=n compress=n FLASHBACK_SCN=1176681
5) Transfer the dump file to the target side via ftp.
6) On the target side, import the data using imp:
nohup imp goldengate/XXXXX file=nanhai.dmp fromuser=ctais2 touser=ctais2 ignore=y &
7) If these tables have foreign keys, check them on the target side and disable them (remember to maintain the disable and enable scripts under dirsql).
8) Edit the corresponding rep parameter file on the target side and add a filter to its map so that only records after the recorded SCN are applied to the reinitialized tables (be careful not to modify the map of any table outside this initialization, which would cause data loss!):
map source.mytab, target target.mytab, filter ( @GETENV ("TRANSACTION", "CSN") > 1176681 );
9) After confirming that the parameters are correct, start the rep process on the target side.
10) Use info repxx or lag repxx to watch until the process catches up; then stop it, remove the filter, and resume normal replication.
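The effect of the FILTER clause in step 8 can be sketched as follows (the records and their CSNs are hypothetical; in OGG the comparison is evaluated per record against @GETENV ("TRANSACTION", "CSN")):

```python
INIT_SCN = 1176681  # SCN recorded at export time (step 3)

# Hypothetical trail records for the reinitialized table, tagged with their CSN
records = [
    {"csn": 1176600, "op": "UPDATE"},  # already contained in the export: skipped
    {"csn": 1176681, "op": "INSERT"},  # boundary: also in the export, skipped
    {"csn": 1176750, "op": "INSERT"},  # after the export: must be applied
]

# Keep only changes made after the export SCN, as the filter does
applied = [r for r in records if r["csn"] > INIT_SCN]
print(applied)  # [{'csn': 1176750, 'op': 'INSERT'}]
```

Records at or before the export SCN are already present in the dump, so applying them again would produce duplicates; the strict `>` excludes them.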
Data structure change and application upgrade
Adding or removing replicated tables for the source and target databases (when only DML is replicated)
(1) Adding replicated tables
In the GoldenGate process parameters, if tables are matched with the * wildcard, then any new table created on the source side that matches the wildcard is replicated automatically, with no need to modify the configuration files; however, supplemental logging must still be added for the new table.
The steps are as follows:
GGSCI > dblogin userid goldengate, password XXXXXXX
GGSCI > info trandata <schema>.<table>
If it is not enabled, add it manually:
GGSCI > add trandata <schema>.<table>
Note (Oracle 9i only): if the table has a primary key, or has no more than 32 columns, enabled is displayed and the addition succeeded. If the table has no primary key and more than 32 columns, an error may indicate it cannot be added automatically; in that case handle it manually following the method in Appendix 2.
If wildcards are not used, the new table must be appended to the table list in the main Extract and Data Pump parameter files, and its mapping added to the map list of the target replicat.
Also, before adding the table to replication, first create its structure on the target side.
If the table has foreign keys or triggers, they need to be temporarily disabled on the target table; maintain the corresponding disable and enable scripts for these objects under dirsql.
For every source or target process whose parameter file was modified, restart the process for the new parameters to take effect.
(2) Removing replicated tables
GoldenGate replicates all tables matching the wildcard by default. If some tables are no longer needed at all, you can drop them on the source side and then drop them on the target side, without making any changes to the replication configuration.
If several of these tables still exist, but without GoldenGate replication, you can exclude them by following these steps:
1) first verify the existence of the required archive logs on the source system, and then stop the corresponding extXX process through stop extXX
2) execute stop repXX in ggsci on the destination system to stop the replication process on the destination side
3) Modify the ext process parameter file on the source side to exclude the tables no longer replicated:
GGSCI > edit param extXX
...
tableexclude ctais2.TMP_*
tableexclude ctais2.BAK_*
tableexclude ctais2.MLOG$_*
tableexclude ctais2.RUPD$_*
tableexclude ctais2.KJ_*
tableexclude myschema.mytable
table ctais2.*;
...
That is, add a "tableexclude <schema>.<table>" line before the line that defines the tables. Note that the full schema name and table name must be written.
Note: if you do not use wildcards, you can comment out the table line where the table is located.
4) Modify the rep process parameters on the target side to exclude the table as well:
GGSCI > edit param repXX
Add a line before the map statement:
-- mapexclude CTAIS2.SHOULIXINXI
mapexclude myschema.mytable
map ctais2.*, target ctais2.*;
Note: if you do not use wildcards, you can comment out the map line where the table is located.
5) start the replication process repXX on the destination system
GGSCI > start repXX
6) start the extract process extXX on the source-side system
GGSCI > start extXX
You can enter the normal replication state.
Modify the table structure (when only DML is copied)
When the structure of a replicated table changes, for example columns are added or column attributes such as length are modified, follow these steps:
1) Stop the source and target extract and delivery processes in the order described earlier in this article (before stopping the source-side extract, verify that the required archive logs exist so that it can be restarted). The manager processes do not need to be stopped.
2) Modify the target table structure.
3) Modify the source table structure.
4) If the table has a primary key and the primary key was not modified this time, simply start all source and target processes to resume replication; the change is complete. If the table has no primary key, or the primary key was modified, continue with the following steps:
GGSCI > dblogin userid goldengate, password XXXXXX
GGSCI > delete trandata schema.mytable
GGSCI > add trandata schema.mytable
(for Oracle 9i only) if the table has more than 32 columns, the above operation may report an error, which needs to be processed manually. Please refer to Appendix 2 on how to manually delete and add additional logs for the table.
5) Restart the source-side extract and the target-side replicat processes.
Upgrade of customer applications (when only replicating DML)
If the customer's application is upgraded and tables in the source system change, and DDL replication is not configured, the GoldenGate processes must be modified by hand; refer to the following steps.
1) Stop the source and target extract and delivery processes (before stopping the source-side extract, verify that the required archive logs exist so it can be restarted). The manager processes do not need to be stopped.
2) upgrade the source system
3) On the target side, re-create the stored procedures, tables, functions, and other objects created by the application upgrade. DML operations on business tables (inserts, deletes, updates) need not be performed on the target side; OGG replicates them.
4) Manually disable the newly created triggers and foreign keys on the target side, and add both the disabling SQL and the reverse-maintenance SQL (which re-enables the triggers and foreign keys) to the corresponding script files in the target OGG dirsql directory.
Note: at installation time, the scripts that disable triggers and foreign keys should be placed under the target dirsql directory, with suggested names disableTrigger.sql and disableFK.sql. Matching reverse-maintenance scripts (which re-enable them, suggested names enableTrigger.sql and enableFK.sql) should also be prepared and kept in the same directory, in case the application needs to be taken over on the target in the future.
5) For tables created on the source side during the upgrade, supplemental logging must be added. The steps are as follows:
GGSCI > dblogin userid goldengate, password XXXXXXX
GGSCI > info trandata <schema>.<table>
If it is not enabled, add it manually:
GGSCI > add trandata <schema>.<table>
Note (Oracle 9i only): if the table has a primary key, or has no more than 32 columns, enabled is displayed and the addition succeeded. If the table has no primary key and more than 32 columns, an error may indicate it cannot be added automatically; in that case handle it manually following the method in Appendix 2.
6) For tables dropped on the source side during the upgrade: GoldenGate replicates all tables matching the wildcard by default, so you can simply drop them on the target side as well, without changing the replication configuration.
7) If any table's primary key was modified during the upgrade, continue with the following steps:
GGSCI > dblogin userid goldengate, password XXXXXX
GGSCI > delete trandata schema.mytable
GGSCI > add trandata schema.mytable
(for Oracle 9i only) if the table has more than 32 columns, the above operation may report an error, which needs to be processed manually. Please refer to Appendix 2 on how to manually delete and add additional logs for the table.
8) Restart the source-side extract and the target-side replicat processes.
Configure DDL replication to automatically synchronize data structure changes
Whether to turn on DDL replication
Refer to the appendix for the specific restrictions on OGG DDL replication. Beyond those limitations, another important factor is that the DDL trigger has some performance impact on the source database. DDL replication is therefore not recommended in principle for the nationwide network; if there is a specific reason to enable it, discuss it with the Oracle engineers.
To turn on DDL replication
The following are the steps to configure DDL replication for reference only. For more information, please refer to the official installation documentation of GoldenGate.
- (Optional, but strongly recommended) Collect statistics regularly to speed up data dictionary access
OGG DDL replication accesses data dictionary information heavily; collecting database statistics regularly (for example, once a month) can effectively improve OGG DDL replication performance. An example:
sqlplus /nolog <<EOF
conn / as sysdba
execute dbms_stats.gather_schema_stats('SYS', cascade=>TRUE)
execute dbms_stats.gather_schema_stats('SYSTEM', cascade=>TRUE)
exit
EOF
- Create an OGG replication user, or grant privileges to an existing user:
create user goldengate identified by goldengate default tablespace ts_ogg;
grant connect to goldengate;
grant resource to goldengate;
grant dba to goldengate;
- Specify the schema in which the DDL objects reside, here created directly under the goldengate user:
GGSCI > EDIT PARAMS ./GLOBALS
GGSCHEMA goldengate
- Check that the database recyclebin parameter is turned off:
SQL > show parameter recyclebin

NAME         TYPE    VALUE
-----------  ------  -----
recyclebin   string  on

If it is not off, disable the recyclebin:
SQL > alter system set recyclebin=off;
- Create the DDL objects for OGG:
sqlplus / as sysdba
SQL > @marker_setup.sql
Enter GoldenGate schema name: goldengate
SQL > @ddl_setup.sql
Enter GoldenGate schema name: goldengate
SQL > @role_setup.sql
Grant this role to each user assigned to the Extract, Replicat, GGSCI, and Manager processes, by using the following SQL command:
GRANT GGS_GGSUSER_ROLE TO <user>
where <user> is the user assigned to the GoldenGate processes.
Note the prompt here: it asks you to manually grant GGS_GGSUSER_ROLE to the database user used by your extract processes (i.e., the user specified with userid in the parameter file). You can run SQL like the following in sqlplus:
GRANT GGS_GGSUSER_ROLE TO ggs1
Here ggs1 is the user used by extract. If multiple extracts use different database users, repeat the grant for each of them.
- Enable the OGG DDL capture trigger
Run the ddl_enable.sql script in sqlplus to enable the DDL capture trigger.
Note: the DDL capture trigger and the OGG extract process are independent of each other; the trigger does not depend on the extract process existing. Even if the OGG extract process does not exist or is not started, once the trigger is enabled it keeps capturing DDL. To stop DDL capture completely, perform the later step that disables the DDL trigger.
- (Optional) Install the tool that improves OGG DDL replication performance
To improve OGG DDL replication performance, the ddl_pin script can be added to the database startup scripts; it takes the OGG DDL user (the user who owns the DDL objects, here goldengate) as a parameter:
SQL > @ddl_pin <DDL user>
- (If DDL replication is no longer needed) Disable the OGG DDL capture trigger
Run the ddl_disable.sql script in sqlplus to disable the DDL capture trigger.
Typical configuration of DDL replication
The ddl switches of the GoldenGate data pump and replicat processes are on by default; only the main extract has DDL off by default, so DDL configuration is generally done only in the main extract. Given the OGG restrictions described in the appendix, if DDL replication is to be enabled, it is recommended to open it only for tables and indexes, which are the objects most closely tied to the data. Example parameters:
DDL &
INCLUDE MAPPED OBJTYPE 'table' &
INCLUDE MAPPED OBJTYPE 'index'
DDLOPTIONS ADDTRANDATA, NOCROSSRENAME
In addition, add parameters to the mgr to automatically purge the DDL intermediate tables:
USERID goldengate, PASSWORD XXXXX
PURGEDDLHISTORY MINKEEPDAYS 3, MAXKEEPDAYS 7
PURGEMARKERHISTORY MINKEEPDAYS 3, MAXKEEPDAYS 7
For other objects, manual maintenance is still recommended: apply changes at both ends at the same time. Note that cascading deletes and triggers on the target side should be disabled immediately after it is set up.
Exception handling plan
Network malfunction
If the autorestart parameter is set in the MGR process parameter file, GoldenGate can restart automatically without human intervention.
When the network fails, the GoldenGate Datapump process responsible for writing the remote trail stops automatically. The MGR process then periodically restarts the Datapump process according to the autorestart settings in mgr.prm, which effectively probes whether the network has recovered. Once the network is restored, the Datapump process restarts successfully, and GoldenGate's checkpoint mechanism ensures it resumes replication from the log position where it was interrupted.
Note that because the source-side extract (capture) process keeps reading the logs and writing to the local trail while the Datapump process cannot ship the local trail to the remote side, the local trail files cannot be purged automatically and will accumulate. Sufficient storage space is needed for the accumulated trail files, calculated as follows:
Storage capacity ≥ trail volume generated per unit time × network failure recovery time
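As an illustration of the formula, suppose the site generates 2 GB of local trail per hour and the longest network outage to plan for is 48 hours (both figures, and the 20% safety margin, are hypothetical):

```python
# Hypothetical sizing example: reserve >= trail volume per hour * outage hours.
trail_gb_per_hour = 2.0   # assumed local trail generation rate
outage_hours = 48         # assumed worst-case network failure recovery time
safety_factor = 1.2       # assumed headroom for bursts

required_gb = trail_gb_per_hour * outage_hours * safety_factor
print(f"Reserve at least {required_gb:.0f} GB for accumulated trail files")
```
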
Reference configuration for MGR to periodically restart the extract and replicat processes:
GGSCI> edit param mgr

PORT 7809
AUTORESTART ER *, RETRIES 5, WAITMINUTES 3, RESETMINUTES 60
That is, restart attempts are made every 3 minutes, up to 5 times; the retry counter is reset after 60 minutes, after which up to 5 retries are attempted again.
Single node failure in RAC environment
In the RAC environment, the GoldenGate software is installed in a shared directory. You can start the GoldenGate runtime interface by connecting to a shared directory through any node. If one of the nodes fails, causing the GoldenGate process to abort, you can switch directly to another node to continue running. It is recommended to do the following with the assistance of Oracle Technical support:
1) Log in to the source system as an oracle user (through another intact node)
2) confirm that the file system where GoldenGate is installed is mounted to another node in the same directory
3) confirm that the GoldenGate installation directory belongs to oracle users and their groups
4) confirm that oracle users and their groups have read and write permissions to the GoldenGate installation directory
5) enter the goldengate installation directory
6) execute. / ggsci to enter the command line interface
7) execute start mgr to start mgr
8) execute start er * to start all processes
Check whether each process starts properly; normal replication can then resume. The above switchover can be automated by integrating with cluster software such as CRS or HACMP; for the specific steps, refer to the State Grid test documentation.
Common exceptions in Extract process
For the source database, if the extract process extxm goes ABENDED, you can view its report with the view report command in ggsci and quickly locate the error by searching for ERROR.
The most common cause of an extract abend is that it cannot find the required archive log. You can run the following from the command line in the archive log directory:
ls -lt arch_X_XXXXX.arc
Check to see if the log exists, and if not, the possible reasons are:
• The log has been compressed
GoldenGate cannot decompress logs automatically; decompress the file manually before it can be read.
• The log has been deleted
If the log has been deleted, it must be restored before replication can continue; contact your DBA to restore the archive log.
It is generally necessary to back up archived logs regularly and clear old ones. Ensure that archive logs remain in the archive directory long enough before being backed up and cleared; that is, the periodic backup should clear archives from several hours ago, not all archives. The retention time is calculated as follows:
Retention time of an archive file ≥ time required by the extraction process to process all the logs in that file
You can run the info exXX showch command at the command line or in the GoldenGate Director Web interface to see which log sequence number the extract process exXX is currently processing. All archives before this sequence number can be safely cleared.
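For a process named extsz, for instance, the relevant part of the showch output looks roughly like this (the numbers are illustrative; the line to look at is the recovery checkpoint's Sequence #):

```
GGSCI> info extsz, showch

...
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
    Thread #: 1
    Sequence #: 1647
    RBA: 63523728
    Timestamp: 2013-06-04 04:12:15.000000
```

In this illustration, archive logs with a sequence number lower than 1647 could be backed up and removed.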
Common exceptions in Replicat process
For the target database, if the delivery (replicat) process repXX goes ABENDED, you can view its report with the view report command in ggsci and quickly locate the error by searching for ERROR.
The error in the replication process is usually the target database error, such as:
1) temporary database downtime
2) insufficient storage space for the target tablespace
3) there is inconsistency in the target table.
You can check the cause of the error according to the report, rule it out and restart the rep process.
One thing to note: it is easy to ignore UNDO tablespaces. If the DML statement contains a large number of update and delete operations, the target side undo will be generated very quickly and may fill the UNDO tablespace. Therefore, you need to check the size of the UNDO tablespace frequently.
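A quick way to watch undo consumption on the target is a query like the following (standard data dictionary views; the UNDO% name pattern is an assumption, adjust it to your undo tablespace name):

```sql
-- Total vs. free space per undo tablespace, in MB.
SELECT d.tablespace_name,
       d.total_mb,
       NVL(f.free_mb, 0) AS free_mb
FROM  (SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS total_mb
         FROM dba_data_files GROUP BY tablespace_name) d
      LEFT JOIN
      (SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
         FROM dba_free_space GROUP BY tablespace_name) f
        ON d.tablespace_name = f.tablespace_name
WHERE d.tablespace_name LIKE 'UNDO%';
```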
General steps for exception handling
If there is an exception in GoldenGate replication, you can try to resolve the problem by following these steps:
1. Find the word ERROR through the ggsci > view report command, determine the cause of the error and eliminate it according to its information
2. View the event log via ggsci > view ggsevt
3. Check whether the databases on both sides are running properly and whether the network is connected.
4. If you cannot determine the cause of the error, you can seek Oracle technical support. The following information is generally required when seeking technical support:
A) error description
B) the process report, located under dirrpt, named after the process in uppercase with a .rpt extension; for example, EXTSZ.rpt for a process named extsz
C) GGS log ggserr.log, located in the GGS home directory
D) the discarded-records report, defined by the discardfile parameter of the replicat process, usually ending in .dsc
E) current queue, located under dirdat
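A small script can bundle most of these files in one step. This is only a sketch: OGG_HOME and the demo scaffolding below are assumptions, and the discard file location depends on your discardfile parameter.

```shell
#!/bin/sh
# Sketch: collect the files Oracle support usually asks for.
# OGG_HOME is an assumption; point it at the real GoldenGate home.
OGG_HOME="${OGG_HOME:-/tmp/ogg_demo_home}"

# Demo scaffolding so the sketch runs anywhere; delete these three
# lines when running against a real installation.
mkdir -p "$OGG_HOME/dirrpt" "$OGG_HOME/dirdat"
touch "$OGG_HOME/ggserr.log" "$OGG_HOME/dirrpt/EXTSZ.rpt"

STAMP=$(date +%Y%m%d)
# Bundle the error log, process reports, and trail directory.
tar -czf "/tmp/ogg_support_${STAMP}.tar.gz" \
    -C "$OGG_HOME" ggserr.log dirrpt dirdat
echo "created /tmp/ogg_support_${STAMP}.tar.gz"
```
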
Appendix
Oracle GoldenGate V11.1 data replication restrictions
Unstructured data replication such as files is not supported
GoldenGate relies on the parsing of database logs to obtain data changes, so it can only support the replication of data changes in the database, but can not support the replication of unstructured data such as files.
Oracle data type restrictions
GoldenGate supports replication of common Oracle data types.
• Data types not supported by GoldenGate:
A) ANYDATA
B) ANYDATASET
C) ANYTYPE
D) BFILE
E) BINARY_INTEGER
F) MLSLABEL
G) PLS_INTEGER
H) TIMEZONE_ABBR
I) TIMEZONE_REGION
J) URITYPE
K) UROWID
• GoldenGate has limited support for XMLType replication:
  - Oracle 9i and later only
  - The table must have a primary key or unique index
• GoldenGate has limited support for replication of user-defined types (UDT):
  - If you have data of this type, contact technical support staff and provide a script.
Oracle DML operation support
GoldenGate currently supports all DML operations of regular tables and DML operations that have limited support for some special objects. For special tables or objects, please refer to the following section on special objects.
• GoldenGate does not support nologging tables and similar objects.
When a table or tablespace is set to NOLOGGING, data inserted through direct-path modes such as SQL*Loader or APPEND is not written to the database log, so GoldenGate cannot capture those changes. It is recommended to set all required business tables to LOGGING and to exclude nologging tables from replication.
• GoldenGate does not support the following objects and operations:
A) REF
B) tablespaces and tables created using the COMPRESS option
C) Database Replay
• GoldenGate supports replication of sequences
• GoldenGate can support synonyms and DBLinks by replicating the source table.
Operations through these objects actually take place on the source database objects they point to, and operations issued through them against linked target objects are not recorded in the local database log, so GoldenGate does not replicate operations on the synonym or DBLink itself. Because those operations are applied to the source table and generate logs there, the changes can be replicated by replicating the source table.
• GoldenGate has limited support for IOT (index-organized table) replication:
  - Oracle 10.2 and later only
  - IOTs created with a MAPPING TABLE are supported, but only data changes in the base table are extracted, not the mapping table.
  - IOTs stored in compressed mode are not supported; for example, an IOT in a tablespace that uses the COMPRESS option is not supported.
• GoldenGate has limited support for clustered table replication:
  - Oracle 9i and later only
  - Encrypted or compressed clustered tables are not supported
• GoldenGate has limited support for materialized view replication:
  - Materialized views created with the WITH ROWID option are not supported
  - The source table must have a primary key
  - TRUNCATE of a materialized view is not supported, but DELETE FROM is
  - The target materialized view must be updatable
  - Full refresh of materialized views is supported only in Oracle 10g and later
Oracle DDL replication restrictions
GoldenGate DDL replication works by capturing the SQL on the source database through a trigger and replaying it on the target. In practice there are many limitations: SQL that executes successfully on the source may fail on the target. Common problems include:
• DDL cannot execute successfully when an object referenced in the SQL statement does not exist on the target. For example, if the source creates a DBLink, or runs create table as select * from mydblink, the target may not have the database or object the dblink points to, and the statement fails.
• When the physical layouts of the two ends differ, statements tied to physical locations, such as creating data files or tablespaces, must be rewritten on the target to use the target's physical paths.
• When a constraint is created without an explicit name, the source and target generate objects with different system-assigned names, so later modifications to those objects cannot be mapped correctly to the target.
• When replicating a table with LOB columns, the DDL operation must wait for the DML operations to complete before it is replicated.
• Tables whose table or column names are in Chinese cannot be replicated.
• Chinese comments cannot be added in the definition of a table or other object.
• CREATE TRIGGER/PROCEDURE/FUNCTION/PACKAGE statements containing compilation errors cannot be replicated.
• SQL statements ending with '/' cannot be replicated.
In addition, GoldenGate DDL replication requires turning off Oracle's _RECYCLEBIN parameter (Oracle 10.1) or RECYCLEBIN parameter (Oracle 10.2 and later).
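One way to check and turn the recycle bin off is sketched below; the exact scope options vary by Oracle release, so treat this as an assumption to verify against your version's documentation:

```sql
-- Check the current setting
SHOW PARAMETER recyclebin;

-- Oracle 10.2: disable in the spfile and restart the instance
ALTER SYSTEM SET recyclebin = OFF SCOPE = SPFILE;

-- Oracle 11g alternative: defer the change to new sessions
-- ALTER SYSTEM SET recyclebin = OFF DEFERRED SCOPE = BOTH;
```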
Another important point: because GoldenGate DDL replication is trigger-based, it may degrade the performance of the source database, so DDL replication is generally not recommended. For more information, refer to the State Grid OGG implementation principles document.
Description: please refer to OGG's official reference manual for more details.
How to add supplemental logging in Oracle 9i for keyless tables with more than 32 columns
The essence of adding additional log operations to a database table is to execute the following SQL statement:
ALTER TABLE <table_name>
  ADD SUPPLEMENTAL LOG GROUP <group_name> (column1, column2, ...) ALWAYS;
Oracle GoldenGate's add trandata also calls this statement to execute:
1) If the table has a primary key, the primary key columns are put in the columns clause and added to the supplemental log group.
2) If there is no primary key but there is a unique index, the unique index columns are put in the columns clause and added to the supplemental log group.
3) If there is neither a primary key nor a unique index, all columns are added to the supplemental log group.
Oracle 9i has a restriction when adding supplemental logging to tables without a primary key or unique index: each supplemental log group cannot exceed 32 columns (an approximate figure that depends on the actual column definition lengths). For such tables, GoldenGate's add trandata command fails. The workaround is to split all of the table's columns into groups of no more than 32 columns and add a supplemental log group for each (giving each group a different name). Here is an example for a table with more than 32 columns:
ALTER TABLE SIEBEL.XYZ_SQL ADD SUPPLEMENTAL LOG GROUP GGS_XYZ_SQL_649101_1 (ACTION, ACTION_HASH, ADDRESS, BUFFER_GETS, CHILD_ADDRESS, CHILD_LATCH, CHILD_NUMBER, COMMAND_TYPE, CPU_TIME, DISK_READS, ELAPSED_TIME, EXECUTIONS, FETCHES, FIRST_LOAD_TIME, HASH_VALUE, INSTANCE_ID, INVALIDATIONS, IS_OBSOLETE, KEPT_VERSIONS, LAST_LOAD_TIME, LITERAL_HASH_VALUE, LOADED_VERSIONS, LOADS, MODULE, MODULE_HASH, OBJECT_STATUS, OPEN_VERSIONS, OPTIMIZER_COST, OPTIMIZER_MODE, OUTLINE_CATEGORY, OUTLINE_SID, PARSE_CALLS) ALWAYS;
ALTER TABLE SIEBEL.XYZ_SQL ADD SUPPLEMENTAL LOG GROUP GGS_XYZ_SQL_649101_2 (PARSING_SCHEMA_ID, PARSING_USER_ID, PERSISTENT_MEM, PLAN_HASH_VALUE, REMOTE, ROWS_PROCESSED, RUNTIME_MEM, SERIALIZABLE_ABORTS, SHARABLE_MEM, SNAP_ID, SORTS, SQLTYPE, SQL_TEXT, TYPE_CHK_HEAP, USERS_EXECUTING, USERS_OPENING) ALWAYS;
Note: after adding supplemental logging manually in this way, info trandata in ggsci will not show it. You can instead check whether a table is missing supplemental logging with the following query:
SQL> select * from dba_log_groups where owner='SIEBEL' and table_name='XXX';
To verify that all required columns are covered by the supplemental log, query dba_log_group_columns as well.
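For example, reusing the table from the statements above:

```sql
-- Which columns each supplemental log group covers.
SELECT log_group_name, column_name, position
FROM   dba_log_group_columns
WHERE  owner = 'SIEBEL'
  AND  table_name = 'XYZ_SQL'
ORDER  BY log_group_name, position;
```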
To drop a supplemental log group, use the following format:
ALTER TABLE <table_name>
  DROP SUPPLEMENTAL LOG GROUP <group_name>;
Analysis of OGG character set handling
Once set, an Oracle database's character set is best left unmodified. With Oracle GoldenGate, character sets deserve attention: if the target character set differs from the source, and some source characters cannot be represented on the target, data consistency cannot be guaranteed.
Source database character set:
SQL> select value from v$nls_parameters where parameter='NLS_CHARACTERSET';
VALUE
AL32UTF8
Here the author sets SETENV (NLS_LANG="AMERICAN_AMERICA.ZHS16GBK") on the source side to specify the source client character set:
GGSCI (dg01) 21> view params exiaoyu

extract exiaoyu
SETENV (NLS_LANG="AMERICAN_AMERICA.ZHS16GBK")
SETENV (ORACLE_SID="xiaoyu")
userid ogg, password ogg
dynamicresolution
gettruncates
report at 2:00
reportrollover at 3:00
warnlongtrans 3h, checkinterval 10m
exttrail ./dirdat/dd
table xiaoyu.*;
table xiaoyugg.*;
Looking at the report of the corresponding extract process, we see that ogg detects that the source client's NLS_LANG is inconsistent with the source database character set, and chooses the database character set rather than the one specified by SETENV in the extract parameters.
GGSCI (dg01) 52> view report exiaoyu

** Running with the following parameters **

2013-06-04 04:50:27 INFO OGG-03035 Operating system character set identified as UTF-8. Locale: en_US, LC_ALL:.
extract exiaoyu
SETENV (NLS_LANG="AMERICAN_AMERICA.ZHS16GBK")
Set environment variable (NLS_LANG=AMERICAN_AMERICA.ZHS16GBK)
SETENV (ORACLE_SID="xiaoyu")
Set environment variable (ORACLE_SID=xiaoyu)
userid ogg, password *
2013-06-04 04:50:28 INFO OGG-03500 WARNING: NLS_LANG environment variable does not match database character set, or not set. Using database character set value of AL32UTF8.
[oracle@ogg 11.2]$ oggerr 3500
03500, 00000, "WARNING: NLS_LANG environment variable does not match database character set, or not set. Using database character set value of {0}"
// *{0}: nls_charset (String)
// *Cause: The NLS_LANG environment variable is not set to the same as the
//         database character set. Oracle GoldenGate is using the database
//         character set.
// *Action: None
So when the source NLS_LANG setting is inconsistent with the Oracle database character set, ogg still chooses the database character set and ignores the extract process parameter SETENV NLS_LANG.
Next, test the target side:
SETENV (NLS_LANG= "AMERICAN_AMERICA.ZHS16GBK") is also specified here
GGSCI (ogg.single) 15> view params rxiaoyu

replicat rxiaoyu
SETENV (NLS_LANG="AMERICAN_AMERICA.ZHS16GBK")
SETENV (ORACLE_SID="xiaoyu")
userid ogg, password ogg
assumetargetdefs
gettruncates
report at 2:00
reportrollover at 3:00
discardfile ./dirrpt/discard_rxiaoyu.dsc, append, megabytes 100
map xiaoyu.xiaoyu10, target xiaoyu.xiaoyu10, filter (@GETENV("TRANSACTION","CSN") > 1074454806);
map xiaoyu.*, target xiaoyu.*;
map xiaoyugg.*, target ogg.*;
Observing the replicat process on the target side, we find that ogg does use the SETENV (NLS_LANG="AMERICAN_AMERICA.ZHS16GBK") setting from the process parameters.
GGSCI (ogg.single) 17> view report rxiaoyu
...
2013-06-05 03:14:14 WARNING OGG-03504 NLS_LANG character set ZHS16GBK on the target is different from the source database character set AL32UTF8. Replication may not be valid if the source data has an incompatible character for the target NLS_LANG character set
Here ogg warns that the SETENV NLS_LANG variable in the replicat process must be set correctly. The source sends data in the AL32UTF8 character set, the target specifies ZHS16GBK through the replicat parameter SETENV NLS_LANG, and ogg uses the replicat parameter value rather than the source character set.
[oracle@ogg 11.2]$ oggerr 3504
03504, 00000, "NLS_LANG character set {0} on the target is different from the source database character set {1}. Replication may not be valid if the source data has an incompatible character for the target NLS_LANG character set."
// *{0}: nls_lang_charset (String)
// *{1}: src_db_charset (String)
// *Cause: The NLS_LANG environment variable on the target is set to a
//         different character set than the character set of the source
//         database.
// *Action: Set the NLS_LANG environment variable on the target to the
//          character set of the source database that is shown in the message.
//          You can use the SETENV parameter in the Replicat parameter file to
//          set it for the Replicat session.
OGG's 3504 warning reminds you that the target character set is inconsistent with the source, which may cause replicat exceptions. OGG recommends setting NLS_LANG in the replicat process so the target matches the source.
In short, the character set affects ogg on both the source and the target side. If the source and target database character sets are consistent, setting SETENV NLS_LANG in the process is effectively the same as defaulting to the database character set. If they are inconsistent, you need to manually set the replicat parameter SETENV NLS_LANG on the target to the source character set. Of course, for the data rows finally written to the database, the author believes they are still converted once more to the character set of the target Oracle database. (ogg is a synchronous replication product, and its technical principles cannot be separated from the Oracle database.)
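Putting that conclusion into a parameter file: with an AL32UTF8 source and a differently configured target, the replicat would pin NLS_LANG to the source character set. This is a sketch based on the example above, not a complete production configuration:

```
replicat rxiaoyu
-- NLS_LANG set to the SOURCE database character set (AL32UTF8 here)
SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
SETENV (ORACLE_SID="xiaoyu")
userid ogg, password ogg
assumetargetdefs
map xiaoyu.*, target xiaoyu.*;
```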