2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
CHAPTER 1
Introduction to Oracle GoldenGate
Oracle GoldenGate supported processing methods and databases
Oracle GoldenGate enables the exchange and manipulation of data at the transaction level among multiple, heterogeneous platforms across the enterprise. Its modular architecture gives you the flexibility to extract and replicate selected data records, transactional changes, and changes to DDL (data definition language) across a variety of topologies. With this flexibility, and the filtering, transformation, and custom processing features of Oracle GoldenGate, you can support numerous business requirements:
● Business continuance and high availability.
● Initial load and database migration.
● Data integration.
● Decision support and data warehousing.
Figure 1 Oracle GoldenGate supported topologies
Support for replication across different database types and topologies varies by database type. See the Oracle GoldenGate Installation and Setup Guide for your database for detailed information about supported configurations.
Note: DDL is not supported for all databases.
For full information about processing methodology, supported topologies and functionality, and configuration requirements, see the Oracle GoldenGate Installation and Setup Guide for your database.
* Supported only as a target database; cannot be a source database for Oracle GoldenGate extraction.
** Uses a capture module that communicates with the Oracle GoldenGate API to send change data to Oracle GoldenGate.
*** Only like-to-like configuration is supported; data manipulation, filtering, and column mapping are not supported.
Overview of the Oracle GoldenGate architecture
Oracle GoldenGate can be configured for the following purposes:
● A static extraction of data records from one database, with those records loaded to another database.
● Continuous extraction and replication of transactional DML operations and DDL changes (for supported databases), to keep source and target data consistent.
● Extraction from a database and replication to a file outside the database.
Oracle GoldenGate consists of the following components:
● Extract
● Data pump
● Replicat
● Trails or extract files
● Checkpoints
● Manager
● Collector
Figure 2 illustrates the logical architecture of Oracle GoldenGate for initial data loads and for the synchronization of DML and DDL operations. This is the basic configuration. Variations of this model are recommended depending on business needs.
Figure 2 Oracle GoldenGate logical architecture
Overview of Extract
The Extract process runs on the source system and is the extraction (capture) mechanism of Oracle GoldenGate. You can configure Extract in one of the following ways:
● Initial loads: For initial data loads, Extract extracts (captures) a current, static set of data directly from the source objects.
● Change synchronization: To keep source data synchronized with another set of data, Extract captures DML and DDL operations after the initial synchronization has taken place.
Extract captures from a data source that can be one of the following:
● Source tables, if the run is an initial load.
● The database recovery logs or transaction logs (such as the Oracle redo logs or SQL/MX audit trails). The actual method of capturing from the logs varies depending on the database type.
● A third-party capture module. This method provides a communication layer that passes data and metadata from an external API to the Extract API. The database vendor or a third-party vendor provides the components that extract the data operations and pass them to Extract.
When configured for change synchronization, Extract captures the DML and DDL operations that are performed on objects in the Extract configuration. Extract stores these operations until it receives commit records or rollbacks for the transactions that contain them. When a rollback is received, Extract discards the operations for that transaction. When a commit is received, Extract persists the transaction to disk in a series of files called a trail, where it is queued for propagation to the target system. All of the operations in each transaction are written to the trail as a sequentially organized transaction unit. This design ensures both speed and data integrity.
NOTE Extract ignores operations on objects that are not in the Extract configuration, even though the same transaction may also include operations on objects that are in the Extract configuration.
Multiple Extract processes can operate on different objects at the same time. For example, two Extract processes can extract and transmit in parallel to two Replicat processes (with two persistence trails) to minimize target latency when the databases are large. To differentiate among different Extract processes, you assign each one a group name (see "Overview of groups" on page 16).
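As a concrete illustration, a change-synchronization Extract group might be defined and linked to a local trail roughly as follows. This is a minimal sketch, not taken from the text: the group name exta, the ggadmin credential, the hr tables, and the trail path ./dirdat/aa are all hypothetical.

```
-- dirprm/exta.prm : parameter file for a primary Extract (names are hypothetical)
EXTRACT exta
USERID ggadmin, PASSWORD *******
-- Local trail (two-character name "aa") to which committed transactions are written
EXTTRAIL ./dirdat/aa
-- Objects whose DML (and DDL, where supported) is captured
TABLE hr.employees;
TABLE hr.departments;

-- In GGSCI: register the group against the transaction log and link it to the trail
-- ADD EXTRACT exta, TRANLOG, BEGIN NOW
-- ADD EXTTRAIL ./dirdat/aa, EXTRACT exta
-- START EXTRACT exta
```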
Overview of data pumps
A data pump is a secondary Extract group within the source Oracle GoldenGate configuration. If a data pump is not used, Extract must send the captured data operations to a remote trail on the target. In a typical configuration with a data pump, however, the primary Extract group writes to a trail on the source system. The data pump reads this trail and sends the data operations over the network to a remote trail on the target. The data pump adds storage flexibility and also serves to isolate the primary Extract process from TCP/IP activity.
In general, a data pump can perform data filtering, mapping, and conversion, or it can be configured in pass-through mode, where data is passively transferred as-is, without manipulation. Pass-through mode increases the throughput of the data pump, because all of the functionality that looks up object definitions is bypassed.
In most business cases, you should use a data pump. Some reasons for using a data pump include the following:
● Protection against network and target failures: In a basic Oracle GoldenGate configuration, with only a trail on the target system, there is nowhere on the source system to store the data operations that Extract continuously extracts into memory. If the network or the target system becomes unavailable, Extract could run out of memory and abend. However, with a trail and data pump on the source system, captured data can be moved to disk, preventing the abend of the primary Extract. When connectivity is restored, the data pump captures the data from the source trail and sends it to the target system(s).
● You are implementing several phases of data filtering or transformation. When using complex filtering or data transformation configurations, you can configure a data pump to perform the first transformation either on the source system, on the target system, or even on an intermediary system, and then use another data pump or the Replicat group to perform the second transformation.
● Consolidating data from many sources to a central target. When synchronizing multiple source databases with a central target database, you can store extracted data operations on each source system and use data pumps on each of those systems to send the data to a trail on the target system. Dividing the storage load between the source and target systems reduces the need for massive amounts of space on the target system to accommodate data arriving from multiple sources.
● Synchronizing one source with multiple targets. When sending data to multiple target systems, you can configure data pumps on the source system for each target. If network connectivity to any of the targets fails, data can still be sent to the other targets.
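A pass-through data pump that reads the local trail and ships it to the target could be sketched like this; the group, host, and trail names are hypothetical, not from the text.

```
-- dirprm/pmp1.prm : parameter file for a data-pump Extract (names are hypothetical)
EXTRACT pmp1
-- PASSTHRU skips object-definition lookups, maximizing pump throughput
PASSTHRU
-- Target Manager and the remote trail the pump writes to
RMTHOST target01, MGRPORT 7809
RMTTRAIL ./dirdat/bb
TABLE hr.*;

-- In GGSCI: link the pump to the local trail it reads and the remote trail it writes
-- ADD EXTRACT pmp1, EXTTRAILSOURCE ./dirdat/aa
-- ADD RMTTRAIL ./dirdat/bb, EXTRACT pmp1
```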
Overview of Replicat
The Replicat process runs on the target system, reads the trail on that system, and then reconstructs the DML or DDL operations and applies them to the target database. You can configure Replicat in one of the following ways:
● Initial loads: For initial data loads, Replicat can apply a static data copy to target objects or route it to a high-speed bulk-load utility.
● Change synchronization: When configured for change synchronization, Replicat applies the replicated source operations to the target objects using a native database interface or ODBC, depending on the database type. To preserve data integrity, Replicat applies the replicated operations in the same order as they were committed to the source database.
You can use multiple Replicat processes with multiple Extract processes in parallel to increase throughput. To preserve data integrity, each set of processes handles a different set of objects. To differentiate among Replicat processes, you assign each one a group name (see "Overview of groups" on page 16).
You can delay Replicat so that it waits a specific amount of time before applying the replicated operations to the target database. A delay may be desirable, for example, to prevent the propagation of errant SQL, to control data arrival across different time zones, or to allow time for other planned events to occur. The length of the delay is controlled by the DEFERAPPLYINTERVAL parameter.
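A change-synchronization Replicat with a deliberate apply delay might look like the following sketch; the group name, credential, and table mappings are hypothetical, and the 10-minute delay is only an example value.

```
-- dirprm/repa.prm : parameter file for a Replicat group (names are hypothetical)
REPLICAT repa
USERID ggadmin, PASSWORD *******
-- Wait 10 minutes before applying replicated operations to the target
DEFERAPPLYINTERVAL 10 MINS
-- Map replicated source objects to target objects
MAP hr.employees, TARGET hr.employees;
MAP hr.departments, TARGET hr.departments;
```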
Overview of trails
To support the continuous extraction and replication of database changes, Oracle GoldenGate stores records of the captured changes temporarily on disk in a series of files called a trail. A trail can exist on the source system, an intermediary system, the target system, or any combination of those systems, depending on how you configure Oracle GoldenGate. On the local system it is known as an extract trail (or local trail). On a remote system it is known as a remote trail.
By using a trail for storage, Oracle GoldenGate supports data accuracy and fault tolerance (see "Overview of checkpoints" on page 14). The use of a trail also allows extraction and replication activities to occur independently of each other. With these processes separated, you have more choices for how data is processed and delivered. For example, instead of extracting and replicating changes continuously, you could extract changes continuously but store them in the trail for replication to the target later, whenever the target application needs them.
Processes that write to, and read, a trail
The primary Extract and the data-pump Extract write to a trail. Only one Extract process can write to a trail, and each Extract must be linked to a trail.
Processes that read the trail are:
● Data-pump Extract: Extracts DML and DDL operations from a local trail that is linked to a previous Extract (typically the primary Extract), performs further processing if needed, and transfers the data to a trail that is read by the next Oracle GoldenGate process downstream (typically Replicat, but could be another data pump if required).
● Replicat: Reads the trail and applies replicated DML and DDL operations to the target database.
Trail creation and maintenance
The trail files themselves are created as needed during processing, but you specify a two-character name for the trail when you add it to the Oracle GoldenGate configuration with the ADD RMTTRAIL or ADD EXTTRAIL command. By default, trails are stored in the dirdat sub-directory of the Oracle GoldenGate directory.
Full trail files are aged automatically to allow processing to continue without interruption for file maintenance. As each new file is created, it inherits the two-character trail name appended with a unique, six-digit sequence number from 000000 through 999999 (for example, c:\ggs\dirdat\tr000001). When the sequence number reaches 999999, the numbering starts over at 000000.
You can create more than one trail to separate the data from different objects or applications. You link the objects that are specified in a TABLE or SEQUENCE parameter to a trail that is specified with an EXTTRAIL or RMTTRAIL parameter in the Extract parameter file. Aged trail files can be purged by using the Manager parameter PURGEOLDEXTRACTS.
To maximize throughput, and to minimize I/O load on the system, extracted data is sent into and out of a trail in large blocks. Transactional order is preserved. By default, Oracle GoldenGate writes data to the trail in canonical format, a proprietary format which allows it to be exchanged rapidly and accurately among heterogeneous databases. However, data can be written in other formats that are compatible with different applications.
For additional information about the trail and the records it contains, see Appendix 2 on page 562.
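The trail-management commands and parameters described above fit together roughly as follows; the paths, size, and group names are hypothetical.

```
-- In GGSCI: create trails with a two-character name linked to an Extract group
-- ADD EXTTRAIL ./dirdat/aa, EXTRACT exta, MEGABYTES 100   (local trail)
-- ADD RMTTRAIL ./dirdat/bb, EXTRACT pmp1                  (remote trail)

-- In the Manager parameter file (mgr.prm): purge trail files that every
-- reader has fully processed, based on their checkpoints
PURGEOLDEXTRACTS ./dirdat/aa, USECHECKPOINTS
```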
Overview of extract files
In some configurations, Oracle GoldenGate stores extracted data in an extract file instead of a trail. The extract file can be a single file, or it can be configured to roll over into multiple files in anticipation of limitations on file size that are imposed by the operating system. In this sense, it is similar to a trail, except that checkpoints are not recorded. The file or files are created automatically during the run. The same versioning features that apply to trails also apply to extract files.
Overview of checkpoints
Checkpoints store the current read and write positions of a process to disk for recovery purposes. Checkpoints ensure that data changes that are marked for synchronization actually are captured by Extract and applied to the target by Replicat, and they prevent redundant processing. They provide fault tolerance by preventing the loss of data should the system, the network, or an Oracle GoldenGate process need to be restarted. For complex synchronization configurations, checkpoints enable multiple Extract or Replicat processes to read from the same set of trails.
Checkpoints work with inter-process acknowledgments to prevent messages from being lost in the network. Oracle GoldenGate has a proprietary guaranteed-message delivery technology.
Extract creates checkpoints for its positions in the data source and in the trail. Because Extract only captures committed transactions, it must keep track of operations in all open transactions, in the event that any of them are committed. This requires Extract to record a checkpoint where it is currently reading in a transaction log, plus the position of the start of the oldest open transaction, which can be in the current or any preceding log.
To control the amount of transaction log that must be re-processed after an outage, Extract persists the current state and data of processing to disk at specific intervals, including the state and data (if any) of long-running transactions. If Extract stops after one of these intervals, it can recover from a position within the previous interval or at the last checkpoint, instead of having to return to the log position where the oldest open long-running transaction first appeared. For more information, see the BR parameter in the Oracle GoldenGate Windows and UNIX Reference Guide.
Replicat creates checkpoints for its position in the trail. Replicat stores its checkpoints in a checkpoint table in the target database to couple the commit of its transaction with its position in the trail file. The checkpoint table guarantees consistency after a database recovery by ensuring that a transaction will only be applied once, even if there is a failure of the Replicat process or the database process. For reporting purposes, Replicat also has a checkpoint file on disk in the dirchk sub-directory of the Oracle GoldenGate directory.
Checkpoints are not required for non-continuous types of configurations that can be re-run from a start point if needed, such as initial loads.
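The checkpoint table described above is created and coupled to a Replicat group in GGSCI along these lines; the schema, table, and group names are hypothetical.

```
-- In GGSCI (names are hypothetical)
DBLOGIN USERID ggadmin, PASSWORD *******
-- Create the checkpoint table in the target database
ADD CHECKPOINTTABLE ggadmin.ggs_checkpoints
-- Couple the Replicat group's trail position to that table so a transaction
-- is applied exactly once even after a database or process failure
ADD REPLICAT repa, EXTTRAIL ./dirdat/bb, CHECKPOINTTABLE ggadmin.ggs_checkpoints
```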
Overview of Manager
Manager is the control process of Oracle GoldenGate. Manager must be running on each system in the Oracle GoldenGate configuration before Extract or Replicat can be started, and Manager must remain running while those processes are running so that resource management functions are performed. Manager performs the following functions:
● Start Oracle GoldenGate processes
● Start dynamic processes
● Maintain port numbers for processes
● Perform trail management
● Create event, error, and threshold reports
One Manager process can control many Extract or Replicat processes. On Windows systems, Manager can run as a service. For more information about the Manager process and configuring TCP/IP connections, see Chapter 3.
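A Manager parameter file exercising these functions might be sketched as follows; the port numbers and retry values are hypothetical examples.

```
-- dirprm/mgr.prm : Manager parameter file (values are hypothetical)
PORT 7809
-- Ports Manager may assign to dynamic processes such as Collector
DYNAMICPORTLIST 7810-7820
-- Trail management: purge trail files that all readers have processed
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS
-- Restart Extract/Replicat processes that end abnormally
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5
```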
Overview of Collector
Collector is a process that runs in the background on the target system when continuous online change synchronization is active. Collector does the following:
● Upon a connection request from a remote Extract to Manager, scan and bind to an available port, and then send the port number to Manager for assignment to the requesting Extract process.
● Receive extracted database changes that are sent by Extract and write them to a trail file.
Manager starts Collector automatically when a network connection is required, so Oracle GoldenGate users do not interact with it. Collector can receive information from only one Extract process, so there is one Collector for each Extract that you use. Collector terminates when the associated Extract process terminates.
NOTE: Collector can be run manually, if needed. This is known as a static Collector (as opposed to the regular, dynamic Collector). Several Extract processes can share one static Collector; however, a one-to-one ratio is optimal. A static Collector can be used to ensure that the process runs on a specific port. For more information about the static Collector, see the Oracle GoldenGate Windows and UNIX Reference Guide. For more information about how Manager assigns ports, see Chapter 3.
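As a sketch of the static Collector described in the note, the Collector program can be started manually on the target so that it binds to a known port. The port number below is an illustrative assumption:

```
$ server -p 7819
```

On the source, the Extract parameter file would then typically address that port directly (for example, RMTHOST target1, PORT 7819) instead of letting Manager assign one; confirm the exact options in the Reference Guide.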
By default, Extract initiates TCP/IP connections from the source system to Collector on the target, but Oracle GoldenGate can be configured so that Collector initiates connections from the target. Initiating connections from the target might be required if, for example, the target is in a trusted network zone but the source is in a less trusted zone. For information about this configuration, see page 136.
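For the default direction described above, the connection endpoint is declared on the source in the Extract parameter file. This fragment is a sketch; the group, host, port, trail, and schema names are hypothetical:

```
EXTRACT exta
-- connect from the source to Manager on the target, which starts a Collector
RMTHOST target1, MGRPORT 7809
-- remote trail on the target that Collector writes to
RMTTRAIL ./dirdat/rt
TABLE hr.*;
```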
Overview of process types
Depending on requirements, Oracle GoldenGate can be configured with the following processing types.
● An online Extract or Replicat process runs until stopped by a user. Online processes maintain recovery checkpoints in the trail so that processing can resume after interruptions. You use online processes to continuously extract and replicate DML and DDL operations (where supported) to keep source and target objects synchronized. The EXTRACT and REPLICAT parameters apply to this process type.
● A source-is-table Extract process extracts a current set of static data directly from the source objects in preparation for an initial load to another database. This process type does not use checkpoints. The SOURCEISTABLE parameter applies to this process type.
● A special-run Replicat process applies data within known begin and end points. You use a special Replicat run for initial data loads, and it also can be used with an online Extract to apply data changes from the trail in batches, such as once a day rather than continuously. This process type does not maintain checkpoints, because the run can be started over with the same begin and end points. The SPECIALRUN parameter applies to this process type.
● A remote task is a special type of initial-load process in which Extract communicates directly with Replicat over TCP/IP. Neither a Collector process nor temporary disk storage in a trail or file is used. The task is defined in the Extract parameter file with the RMTTASK parameter.
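As one hedged example of these process types, an initial-load (source-is-table) Extract that sends data directly to a Replicat remote task might be defined as follows. The group, user, host, and schema names are hypothetical, and the password is a placeholder:

```
GGSCI> ADD EXTRACT initext, SOURCEISTABLE

-- initext.prm
EXTRACT initext
USERID ggsuser, PASSWORD *****       -- database login (placeholder)
RMTHOST target1, MGRPORT 7809
RMTTASK REPLICAT, GROUP initrep      -- remote task: no Collector, no trail
TABLE hr.*;
```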
Overview of groups
To differentiate among multiple Extract or Replicat processes on a system, you define processing groups. For example, to replicate different sets of data in parallel, you would create two Replicat groups.
A processing group consists of a process (either Extract or Replicat), its parameter file, its checkpoint file, and any other files associated with the process. For Replicat, a group also includes the associated checkpoint table.
You define groups by using the ADD EXTRACT and ADD REPLICAT commands in the Oracle GoldenGate command interface, GGSCI. For permissible group names, see those commands in the Oracle GoldenGate Windows and UNIX Reference Guide.
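To illustrate, creating an online Extract group, its local trail, and a Replicat group in GGSCI might look like the following sketch. The group names, trail path, and checkpoint table are hypothetical:

```
GGSCI> ADD EXTRACT finext, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/fa, EXTRACT finext
GGSCI> ADD REPLICAT finrep, EXTTRAIL ./dirdat/fa, CHECKPOINTTABLE ggsuser.ckpt
```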
All files and checkpoints relating to a group share the name that is assigned to the group itself. Any time that you issue a command to control or view processing, you supply a group name or multiple group names by means of a wildcard.
Overview of the Commit Sequence Number (CSN)
When using Oracle GoldenGate, you might need to refer to a commit sequence number, or CSN. The CSN is an identifier that Oracle GoldenGate constructs to maintain transactional consistency and data integrity. It uniquely identifies the point in time at which a transaction committed to the database. A CSN may be required to position Extract in the transaction log, to reposition Replicat in the trail, or for other purposes. It is returned by some conversion functions and is included in reports and in some GGSCI output. For more information about the CSN and a list of CSN values for each database, see the appendix on page 559.
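As a hedged example of using a CSN, Replicat can be started from a known commit point. The group name and CSN value below are hypothetical, and the exact option syntax should be confirmed in the Reference Guide:

```
GGSCI> START REPLICAT finrep, ATCSN 6488359
-- or, to begin with the first transaction after that point:
GGSCI> START REPLICAT finrep, AFTERCSN 6488359
```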