2025-01-18 Update From: SLTechnology News&Howtos > Database
Shulou(Shulou.com)06/01 Report--
1. Unidirectional replication structure
Source database --(Capture)--> trail file --(Pump)--> network (Internet or intranet) --> trail file --(Delivery)--> target database
This structure is used to replicate data from a single source system to one or more target systems.
Three processes:
Extract (capture) process
Data pump process
Replicate (delivery) process.
The queue files (known in OGG as trail files) are binary.
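Assuming the topology above, the three processes could be created in GGSCI along these lines. This is a hedged sketch: the group names ext1/pmp1/rep1 and the trail paths are placeholders, not from the original.

```
-- On the source system: primary Extract reading the transaction log,
-- writing a local trail
GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1

-- On the source system: data pump reading the local trail,
-- writing a remote trail on the target
GGSCI> ADD EXTRACT pmp1, EXTTRAILSOURCE ./dirdat/lt
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pmp1

-- On the target system: Replicat reading the delivered trail
GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt
```

Note that ADD REPLICAT normally also requires a checkpoint table (or the NODBCHECKPOINT option), omitted here for brevity.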
2. Bidirectional (active-active) replication structure
This structure can provide high availability, similar to Oracle Active Data Guard (ADG); OGG and ADG offer similar protection but suit different scenarios, since ADG maintains a physical standby while OGG keeps two independently writable databases synchronized.
3. Real-time data warehouse structure
All of the enterprise data to be integrated is consolidated into one database, similar in spirit to Hadoop; the consolidated data can then be used by big data technologies.
4. Real-time data distribution
All or part of the data can be distributed from one data source to target systems in different geographic locations.
5. Data distribution via formatted output
Data can also be delivered in application-readable formats, for example as files that tools such as Microsoft Excel can consume, allowing distribution to different database types.
6. Understand SCN (System Change Numbers)
In Oracle GoldenGate, the SCN serves as the unique identifier the replication process uses to replicate transactions in an Oracle database. Microsoft SQL Server and MySQL have analogous identifiers (the LSN and the binary log position, respectively).
When the capture process starts, the current SCN is recorded. The SCN can be obtained from the v$database view or from the dbms_flashback package.
Get the SQL of SCN:
SQL> select current_scn from v$database;
SQL> select dbms_flashback.get_system_change_number from dual;
Note: use the gv$database view to get the SCN in the RAC environment.
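One common use of the recorded SCN is to start the Replicat exactly after the initial-load point, so no transaction is applied twice. A sketch (the process name and SCN value are placeholders):

```
GGSCI> START REPLICAT rep1, AFTERCSN 5965047
```

AFTERCSN tells Replicat to apply only transactions that committed after the given SCN.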
7. OGG process
(1) Manager process
The Manager process is the parent of all OGG processes. As long as any replication process exists, it must be running on each of the systems involved. Its main functions are:
A. Start and restart OGG processes
B. Start dynamic processes
C. Allocate port numbers for processes
D. Manage trail (queue) files
E. Report events, errors, and thresholds
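A minimal Manager parameter file (./dirprm/mgr.prm) illustrating these duties; the port numbers and paths below are example values, not from the original.

```
PORT 7809                          -- Manager's listening port
DYNAMICPORTLIST 7810-7820          -- ports handed out to Collector processes
AUTOSTART EXTRACT *                -- start Extract groups when Manager starts
AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS   -- purge trails already consumed
```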
(2) Collector process
A background process that runs on the target (delivery) side during online synchronization. Its main tasks are:
A. Obtain a listening port from Manager and accept the connection from the source-side extraction process.
B. Receive the transactions sent from the source side and write the data to the local trail files.
OGG assigns one Collector to each incoming extraction process; when that extraction process ends, the Collector ends as well.
(3) Capture (Extract) process
The Extract process in OGG obtains data changes from the online transaction log (for example, Oracle's redo log) as they occur, which is how data synchronization is achieved.
An Extract can be configured in one of two ways:
A. Initial load: used for data initialization, usually a static import of existing data, often run as a one-time SPECIALRUN task.
B. Change synchronization: continuously propagates changes from the source side to the target side.
Each capture process typically needs roughly 25-55 MB of memory.
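Hedged parameter-file sketches for the two configurations; the user, password, host, and table names are placeholders.

```
-- Change-synchronization Extract (./dirprm/ext1.prm)
EXTRACT ext1
USERID ogg, PASSWORD ogg
EXTTRAIL ./dirdat/lt
TABLE scott.emp;

-- Initial-load task: SOURCEISTABLE reads the source tables directly
-- (no TRANLOG; run once as a SPECIALRUN-style task)
SOURCEISTABLE
USERID ogg, PASSWORD ogg
RMTHOST tgt.example.com, MGRPORT 7809
RMTFILE ./dirdat/initld, MEGABYTES 200, PURGE
TABLE scott.emp;
```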
Integrated capture was introduced in the OGG 11.2 release line and supports only Oracle Database 11.2.0.3 and later. An integrated capture runs inside the Oracle database and interacts with its log mining server.
Several processes inside the Oracle database run on behalf of integrated capture as part of the log mining server configuration.
The log mining server consists of the following components:
Reader: reads and splits the redo log.
Preparer: scans the redo log and prefilters transactions, working in parallel.
Builder: merges the prepared redo records in SCN order.
Capture: formats the merged records as logical change records and hands them to the Extract process, which writes them to the local trail file.
Note: size the streams_pool_size SGA parameter according to the actual workload to get better performance, since integrated capture draws its memory from the Streams pool.
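A sketch of enabling integrated capture under these assumptions; the pool size, credentials, and group name are example values.

```
-- Give the log mining server memory from the Streams pool
SQL> alter system set streams_pool_size = 1G scope = both;

-- Register the Extract group with the database
GGSCI> DBLOGIN USERID ogg, PASSWORD ogg
GGSCI> REGISTER EXTRACT ext1 DATABASE

-- In the Extract parameter file, cap the mining server's SGA use (MB)
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 256)
```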
(4) Data pump process
The data pump group is in fact a secondary Extract group that helps move data across the network: its main function is to transfer the local trail files to the remote target system. Although the pump is configured much like a capture Extract, the two do not conflict.
Why use a data pump? It protects the data when there is a problem with the network: the primary Extract keeps writing to the local trail, and the pump retransmits once the network recovers, avoiding data loss and inconsistency.
Unlike the capture process, the data pump needs very little configuration, and it is available only in classic mode.
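A minimal data pump parameter file (./dirprm/pmp1.prm); the host name and trail paths are placeholders. PASSTHRU tells the pump to forward the data without any filtering or transformation.

```
EXTRACT pmp1
PASSTHRU
RMTHOST tgt.example.com, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE scott.*;
```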
(5) Delivery (Replicat) process
The delivery process is the data-apply process in an OGG environment. Its task is to read the trail files, reassemble transactions in commit order (by SCN), and apply them to the target database.
The delivery process has three modes:
Classic delivery: the default mode of OGG, managed at the operating-system level. Memory requirement: roughly 25-55 MB.
Coordinated delivery: similar to classic delivery, except that a coordinator Replicat spawns subordinate apply threads that it coordinates. By applying in parallel, one coordinated Replicat can split a large workload so that it behaves like many small transactions.
Integrated delivery: introduced in OGG 12c, integrated delivery runs inside the Oracle database and uses primary key, foreign key, and unique key constraints to compute dependencies between transactions before applying them.
Integrated delivery consists of four components: Receiver, Preparer, Coordinator, and Apply (n).
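Sketches of classic versus integrated Replicat configuration; the group, schema, and trail names are placeholders.

```
-- Classic delivery (./dirprm/rep1.prm)
REPLICAT rep1
USERID ogg, PASSWORD ogg
ASSUMETARGETDEFS
MAP scott.*, TARGET scott.*;

-- Integrated delivery: add the group with INTEGRATED,
-- then tune apply parallelism in the parameter file
-- GGSCI> ADD REPLICAT rep1, INTEGRATED, EXTTRAIL ./dirdat/rt
DBOPTIONS INTEGRATEDPARAMS (PARALLELISM 4)
```

A coordinated Replicat would instead be added with the COORDINATED option and spread its MAP work across threads.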
(6) Trail (queue) files
Trail files are OGG-specific binary files that hold the transactions moving through the OGG architecture. They enable continuous data extraction and replication as data changes, and they can be stored on the local system or on the remote target system.