2025-01-16 Update From: SLTechnology News&Howtos
Chapter 1: Introduction to Data Guard (DG)
Data Guard (DG for short) is a tool in Oracle's high-availability stack. DG protects data through redundancy, and it keeps the redundant copy synchronized with the primary data through a log-shipping mechanism. This synchronization can be real-time or delayed, synchronous or asynchronous. DG is often used for remote disaster recovery and in high-availability solutions for small businesses. Because read-only queries can be executed on the standby, DG can also offload query pressure from the primary database.
A DG environment contains at least two databases. One provides service in the OPEN state and is called the primary database; the other sits in a recovery state and is called the standby database. At run time, the primary database serves users, and their operations are recorded in the online redo logs and archived logs. These logs are shipped over the network to the standby database, where they are replayed to keep the standby synchronized with the primary.
Chapter 2: DG architecture
By function, the DG architecture can be divided into three parts:
1) Log sending (redo send)
2) Log reception (redo receive)
3) Log application (redo apply)
1. Log sending:
While the primary database runs, it continuously generates redo logs, which must be sent to the standby database. The sending can be performed by the primary's LGWR process or its ARCH process; different archive destinations may use different methods, but each destination can use only one. Which process you choose makes a big difference to both data-protection capability and system availability.
1) Using the ARCH process:
The primary database continuously generates redo, which the LGWR process writes to the online redo logs
When a group of online logs fills up, a log switch occurs and triggers a local archive
Once the local archive completes, the online log can be overwritten and reused
The ARCH process sends the archived log over Oracle Net to the RFS process on the standby database
The RFS process on the standby side writes the received log to an archived log file
The MRP or LSP process on the standby side applies these logs to the standby database to synchronize the data.
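The ARCH-based transport above is driven by the primary's archive-destination settings. The following is a hedged sketch only; the service name easdb_dg and the db_unique_name easdb are assumptions inferred from the directory names later in this article, not the author's exact values:

```sql
-- Assumed names: primary db_unique_name = easdb, standby = easdb_dg
ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(easdb,easdb_dg)';
-- ARCH ships each completed archive log to the standby service
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=easdb_dg ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=easdb_dg';
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;
```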
2) Using the LGWR process:
Logs generated by the primary database are written both to the log file and to the network. That is, while the LGWR process writes redo to the local log file, it also hands the redo to the local LNSn process, which sends it over the network to the remote destination.
In SYNC mode, LGWR must wait for both the local log-file write and the network transfer through the LNSn process to succeed before the transaction on the primary database can commit; in ASYNC mode, the commit does not wait for the network transfer.
A log switch on the primary database also triggers a log switch on the standby database, causing the standby to archive its standby redo log, which in turn triggers the standby's MRP or LSP process to apply the archived log.
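The difference between the two transport modes shows up in the destination attributes. A hedged sketch, using the same assumed names as elsewhere in this article:

```sql
-- LGWR SYNC AFFIRM: the commit on the primary waits for the standby's
-- acknowledged redo write (zero data loss, higher commit latency)
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=easdb_dg LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=easdb_dg';
-- LGWR ASYNC: commits do not wait for the standby (small data loss possible)
```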
2. Log reception:
After receiving redo, the RFS process on the standby writes it either to a standby redo log or to an archived log file; which one depends on the primary's log-transport method and the standby's configuration. If it writes to a standby redo log, then a log switch on the primary also triggers a switch of the standby redo log on the standby, and that standby redo log is archived. If it writes directly to an archived log, the write itself can be regarded as an archiving operation.
3. Log application:
The log apply service replays the primary database's logs on the standby database, thereby keeping the data of the two databases synchronized.
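For a physical standby the apply engine is the MRP process (Redo Apply); for a logical standby it is the LSP process (SQL Apply). A hedged sketch of the two standard commands, run on the standby:

```sql
-- Physical standby: start Redo Apply (MRP) in the background
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
-- Logical standby: start SQL Apply (LSP) instead
ALTER DATABASE START LOGICAL STANDBY APPLY;
```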
Chapter 3: DG configuration process
1. First, enable force logging on the source (primary) database:
Connect to the database with sqlplus / as sysdba
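A DG setup normally requires force logging so that NOLOGGING operations cannot leave unrecoverable gaps on the standby. A hedged sketch:

```sql
-- Run as SYSDBA on the primary
ALTER DATABASE FORCE LOGGING;
SELECT force_logging FROM v$database;   -- expect YES
```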
2. Set the source database to archive (ARCHIVELOG) mode. At this point the database instance is assumed to be already set up.
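If the database is still in NOARCHIVELOG mode, switching it requires a clean restart to the mount state. A hedged sketch:

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST   -- should report "Archive Mode"
```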
3. Add the DG-related parameters to the source database, and restart all database instances after adding them:
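As a hedged sketch, a typical primary-side parameter set for this topology might look like the following; the names easdb/easdb_dg and the /backup archive path are assumptions, not the author's exact values:

```sql
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(easdb,easdb_dg)' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_dest_1='LOCATION=/backup/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=easdb' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_dest_2='SERVICE=easdb_dg LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=easdb_dg' SCOPE=SPFILE;
ALTER SYSTEM SET fal_server='easdb_dg' SCOPE=SPFILE;
ALTER SYSTEM SET standby_file_management=AUTO SCOPE=SPFILE;
```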
4. Check the online redo logfiles (500 MB each) and create standby logfiles
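The usual rule is one more standby log group than online groups, each the same size as the online logs (500 MB here). A hedged sketch with assumed group numbers and paths:

```sql
SELECT group#, bytes/1024/1024 AS size_mb FROM v$log;   -- confirm 500M each
-- Assuming 3 online groups, add 4 standby groups of the same size
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/oradata/easdb/standbylog/stb04.log') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/oradata/easdb/standbylog/stb05.log') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/oradata/easdb/standbylog/stb06.log') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('/oradata/easdb/standbylog/stb07.log') SIZE 500M;
```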
5. Back up the entire source database with RMAN, and after the backup completes, copy the backup files to the corresponding directory on the standby server.
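A hedged sketch of the backup-and-copy step; the file-name formats, the /backup path, and the standby hostname are assumptions:

```shell
rman target / <<'EOF'
BACKUP DATABASE FORMAT '/backup/easdb_db_%U.bkp' PLUS ARCHIVELOG FORMAT '/backup/easdb_arc_%U.bkp';
BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/backup/easdb_stbyctl_%U.bkp';
EOF
# Copy to the same path on the standby server (hostname assumed)
scp /backup/easdb_*.bkp oracle@standby-host:/backup/
```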
6. Create the pfile for the standby database (copy it over from the source database and make the relevant modifications)
7. The modified parameters are as follows:
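A hedged sketch of the modified standby pfile entries; every value below is an assumption based on the directory layout shown in this article:

```text
*.db_name='easdb'
*.db_unique_name='easdb_dg'
*.log_archive_config='DG_CONFIG=(easdb,easdb_dg)'
*.fal_server='easdb'
*.log_archive_dest_1='LOCATION=/backup/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=easdb_dg'
*.log_archive_dest_2='SERVICE=easdb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=easdb'
*.standby_file_management='AUTO'
*.db_file_name_convert='/oradata/easdb/','/oradata/easdb_dg/'
*.log_file_name_convert='/oradata/easdb/','/oradata/easdb_dg/'
*.control_files='/oradata/easdb_dg/controlfile/control01.ctl'
```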
8. Modify the tnsnames.ora of the source database and copy it to the corresponding location on the target database.
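A hedged sketch of the two tnsnames.ora entries both sides need; the hostnames are assumptions:

```text
EASDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = easdb)))

EASDB_DG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = easdb_dg)))
```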
9. Create the required directories on the DG server:
As the oracle user:
mkdir -p /u01/app/oracle/admin/easdb_dg/adump
mkdir -p /oradata/easdb_dg/controlfile/
mkdir -p /oradata/easdb_dg/standbylog/
mkdir -p /oradata/easdb_dg/onlinelog/
mkdir -p /u01/app/oracle/diag/rdbms/easdb_dg/oem/trace/cdump
As root:
mkdir /backup
chown oracle:oinstall /backup
10. Make the source and target database passwords (password files) consistent:
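Redo transport authenticates as SYS, so the standby needs a copy of the primary's password file. A hedged sketch; the file names and hostname are assumptions:

```shell
# Copy the primary's password file; rename it to match the standby SID
scp $ORACLE_HOME/dbs/orapweasdb oracle@standby-host:$ORACLE_HOME/dbs/orapweasdb_dg
```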
11. Start the DG (standby) database in nomount mode:
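A hedged sketch; the pfile path is an assumption:

```sql
-- On the standby, start the instance without mounting
STARTUP NOMOUNT PFILE='/u01/app/oracle/admin/easdb_dg/pfile_easdb_dg.ora'
```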
12. On the source-database side, use RMAN to build the Data Guard standby database:
rman target / auxiliary sys/kingdee123@easdb_dg
DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;
13. After the duplicate completes, create an spfile from the pfile currently used by the target database, then start the database in mount mode:
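A hedged sketch of this step; the pfile path is an assumption:

```sql
CREATE SPFILE FROM PFILE='/u01/app/oracle/admin/easdb_dg/pfile_easdb_dg.ora';
SHUTDOWN IMMEDIATE
STARTUP MOUNT
```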
14. Put the target database into managed (redo apply) recovery mode:
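A hedged sketch of starting and then verifying managed recovery on the standby:

```sql
-- Real-time apply from the standby redo logs, in the background
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
-- Verify: the MRP0 process should show APPLYING_LOG
SELECT process, status, sequence# FROM v$managed_standby;
```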
With that, the task is complete!