
Example Analysis of a DataGuard Single-Instance-to-RAC Build

2025-02-27 Update From: SLTechnology News&Howtos (Database)


Shulou(Shulou.com)05/31 Report--

This article walks through an example of building a Data Guard standby from a single instance to RAC. I hope you find it a useful reference.

The context is a migration of a RAC database on the Windows platform to a RAC database on Linux. The basic steps are:

1. Build a Data Guard standby from the Windows RAC to a Linux single-instance database.

2. Perform a switchover and change the standby's IP to the SCAN IP of the original RAC database.

3. Build a Data Guard standby from the single instance to a Linux RAC (whose SCAN IP differs from the original RAC's), then switch over.

4. Change the RAC database's SCAN IP to the original RAC's SCAN IP, change the single-node standby's IP to the original database's IP, update the listener and tnsnames.ora files, and bring the disaster-recovery standby back into operation.

The database version is Oracle 11.2.0.4, a two-node RAC on the Windows platform; the data volume is about 2.5 TB and the downtime window is about 15 minutes.

This article covers only step 3: how to build a Data Guard standby from a single node to RAC. In this example, the IPs of the two RAC nodes are 192.168.100.101 and 192.168.100.102, and the single instance's IP is 192.168.100.100.

Implementation steps:

1. Preparation phase:

At this stage we do some preparatory configuration of the databases, such as making sure archiving is enabled.

Single-instance primary database:

1) select force_logging from v$database;  -- make sure force logging is enabled on the primary

2) archive log list;  -- make sure the primary is in archive mode

3) Add standby redo logs to the single-instance primary. The advantage is that there is no need to add standby redo when doing the switchover, and after the standby is restored from the full backup it will create its standby redo logs automatically, with no manual steps. As a rule of thumb, configure one more standby redo group than online redo groups; the number of members per group is up to you, usually 1.
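As a sketch, assuming the primary has three online redo groups of 512M each (the group numbers, size, and file paths below are illustrative, not taken from this environment), the standby redo logs could be added like this:

```sql
-- One more standby redo group than online redo groups (3 online -> 4 standby).
-- The size must match the online redo log size.
alter database add standby logfile thread 1 group 11 ('/oradata/orcl/onlinelog/srl11.log') size 512M;
alter database add standby logfile thread 1 group 12 ('/oradata/orcl/onlinelog/srl12.log') size 512M;
alter database add standby logfile thread 1 group 13 ('/oradata/orcl/onlinelog/srl13.log') size 512M;
alter database add standby logfile thread 1 group 14 ('/oradata/orcl/onlinelog/srl14.log') size 512M;
```

You can verify the result with `select group#, thread#, status from v$standby_log;`.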

RAC standby database:

1) Install the RAC software on both nodes following the normal steps, but do not create a database; the +DATA disk group must be created in advance.

2) Add the database and the node-1 instance to the cluster registry:

srvctl add database -d orcl_st -n orcl -o $ORACLE_HOME -s open -a "DATA,FRA" -r physical_standby

srvctl add instance -d orcl_st -i orcl1 -n node1

2. Parameter files:

Modify the primary online:

alter system set log_archive_config='DG_CONFIG=(orcl,orcl_st)' scope=both sid='*';
alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl' scope=both sid='*';
alter system set log_archive_dest_2='SERVICE=orcl_st reopen=120 lgwr async VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl_st' scope=both sid='*';
alter system set fal_server=orcl_st scope=both sid='*';
alter system set db_file_name_convert='/oradata/orcl/datafile','+data/orcl/datafile','/oradata/orcl/tempfile','+data/orcl/tempfile' scope=spfile sid='*';
alter system set log_file_name_convert='/oradata/orcl/onlinelog','+data/orcl/onlinelog' scope=spfile sid='*';
alter system set standby_file_management=AUTO scope=both sid='*';

Note that log_file_name_convert does not need to map the path under db_recovery_file_dest as well, because online logs in the primary's fast recovery area are automatically mapped to the corresponding location in the standby's fast recovery area.

If the data files are scattered across several paths, map all of them to '+data/orcl/datafile' for easier management.
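For illustration, if the primary held data files under two directories (the second path below is hypothetical), the (source, target) pairs would simply be listed in order:

```sql
-- Each pair maps one primary path to a standby path;
-- '/oradata2/orcl/datafile' is a hypothetical second location.
alter system set db_file_name_convert=
  '/oradata/orcl/datafile','+data/orcl/datafile',
  '/oradata2/orcl/datafile','+data/orcl/datafile'
  scope=spfile sid='*';
```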

Standby parameter file:

*.__oracle_base='/u01/app/oracle'  # ORACLE_BASE set from environment
*.audit_file_dest='/u01/app/oracle/admin/orcl/adump'  # create this directory in advance
*.audit_trail='db'
*.compatible='11.2.0.4.0'
*.cluster_database=true
*.control_files='+DATA/orcl/controlfile/control01.ctl'  # Restore Controlfile
*.db_block_size=8192
*.db_domain=''
*.db_name='orcl'
*.db_unique_name='orcl_st'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=5218762752
*.diagnostic_dest='/u01/app/oracle'
*.fal_server='ORCL'
*.log_archive_config='DG_CONFIG=(orcl,orcl_st)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl_st'
*.log_archive_dest_2='service=orcl reopen=120 lgwr async valid_for=(online_logfiles,primary_role) db_unique_name=orcl'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.memory_target=1073741824
*.open_cursors=500
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_manager_plan=''
*.standby_file_management='AUTO'
orcl1.instance_name=orcl1
orcl1.instance_number=1
orcl1.undo_tablespace='UNDOTBS1'
orcl1.thread=1
orcl1.local_listener='(address=(protocol=TCP)(HOST=192.168.100.103)(PORT=1521))'  # the VIP of node 1
*.remote_listener='(address=(protocol=TCP)(HOST=192.168.100.105)(PORT=1521))'  # the SCAN IP of the RAC

After modification, rename it to initorcl1.ora and put it in the $ORACLE_HOME/dbs directory.

3. Modify the tnsnames.ora file

Modify the tnsnames.ora file on the single-instance primary as follows, and copy it to all standby nodes.

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.100)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

ORCL_ST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.101)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = orcl1)
    )
  )

4. Password file

Copy the password file orapw from the single-instance primary to all standby nodes, renaming it to orapworcl1 and orapworcl2 respectively.

5. Take a full backup of the primary database and copy it to standby node 1. (Details omitted.)

Before doing this, make sure the primary's backup jobs have been stopped, or that the RMAN ARCHIVELOG DELETION POLICY is set to APPLIED ON STANDBY, so that archived logs still needed by the standby are not deleted.
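If you take the deletion-policy route, a minimal RMAN configuration on the primary would look like this (standard RMAN syntax; the article does not show the exact command):

```sql
-- Run in RMAN connected to the primary: archived logs become eligible
-- for deletion only after they have been applied on the standby.
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
```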

6. After the backup has been transferred to the standby, back up a standby control file on the primary:

backup current controlfile for standby format 'xxx';

7. Copy the backed-up standby control file to standby node 1.

8. Start the node-1 instance to the nomount state using the standby parameter file:

startup nomount

9. Use RMAN to restore the standby control file on standby node 1:

restore standby controlfile from 'xxx';  -- the xxx path is the location of the standby control file copied in step 7

alter database mount;

10. Register the backup set on the standby and restore the data files.

catalog start with 'xxx';  -- the path of the directory containing the backup

Then restore the backup:

run {
  allocate channel c1 type disk;
  allocate channel c2 type disk;
  allocate channel c3 type disk;
  allocate channel c4 type disk;
  allocate channel c5 type disk;
  allocate channel c6 type disk;
  allocate channel c7 type disk;
  allocate channel c8 type disk;
  set newname for datafile 1 to '+DATA/orcl/datafile/system01.dbf';
  set newname for datafile 2 to '+DATA/orcl/datafile/sysaux01.dbf';
  set newname for datafile 3 to '+DATA/orcl/datafile/undotbs101.dbf';
  set newname for datafile 4 to '+DATA/orcl/datafile/users01.dbf';
  ...
  -- write one line per data file in the primary, in the form:
  -- set newname for datafile file_id to 'file_name';
  restore database;
  switch datafile all;
}

11. After the above operations are complete, configure and start the listener on standby node 1.

Either netca or netmgr works. The pmon process automatically registers the node-1 instance with the listener; the default service_name is the db_unique_name, which in this case is orcl_st.

12. Start the MRP process on the standby (still in mount state):

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

The statement to cancel MRP is:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

13. Watch the alert log on standby node 1 to follow the progress of synchronization.
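Besides the alert log, the standard Data Guard views give a quick picture of apply progress. A sketch, run on the standby:

```sql
-- Which recovery processes are running and what sequence MRP is applying
select process, status, thread#, sequence# from v$managed_standby;

-- Highest archived log sequence applied per thread
select thread#, max(sequence#)
  from v$archived_log
 where applied = 'YES'
 group by thread#;
```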

14. After the standby has caught up, cancel the MRP process, open the standby database, and restart MRP.
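The sequence in this step can be sketched as follows, run on the standby:

```sql
alter database recover managed standby database cancel;
alter database open;  -- a physical standby opens read only
alter database recover managed standby database using current logfile disconnect;
```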

Thank you for reading this article carefully. I hope this example of building a Data Guard standby from a single instance to RAC is helpful to you.
