Fixing ORA-01578 and ORA-26040 errors on a standby database caused by NOLOGGING operations on the primary


Corrupt blocks caused by NOLOGGING operations (ORA-01578 and ORA-26040) -- error explanation and solution: http://blog.itpub.net/26736162/viewspace-2152783/

It is well known that Data Guard synchronization is based on the redo stream, which is why the primary database must be set to FORCE LOGGING during DG configuration. However, this also brings problems: some SQL runs more slowly. For example, when we use Data Pump for a migration and want to minimize downtime, we may consider a minimal-logging import to speed things up and then resynchronize the standby afterwards.
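For reference (a quick sketch, not part of the original walkthrough), the force-logging state of the primary can be checked and toggled like this:

SQL> select force_logging from v$database;
SQL> alter database force logging;
SQL> alter database no force logging;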

In some scenarios we use NOLOGGING operations to save a lot of data-insertion time. The problem with this is that if the database has a physical standby, the NOLOGGING direct-path inserts on the primary generate no redo, so nothing is shipped to or applied on the standby, and the standby's copy of that data becomes unusable.

On a primary database that has a physical standby, disable force logging, then create a table and set it to NOLOGGING mode:

SQL> alter database no force logging;

SQL> create table DEMO tablespace users pctfree 99 as select rownum n from xmltable('1 to 100');

SQL> alter table DEMO nologging;

Then insert the data with the /*+ append */ hint and commit:

SQL> insert /*+ append */ into DEMO select rownum n from xmltable('1 to 1000');

SQL> commit;

At this point, querying the table on the standby database returns the following error:

SQL> select count(1) from demo;
select count(1) from demo
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 819)
ORA-01110: data file 4: '/data/data1/ORCL2/datafile/o1_mf_users_3ft1e9qb_.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

1.1 11g

In Oracle 11g, you can fix this problem by restoring the affected data files on the standby database. To do that, you need to copy the data file containing the missing data from the primary to the physical standby.

1. Query the primary database

SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                          UNRECOVERABLE_CHANGE#
--------------------------------------------- ---------------------
+DATADG/orcl/datafile/system.270.972381717                        0
+DATADG/orcl/datafile/sysaux.265.972381717                        0
+DATADG/orcl/datafile/undotbs1.261.972381717                      0
+DATADG/orcl/datafile/users.259.972381717                   6252054
+DATADG/orcl/datafile/example.264.972381807                       0
+DATADG/orcl/datafile/undotbs2.258.972381927                      0
+DATADG/orcl/datafile/example.266.972400297                       0
+DATADG/orcl/datafile/ax.268.973612569                            0

2. Query the standby database

sys@ORCL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                                     UNRECOVERABLE_CHANGE#
-------------------------------------------------------- ---------------------
/data/data1/ORCL2/datafile/o1_mf_system_3dt1e9op_.dbf                        0
/data/data1/ORCL2/datafile/o1_mf_sysaux_3ct1e9nb_.dbf                        0
/data/data1/ORCL2/datafile/o1_mf_undotbs1_3gt1e9qq_.dbf                      0
/data/data1/ORCL2/datafile/o1_mf_users_3ft1e9qb_.dbf                   5383754
/data/data1/ORCL2/datafile/o1_mf_example_3et1e9ps_.dbf                       0
/data/data1/ORCL2/datafile/o1_mf_undotbs2_3ht1e9r1_.dbf                      0
/data/data1/ORCL2/datafile/o1_mf_example_3at1e9nb_.dbf                       0
/data/data1/ORCL2/datafile/o1_mf_ax_3bt1e9nb_.dbf                            0

3. Compare the query results of the primary database and the standby database

Compare the values of the UNRECOVERABLE_CHANGE# column in the two result sets. If the value in the primary database is greater than the value for the same file in the standby database, that data file needs to be restored on the standby.
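As a convenience (a sketch of my own, assuming the standby has a database link to the primary named, say, primary_db; this is not part of the original procedure), the comparison can be done in one query run on the standby:

SQL> select s.file#, s.name,
            p.unrecoverable_change# primary_scn,
            s.unrecoverable_change# standby_scn
       from v$datafile s, v$datafile@primary_db p
      where s.file# = p.file#
        and p.unrecoverable_change# > s.unrecoverable_change#;

Any rows returned are the data files that need to be restored on the standby.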

Copy the corresponding data file from the primary database to the standby database:

SQL> alter tablespace users begin backup;

SQL> exit

ASMCMD> cp +DATADG/orcl/datafile/users.259.972381717 /tmp

$ scp /tmp/users.259.972381717 10.10.60.123:/data/data1/ORCL2/datafile/

SQL> alter tablespace users end backup;

On the standby database, RENAME the old data file to the newly copied one:

SQL> startup mount force

SQL> alter database recover managed standby database cancel;

SQL> alter system set standby_file_management=manual;   -- this parameter must be MANUAL while the standby performs the rename

SQL> alter database rename file '/data/data1/ORCL2/datafile/o1_mf_users_3ft1e9qb_.dbf' to '/data/data1/ORCL2/datafile/users.259.972381717';

SQL> alter system set standby_file_management=auto;

SQL> alter database recover managed standby database using current logfile disconnect from session;

Now the DEMO table can be queried on the standby:

SQL> select count(1) from demo;

  COUNT(1)
----------
      1100

1.2 12.1

In version 12.1, RMAN provides a more convenient way: RESTORE DATABASE (or DATAFILE) ... FROM SERVICE restores directly from the primary database over the network, so there is no need to back up and transfer data files on the primary.

Of course, Oracle's RMAN is smart enough that if a data file is in a normal state, it can skip it based on the data file header; only the files containing blocks marked corrupt by NOLOGGING operations need to be restored. There is a FORCE option in the RESTORE command, but we may not need it here: while the real-time redo apply process is still running, the data files keep advancing, so before restoring we first have to stop the apply.

Once redo apply is stopped, there is no need for a RESTORE DATABASE FORCE, because the standby data files are now older than the primary's, so RMAN will not skip these data files even without the FORCE option.

Stop real-time redo apply, then shut the standby down and start it in MOUNT mode:

SQL> alter database recover managed standby database cancel;

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started

Log in to RMAN and use RESTORE DATABASE (or DATAFILE) ... FROM SERVICE to perform the recovery:

RMAN> restore database from service 'primary_db';  # primary_db is the TNS alias pointing from the standby to the primary

Starting restore at 2018-05-03 17:00:35

Using target database control file instead of recovery catalog

Allocated channel: ORA_DISK_1

Channel ORA_DISK_1: SID=29 device type=DISK

Channel ORA_DISK_1: starting datafile backup set restore

Channel ORA_DISK_1: using network backup set from service primary_db

Channel ORA_DISK_1: specifying datafile (s) to restore from backup set

Channel ORA_DISK_1: restoring datafile 00001 to / data/data1/ORCL2/datafile/o1_mf_system_02t1t9ck_.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:25

Channel ORA_DISK_1: starting datafile backup set restore

Channel ORA_DISK_1: using network backup set from service primary_db

Channel ORA_DISK_1: specifying datafile (s) to restore from backup set

Channel ORA_DISK_1: restoring datafile 00003 to / data/data1/ORCL2/datafile/o1_mf_sysaux_03t1t9d3_.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:15

Channel ORA_DISK_1: starting datafile backup set restore

Channel ORA_DISK_1: using network backup set from service primary_db

Channel ORA_DISK_1: specifying datafile (s) to restore from backup set

Channel ORA_DISK_1: restoring datafile 00004 to / data/data1/ORCL2/datafile/o1_mf_undotbs1_04t1t9di_.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Channel ORA_DISK_1: starting datafile backup set restore

Channel ORA_DISK_1: using network backup set from service primary_db

Channel ORA_DISK_1: specifying datafile (s) to restore from backup set

Channel ORA_DISK_1: restoring datafile 00006 to / data/data1/ORCL2/datafile/o1_mf_users_05t1t9dm_.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:15

Finished restore at 2018-05-03 17:01:34

Of course, remember to go back to the standby and restart the real-time redo apply process!

1.3 12.2

In 12.2, Oracle provides an even more convenient way to recover the standby. The list of nonlogged blocks is shipped to the standby database and recorded in the standby control file; the details can be viewed in the standby's v$nonlogged_block view. Instead of sending entire data files from the primary, you simply execute one command in RMAN to recover them:

RECOVER DATABASE NONLOGGED BLOCK

Stop real-time redo apply on the standby database:

SQL> alter database recover managed standby database cancel;

Log in to RMAN and run:

RECOVER DATABASE NONLOGGED BLOCK

Note: before performing this step, make sure that the log_archive_config parameter has been set on both the primary and the standby.
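For illustration only (the DB_UNIQUE_NAME values below are placeholders, not taken from this environment), a typical setting looks like:

SQL> alter system set log_archive_config='DG_CONFIG=(orcl,orcl_stby)' scope=both;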

RMAN> recover database nonlogged block;

Starting recover at 2018-05-03 14:54:22

Using target database control file instead of recovery catalog

Allocated channel: ORA_DISK_1

Channel ORA_DISK_1: SID=56 device type=DISK

Starting recovery of nonlogged blocks

List of Datafiles

=====================================================================

File Status Nonlogged Blocks Blocks Examined Blocks Skipped

---- ------ ---------------- --------------- --------------

1 OK 0 0 107519

3 OK 0 0 262399

4 OK 0 0 149759

5 OK 0 0 31999

6 OK 0 0 42239

7 OK 0 16707 21532

8 OK 0 0 12799

9 OK 0 0 76799

18 OK 0 0 33279

19 OK 0 0 57599

20 OK 0 0 24959

21 OK 0 0 33279

22 OK 0 0 51199

23 OK 0 0 12799

29 OK 0 0 1310719

30 OK 0 0 12799

31 OK 0 0 33279

32 OK 0 0 52479

33 OK 0 0 923519

34 OK 0 16822 8777

35 OK 0 0 12799

37 OK 0 0 24959

Details of nonlogged blocks can be queried from v$nonlogged_block view

Recovery of nonlogged blocks complete, elapsed time: 00:00:08

Finished recover at 2018-05-03 14:54:32
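To double-check on the standby, you can query the v$nonlogged_block view mentioned in the output above (a quick sketch; no rows returned means all nonlogged blocks have been recovered):

SQL> select file#, block#, blocks from v$nonlogged_block;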

Finally, don't forget to restart the real-time redo apply process.

To sum up, this 12.2 feature is worth trying in scenarios such as data warehouses. In the past we turned on FORCE LOGGING, which generates a large amount of redo and hurts the efficiency of some DML. Instead, we can use NOLOGGING operations to save a lot of data-insertion time and then recover the standby during an idle window. Note, however, that this approach has a drawback: the availability of your standby is greatly reduced while it contains nonlogged blocks. There is always a trade-off!

https://docs.oracle.com/cd/B28359_01/server.111/b28294/scenarios.htm#i1015738

13.4.2 Recovery Steps for Physical Standby Databases

When the archived redo log file is copied to the standby site and applied to the physical standby database, a portion of the datafile is unusable and is marked as being unrecoverable. When you either fail over to the physical standby database, or open the standby database for read-only access, and attempt to read the range of blocks that are marked as UNRECOVERABLE, you will see error messages similar to the following:

ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

To recover after the NOLOGGING clause is specified, you need to copy the datafile that contains the missing redo data from the primary site to the physical standby site. Perform the following steps:

Step 1 Determine which datafiles should be copied.

Follow these steps:

Query the primary database:

SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                    UNRECOVERABLE_CHANGE#
----------------------- ---------------------
/oracle/dbs/tbs_1.dbf                    5216
/oracle/dbs/tbs_2.dbf                       0
/oracle/dbs/tbs_3.dbf                       0
/oracle/dbs/tbs_4.dbf                       0
4 rows selected.

Query the standby database:

SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                          UNRECOVERABLE_CHANGE#
----------------------------- ---------------------
/oracle/dbs/stdby/tbs_1.dbf                    5186
/oracle/dbs/stdby/tbs_2.dbf                       0
/oracle/dbs/stdby/tbs_3.dbf                       0
/oracle/dbs/stdby/tbs_4.dbf                       0
4 rows selected.

Compare the query results of the primary and standby databases.

Compare the value of the UNRECOVERABLE_CHANGE# column in both query results. If the value of the UNRECOVERABLE_CHANGE# column in the primary database is greater than the same column in the standby database, then the datafile needs to be copied from the primary site to the standby site.

In this example, the value of the UNRECOVERABLE_CHANGE# in the primary database for the tbs_1.dbf datafile is greater, so you need to copy the tbs_1.dbf datafile to the standby site.

Step 2 On the primary site, back up the datafile you need to copy to the standby site.

Issue the following SQL statements:

SQL> ALTER TABLESPACE system BEGIN BACKUP;
SQL> EXIT;

cp tbs_1.dbf /backup

SQL> ALTER TABLESPACE system END BACKUP;

Step 3 Copy the datafile to the standby database.

Copy the datafile that contains the missing redo data from the primary site to location on the physical standby site where files related to recovery are stored.

Step 4 On the standby database, restart Redo Apply.

Issue the following SQL statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

You might get the following error messages (possibly in the alert log) when you try to restart Redo Apply:

ORA-00308: cannot open archived log 'standby1'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'

If you get the ORA-00308 error and Redo Apply does not terminate automatically, you can cancel recovery by issuing the following statement from another terminal window:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

These error messages are returned when one or more log files in the archive gap have not been successfully applied. If you receive these errors, manually resolve the gaps, and repeat Step 4. See Section 6.3.3.1 for information about manually resolving an archive gap.
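As an aside (a sketch, not part of the quoted documentation), an archive gap can usually be located on the standby with:

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;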

13.4.3 Determining If a Backup Is Required After Unrecoverable Operations

If you performed unrecoverable operations on your primary database, determine if a new backup operation is required by following these steps:

Query the V$DATAFILE view on the primary database to determine the system change number (SCN) or the time at which the Oracle database generated the most recent invalidated redo data.

Issue the following SQL statement on the primary database to determine if you need to perform another backup:

SELECT UNRECOVERABLE_CHANGE#, TO_CHAR(UNRECOVERABLE_TIME, 'mm-dd-yyyy hh:mi:ss') FROM V$DATAFILE;

If the query in the previous step reports an unrecoverable time for a datafile that is more recent than the time when the datafile was last backed up, then make another backup of the datafile in question.

See Oracle Database Reference for more information about the V$DATAFILE view.

https://docs.oracle.com/cd/B28359_01/server.111/b28294/manage_ls.htm#i1016645

10.5.5 Adding or Re-Creating Tables On a Logical Standby Database

Typically, you use the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure to re-create a table after an unrecoverable operation. You can also use this procedure to enable SQL Apply on a table that was formerly skipped.

Before you can create a table, it must meet the requirements described in Section 4.1.2, "Ensure Table Rows in the Primary Database Can Be Uniquely Identified". Then, you can use the following steps to re-create a table named HR.EMPLOYEES and resume SQL Apply. The directions assume that there is already a database link BOSTON defined to access the primary database.
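If the BOSTON link does not exist yet, it could be created on the logical standby along these lines (a sketch only; the user, password, and TNS alias are placeholders, not from the original text):

SQL> CREATE DATABASE LINK BOSTON CONNECT TO dg_user IDENTIFIED BY dg_password USING 'boston';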

The following list shows how to re-create a table and restart SQL Apply on that table:

Stop SQL Apply:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

Ensure no operations are being skipped for the table in question by querying the DBA_LOGSTDBY_SKIP view:

SQL> SELECT * FROM DBA_LOGSTDBY_SKIP;

ERROR  STATEMENT_OPT  OWNER  NAME        PROC
-----  -------------  -----  ----------  -----
N      SCHEMA_DDL     HR     EMPLOYEES
N      DML            HR     EMPLOYEES
N      SCHEMA_DDL     OE     TEST_ORDER
N      DML            OE     TEST_ORDER

Because you already have skip rules associated with the table that you want to re-create on the logical standby database, you must first delete those rules. You can accomplish that by calling the DBMS_LOGSTDBY.UNSKIP procedure. For example:

SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'DML', -
       schema_name => 'HR', -
       object_name => 'EMPLOYEES');
SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'SCHEMA_DDL', -
       schema_name => 'HR', -
       object_name => 'EMPLOYEES');

Re-create the table HR.EMPLOYEES with all its data in the logical standby database by using the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure. For example:

SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE(schema_name => 'HR', -
       object_name => 'EMPLOYEES', -
       dblink => 'BOSTON');

Start SQL Apply:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

See Also:

Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_LOGSTDBY.UNSKIP and the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedures

To ensure a consistent view across the newly instantiated table and the rest of the database, wait for SQL Apply to catch up with the primary database before querying this table. You can do this by performing the following steps:

On the primary database, determine the current SCN by querying the V$DATABASE view:

SQL> SELECT CURRENT_SCN FROM V$DATABASE@BOSTON;

CURRENT_SCN
-----------
  345162788

Make sure SQL Apply has applied all transactions committed before the CURRENT_SCN returned in the previous query:

SQL> SELECT APPLIED_SCN FROM V$LOGSTDBY_PROGRESS;

APPLIED_SCN
-----------
  345161345

When the APPLIED_SCN returned in this query is greater than the CURRENT_SCN returned in the first query, it is safe to query the newly re-created table.

Rolling a Standby Forward using an RMAN Incremental Backup To Fix The Nologging Changes (document ID 958181.1)

In this Document

Purpose

Scope

Details

STEPS

1. Follow this step-by-step procedure to roll forward a physical standby database for which nologging changes have been applied to a small subset of the database:

2. Follow this step-by-step procedure to roll forward a physical standby database for which nologging changes have been applied to a large portion of the database:

References

APPLIES TO:

Oracle Database-Enterprise Edition-Version 10.2.0.1 to 12.1.0.2 [Release 10.2 to 12.1]

Information in this document applies to any platform.

* Checked for relevance on 16-Jul-2015 *

* Checked for relevance on 27-Oct-2016 *

PURPOSE

This document describes a method of rolling forward a standby database using incremental backups to fix the ORA-1578 and ORA-26040 errors that were caused by a NOLOGGING/UNRECOVERABLE operation.

SCOPE

When a segment is defined with the NOLOGGING attribute and if a NOLOGGING/UNRECOVERABLE operation updates the segment, the online redo log file is updated with minimal information to invalidate the affected blocks when a RECOVERY is later performed.

This kind of NOLOGGING/UNRECOVERABLE operation will mark the affected blocks as corrupt during media recovery on the standby database. Now, when you either activate the standby database, or open the standby database with the read-only option, and attempt to read the range of blocks that are marked as "UNRECOVERABLE", you see error messages similar to the following:

ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)

ORA-01110: data file 1: '/vobs/oracle/dbs/stdby/tbs_1.f'

ORA-26040: Data block was loaded using the NOLOGGING option

In this article we cover the steps to fix the case where nologging changes have been applied to a small subset of the database, and the case where they have been applied to a large portion of the database.

A similar procedure is documented in:

Oracle® Data Guard Concepts and Administration 11g Release 1 (11.1) Part Number B28294-03

Section 13.4 Recovering After the NOLOGGING Clause Is Specified

http://download.oracle.com/docs/cd/B28359_01/server.111/b28294/scenarios.htm#i1015738

DETAILS

STEPS

1. Follow this step-by-step procedure to roll forward a physical standby database for which nologging changes have been applied to a small subset of the database:

1. List the files that have had nologging changes applied by querying the V$DATAFILE view on the standby database. For example:

SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE WHERE FIRST_NONLOGGED_SCN > 0;

     FILE# FIRST_NONLOGGED_SCN
---------- -------------------
         4              225979
         5              230184

2. Stop Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

3. On the standby database, take the data files recorded in step 1 that have had nologging changes offline. Taking these data files offline ensures redo data is not skipped for the corrupt blocks while the incremental backups are performed.

SQL> ALTER DATABASE DATAFILE 4 OFFLINE FOR DROP;

SQL> ALTER DATABASE DATAFILE 5 OFFLINE FOR DROP;

4. Start Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

5. While connected to the primary database as the RMAN target, create an incremental backup for each datafile listed in the FIRST_NONLOGGED_SCN column (recorded in step 1). For example:

RMAN> BACKUP INCREMENTAL FROM SCN 225979 DATAFILE 4 FORMAT '/tmp/ForStandby_%U' TAG 'FOR STANDBY';

RMAN> BACKUP INCREMENTAL FROM SCN 230184 DATAFILE 5 FORMAT '/tmp/ForStandby_%U' TAG 'FOR STANDBY';

6. Transfer all backup sets created on the primary system to the standby system. (Note that there may be more than one backup file created.)

% scp /tmp/ForStandby_* standby:/tmp

7. While connected to the physical standby database as the RMAN target, catalog all incremental backup pieces. For example:

RMAN> CATALOG START WITH '/tmp/ForStandby_';

8. Stop Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

9. Online the datafiles on the standby database

SQL> ALTER DATABASE DATAFILE 4 ONLINE;

SQL> ALTER DATABASE DATAFILE 5 ONLINE;

10. While connected to the physical standby database as the RMAN target, apply the incremental backup sets:

RMAN> RECOVER DATAFILE 4, 5 NOREDO;

11. Query the V$DATAFILE view on the standby database to verify there are no datafiles with nologged changes. The following query should return zero rows

SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE WHERE FIRST_NONLOGGED_SCN > 0;

12. Recreate the Standby Controlfile following:

Note 459411.1 Steps to recreate a Physical Standby Controlfile

13. Remove the incremental backups from the standby system:

RMAN> DELETE BACKUP TAG 'FOR STANDBY';

14. Manually remove the incremental backups from the primary system. For example, the following example uses the Linux rm command:

% rm /tmp/ForStandby_*

15. Start Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

Note: Starting with 12c, we can use the RECOVER DATABASE ... FROM SERVICE clause in RMAN to generate, transfer, and apply the incremental backup in a single step. Please refer to the document below for examples:

Note 1987763.1 ROLLING FORWARD A PHYSICAL STANDBY USING RECOVER FROM SERVICE COMMAND IN 12C

2. Follow this step-by-step procedure to roll forward a physical standby database for which nologging changes have been applied to a large portion of the database:

1. Query the V$DATAFILE view on the standby database to record the lowest FIRST_NONLOGGED_SCN:

SQL> SELECT MIN(FIRST_NONLOGGED_SCN) FROM V$DATAFILE WHERE FIRST_NONLOGGED_SCN > 0;

MIN(FIRST_NONLOGGED_SCN)
------------------------
                  223948

2. Stop Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

3. While connected to the primary database as the RMAN target, create an incremental backup from the lowest FIRST_NONLOGGED_SCN (recorded in step 1):

RMAN> BACKUP INCREMENTAL FROM SCN 223948 DATABASE FORMAT '/tmp/ForStandby_%U' TAG 'FOR STANDBY';

4. Transfer all backup sets created on the primary system to the standby system. (Note that more than one backup file may have been created.) The following example uses the scp command to copy the files:

% scp /tmp/ForStandby_* standby:/tmp

5. While connected to the standby database as the RMAN target, catalog all incremental backup pieces:

RMAN> CATALOG START WITH '/tmp/ForStandby_';

6. While connected to the standby database as the RMAN target, apply the incremental backups:

RMAN> RECOVER DATABASE NOREDO;

7. Query the V$DATAFILE view to verify there are no datafiles with nologged changes. The following query on the standby database should return zero rows:

SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE WHERE FIRST_NONLOGGED_SCN > 0;

8. Recreate the Standby Controlfile following:

Note 459411.1 Steps to recreate a Physical Standby Controlfile

9. Remove the incremental backups from the standby system:

RMAN> DELETE BACKUP TAG 'FOR STANDBY';

10. Manually remove the incremental backups from the primary system. For example, the following removes the backups using the Linux rm command:

% rm /tmp/ForStandby_*

11. Start Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

Note:

If the affected files belong to a READ ONLY tablespace, those files will be ignored during backup. To bypass the issue, at Primary Database, switch the tablespace from read only to read write and back to read only again:

SQL> alter tablespace <tablespace_name> read write;

SQL> alter tablespace <tablespace_name> read only;

REFERENCES

NOTE:794505.1-ORA-1578 / ORA-26040 Corrupt blocks by NOLOGGING-Error explanation and solution

Rolling Forward a Physical Standby Using Recover From Service Command in 12c (document ID 1987763.1)

In this Document

Goal

Solution

References

APPLIES TO:

Oracle Database-Enterprise Edition-Version 12.1.0.1 and later

Information in this document applies to any platform.

GOAL

Rolling Forward a Physical Standby Database Using the RECOVER FROM SERVICE Command

A standby database is a transactionally-consistent copy of the production database. It enables production Oracle database to survive disasters and data corruption. If the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch a standby database to the production role, minimizing the downtime associated with the outage. Moreover, performance of production database can be improved by offloading resource-intensive backup and reporting operations to standby systems. As you can see, it's always desirable to have standby database synchronized with the primary database.

Prior to 12c, in order to roll forward the standby database using incremental backups you would need to:

Create a control file for the standby database on the primary database.

Take an incremental backup on the primary starting from the SCN# of the standby database.

Copy the incremental backup to the standby host and catalog it with RMAN.

Mount the standby database with newly created standby control file.

Cancel managed recovery of the standby database and apply incremental backup to the standby database.

Start managed recovery of standby database.

In 12c, this procedure has been dramatically simplified. Now you can use the RECOVER... FROM SERVICE command to synchronize the physical standby database with the primary database. This command does the following:

Creates an incremental backup containing the changes to the primary database. All changes to data files on the primary database, beginning with the SCN in the standby data file header, are included in the incremental backup.

Transfers the incremental backup over the network to the physical standby database.

Applies the incremental backup to the physical standby database.

This results in rolling forward the standby data files to the same point-in-time as the primary. However, the standby control file still contains old SCN values which are lower than the SCN values in the standby data files. Therefore, to complete the synchronization of the physical standby database, the standby control file needs to be refreshed to update the SCN#.

SOLUTION

Steps to Refresh a Physical Standby Database with Changes Made to the Primary Database

Environment:

Primary Database:

DB_UNIQUE_NAME: prim (net service name 'PRIM')

Standby Database:

DB_UNIQUE_NAME:clone (net service name 'CLONE')

Use the following steps to refresh the physical standby database with changes made to the primary database:

Prerequisites:

Oracle Net connectivity is established between the physical standby database and the primary database.

You can do this by adding an entry corresponding to the primary database in the tnsnames.ora file of the physical standby database.
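For illustration (the host and port below are placeholders, not from the note), the entry on the standby host's tnsnames.ora might look like:

PRIM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = prim))
  )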

The password files on the primary database and the physical standby database are the same.

The COMPATIBLE parameter in the initialization parameter file of the primary database and physical standby database is set to 12.0.

Start RMAN and connect as target to the physical standby database.

Check the current size of the primary database and compare it with the standby: since the standby is behind, you need at least the size difference as free space on the standby. If a data file on the primary has autoextended, the incremental roll-forward will apply the changed blocks and extend the standby file so that it matches the size of the primary file.
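A simple way to compare sizes is to run the same query on both the primary and the standby (a sketch, not part of the note):

SQL> select file#, name, bytes/1024/1024 size_mb from v$datafile order by file#;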

1. Place the physical standby database in MOUNT mode.

SHUTDOWN IMMEDIATE

STARTUP MOUNT

2. Stop the managed recovery processes on the physical standby database.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL

NOTE: If using the broker, you will need to stop MRP through DGMGRL, i.e.:

DGMGRL> EDIT DATABASE '<standby db_unique_name>' SET STATE='APPLY-OFF';

3. Identify the data files on the standby database that are out of sync with the primary.

Primary:

SQL> select HXFIL file_num, substr(HXFNM,1,40), fhscn from x$kcvfh;

  FILE_NUM SUBSTR(HXFNM,1,40)                        FHSCN
---------- ----------------------------------------- --------
         1 /u01/app/oracle/oradata/prim/system01.db  1984501
         3 /u01/app/oracle/oradata/prim/sysaux01.db  1984501
         4 /u01/app/oracle/oradata/prim/undotbs01.d  1984501
         5 /u01/app/oracle/oradata/prim/pdbseed/sys  1733076
         6 /u01/app/oracle/oradata/prim/users01.dbf  1984501
         7 /u01/app/oracle/oradata/prim/pdbseed/sys  1733076
         8 /u01/app/oracle/oradata/prim/pdb1/system  1984501
         9 /u01/app/oracle/oradata/prim/pdb1/sysaux  1984501
        10 /u01/app/oracle/oradata/prim/pdb1/pdb1_u  1984501
        16 /u01/app/oracle/oradata/prim/pdb3/system  1984501
        17 /u01/app/oracle/oradata/prim/pdb3/sysaux  1984501
        18 /u01/app/oracle/oradata/prim/pdb3/pdb1_u  1984501
        19 /u01/app/oracle/oradata/prim/pdb3/test.d  1984501

13 rows selected.

Standby:

SQL> select HXFIL file_num, substr(HXFNM,1,40), fhscn from x$kcvfh;

  FILE_NUM SUBSTR(HXFNM,1,40)                        FHSCN
---------- ----------------------------------------- --------
         1 /u01/app/oracle/oradata/clone/system01.d  1980995
         3 /u01/app/oracle/oradata/clone/sysaux01.d  1980998
         4 /u01/app/oracle/oradata/clone/undotbs01.  1981008
         5 /u01/app/oracle/oradata/clone/pdbseed/sy  1733076
         6 /u01/app/oracle/oradata/clone/users01.db  1981012
         7 /u01/app/oracle/oradata/clone/pdbseed/sy  1733076
         8 /u01/app/oracle/oradata/clone/pdb1/syste  1981015
         9 /u01/app/oracle/oradata/clone/pdb1/sysau  1981021
        10 /u01/app/oracle/oradata/clone/pdb1/pdb1_  1981028
        16 /u01/app/oracle/oradata/clone/pdb3/syste  1981030
        17 /u01/app/oracle/oradata/clone/pdb3/sysau  1981036
        18 /u01/app/oracle/oradata/clone/pdb3/pdb1_  1981043
        19 /u01/app/oracle/oradata/clone/pdb3/test.  1981044

13 rows selected.

Checking the SCNs in the data file headers on the primary (prim) and the standby (clone), we see that while the SCNs of data files 5 and 7 match, for the rest of the data files (1, 3, 4, 6, 8, 9, 10, 16, 17, 18, 19) the standby is lagging behind the primary.

4. Note the current SCN of the physical standby database. This is required to determine, in a later step, if new data files were added to the primary database.

Query the V$DATABASE view to obtain the current SCN using the following command:

SELECT CURRENT_SCN FROM V$DATABASE

5. The RECOVER... FROM SERVICE command refreshes the standby data files and rolls them forward to the same point-in-time as the primary.

[oracle@localhost ~]$ rman target /

Recovery Manager: Release 12.1.0.1.0-Production on Mon Mar 9 18:22:52 2015

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

Connected to target database: PRIM (DBID=4165840403, not open)

RMAN> recover database from service prim noredo using compressed backupset;

Log:

Starting recover at 09-MAR-15

Using target database control file instead of recovery catalog

Allocated channel: ORA_DISK_1

Channel ORA_DISK_1: SID=32 device type=DISK

Skipping datafile 5; already restored to SCN 1733076

Skipping datafile 7; already restored to SCN 1733076

Channel ORA_DISK_1: starting incremental datafile backup set restore

Channel ORA_DISK_1: using compressed network backup set from service prim

Destination for restore of datafile 00001: / u01/app/oracle/oradata/clone/system01.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

Channel ORA_DISK_1: starting incremental datafile backup set restore

Channel ORA_DISK_1: using compressed network backup set from service prim

Destination for restore of datafile 00003: / u01/app/oracle/oradata/clone/sysaux01.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:25

Channel ORA_DISK_1: starting incremental datafile backup set restore

Channel ORA_DISK_1: using compressed network backup set from service prim

Destination for restore of datafile 00004: / u01/app/oracle/oradata/clone/undotbs01.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

Channel ORA_DISK_1: starting incremental datafile backup set restore

Channel ORA_DISK_1: using compressed network backup set from service prim

Destination for restore of datafile 00006: / u01/app/oracle/oradata/clone/users01.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:03

Channel ORA_DISK_1: starting incremental datafile backup set restore

Channel ORA_DISK_1: using compressed network backup set from service prim

Destination for restore of datafile 00008: / u01/app/oracle/oradata/clone/pdb1/system01.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

Channel ORA_DISK_1: starting incremental datafile backup set restore

Channel ORA_DISK_1: using compressed network backup set from service prim

Destination for restore of datafile 00009: / u01/app/oracle/oradata/clone/pdb1/sysaux01.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:15

Channel ORA_DISK_1: starting incremental datafile backup set restore

Channel ORA_DISK_1: using compressed network backup set from service prim

Destination for restore of datafile 00010: / u01/app/oracle/oradata/clone/pdb1/pdb1_users01.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

Channel ORA_DISK_1: starting incremental datafile backup set restore

Channel ORA_DISK_1: using compressed network backup set from service prim

Destination for restore of datafile 00016: / u01/app/oracle/oradata/clone/pdb3/system01.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:15

Channel ORA_DISK_1: starting incremental datafile backup set restore

Channel ORA_DISK_1: using compressed network backup set from service prim

Destination for restore of datafile 00017: / u01/app/oracle/oradata/clone/pdb3/sysaux01.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:15

Channel ORA_DISK_1: starting incremental datafile backup set restore

Channel ORA_DISK_1: using compressed network backup set from service prim

Destination for restore of datafile 00018: / u01/app/oracle/oradata/clone/pdb3/pdb1_users01.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:03

Channel ORA_DISK_1: starting incremental datafile backup set restore

Channel ORA_DISK_1: using compressed network backup set from service prim

Destination for restore of datafile 00019: / u01/app/oracle/oradata/clone/pdb3/test.dbf

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Finished recover at 09-MAR-15

6. Let's check the SCNs of the data files on the primary and the standby now.

Primary
-------

SQL> select HXFIL file_num, substr(HXFNM,1,40), fhscn from x$kcvfh;

  FILE_NUM SUBSTR(HXFNM,1,40)                        FHSCN
---------- ----------------------------------------- --------
         1 /u01/app/oracle/oradata/prim/system01.db  1985174
         3 /u01/app/oracle/oradata/prim/sysaux01.db  1985183
         4 /u01/app/oracle/oradata/prim/undotbs01.d  1985194
         5 /u01/app/oracle/oradata/prim/pdbseed/sys  1733076
         6 /u01/app/oracle/oradata/prim/users01.dbf  1985203
         7 /u01/app/oracle/oradata/prim/pdbseed/sys  1733076
         8 /u01/app/oracle/oradata/prim/pdb1/system  1985206
         9 /u01/app/oracle/oradata/prim/pdb1/sysaux  1985212
        10 /u01/app/oracle/oradata/prim/pdb1/pdb1_u  1985218
        16 /u01/app/oracle/oradata/prim/pdb3/system  1985221
        17 /u01/app/oracle/oradata/prim/pdb3/sysaux  1985343
        18 /u01/app/oracle/oradata/prim/pdb3/pdb1_u  1985350
        19 /u01/app/oracle/oradata/prim/pdb3/test.d  1985354

Standby
-------

RMAN> select HXFIL file_num, substr(HXFNM,1,40), fhscn from x$kcvfh;

  FILE_NUM SUBSTR(HXFNM,1,40)                        FHSCN
---------- ----------------------------------------- --------
         1 /u01/app/oracle/oradata/clone/system01.d  1985174
         3 /u01/app/oracle/oradata/clone/sysaux01.d  1985183
         4 /u01/app/oracle/oradata/clone/undotbs01.  1985194
         5 /u01/app/oracle/oradata/clone/pdbseed/sy  1733076
         6 /u01/app/oracle/oradata/clone/users01.db  1985203
         7 /u01/app/oracle/oradata/clone/pdbseed/sy  1733076
         8 /u01/app/oracle/oradata/clone/pdb1/syste  1985206
         9 /u01/app/oracle/oradata/clone/pdb1/sysau  1985212
        10 /u01/app/oracle/oradata/clone/pdb1/pdb1_  1985218
        16 /u01/app/oracle/oradata/clone/pdb3/syste  1985221
        17 /u01/app/oracle/oradata/clone/pdb3/sysau  1985343
        18 /u01/app/oracle/oradata/clone/pdb3/pdb1_  1985350
        19 /u01/app/oracle/oradata/clone/pdb3/test.  1985354

13 rows selected.

From the above, we can see that the primary and standby SCNs now match.

However, the standby control file still contains old SCN values which are lower than the SCN values in the standby data files.

Therefore, to complete the synchronization of the physical standby database, we must refresh the standby control file to update the SCN#.

7. Use the following commands to shut down the standby database and then start it in NOMOUNT mode.

SHUTDOWN IMMEDIATE

STARTUP NOMOUNT

8. Restore the standby control file by using the control file on the primary database using service prim.

The following command restores the control file on the physical standby database by using the primary database control file.

RESTORE STANDBY CONTROLFILE FROM SERVICE <service_name>;

RMAN> restore standby controlfile from service prim;

Starting restore at 09-MAR-15

Allocated channel: ORA_DISK_1

Channel ORA_DISK_1: SID=20 device type=DISK

Channel ORA_DISK_1: starting datafile backup set restore

Channel ORA_DISK_1: using network backup set from service prim

Channel ORA_DISK_1: restoring control file

Channel ORA_DISK_1: restore complete, elapsed time: 00:00:03

Output file name=/u01/app/oracle/oradata/clone/control01.ctl

Output file name=/u01/app/oracle/fast_recovery_area/clone/control02.ctl

Finished restore at 09-MAR-15

After this step, the names of files in the standby control file are the names that were used in the primary database.

NOTE: Depending on the configuration, the path and/or names of the standby datafiles after the standby controlfile refresh may be correct and thus steps # 9 and # 10 can be skipped.

Mount the standby database using the following command:

RMAN> alter database mount;

Statement processed

Released channel: ORA_DISK_1

RMAN> report schema;

Starting implicit crosscheck backup at 09-MAR-15

Allocated channel: ORA_DISK_1

Channel ORA_DISK_1: SID=21 device type=DISK

Crosschecked 9 objects

Finished implicit crosscheck backup at 09-MAR-15

Starting implicit crosscheck copy at 09-MAR-15

Using channel ORA_DISK_1

Crosschecked 2 objects

Finished implicit crosscheck copy at 09-MAR-15

Searching for all files in the recovery area

Cataloging files...

Cataloging done

List of Cataloged Files

=

File Name: / u01/app/oracle/fast_recovery_area/CLONE/archivelog/2015_03_05/o1_mf_1_17_11q13dm8_.arc

File Name: / u01/app/oracle/fast_recovery_area/CLONE/archivelog/2015_03_05/o1_mf_1_16_10q13dm8_.arc

File Name: / u01/app/oracle/fast_recovery_area/CLONE/archivelog/2015_03_05/o1_mf_1_2_bhk1ctcz_.arc

File Name: / u01/app/oracle/fast_recovery_area/CLONE/archivelog/2015_03_05/o1_mf_1_1_bhk17cw8_.arc

RMAN-06139: WARNING: control file is not current for REPORT SCHEMA

Report of database schema for database with db_unique_name CLONE

List of Permanent Datafiles

= =

File Size (MB) Tablespace RB segs Datafile Name

1 780 SYSTEM * / u01/app/oracle/oradata/prim/system01.dbf

3 730 SYSAUX * / u01/app/oracle/oradata/prim/sysaux01.dbf

4 90 UNDOTBS1 * / u01/app/oracle/oradata/prim/undotbs01.dbf

5 250 PDB$SEED:SYSTEM * / u01/app/oracle/oradata/prim/pdbseed/system01.dbf

6 5 USERS * / u01/app/oracle/oradata/prim/users01.dbf

7 590 PDB$SEED:SYSAUX * / u01/app/oracle/oradata/prim/pdbseed/sysaux01.dbf

8 260 PDB1:SYSTEM * / u01/app/oracle/oradata/prim/pdb1/system01.dbf

9 620 PDB1:SYSAUX * / u01/app/oracle/oradata/prim/pdb1/sysaux01.dbf

10 5 PDB1:USERS * / u01/app/oracle/oradata/prim/pdb1/pdb1_users01.dbf

16 260 PDB3:SYSTEM * / u01/app/oracle/oradata/prim/pdb3/system01.dbf

17 620 PDB3:SYSAUX * / u01/app/oracle/oradata/prim/pdb3/sysaux01.dbf

18 5 PDB3:USERS * / u01/app/oracle/oradata/prim/pdb3/pdb1_users01.dbf

19 50 PDB3:TEST * / u01/app/oracle/oradata/prim/pdb3/test.dbf

List of Temporary Files

=

File Size (MB) Tablespace Maxsize (MB) Tempfile Name

-

1 60 TEMP 32767 / u01/app/oracle/oradata/prim/temp01.dbf

2 20 PDB$SEED:TEMP 32767 / u01/app/oracle/oradata/prim/pdbseed/pdbseed_temp01.dbf

3 373 PDB1:TEMP 32767 / u01/app/oracle/oradata/prim/pdb1/temp01.dbf

4 71 PDB3:TEMP 32767 / u01/app/oracle/oradata/prim/pdb3/temp01.dbf

9. Update the names of the data files and the temp files in the standby control file.

Use the CATALOG command and the SWITCH command to update all the data file names.

RMAN> catalog start with '<datafile location on the standby>';

In this case:

RMAN> catalog start with '/u01/app/oracle/oradata/clone';

Searching for all files that match the pattern / u01/app/oracle/oradata/clone

List of Files Unknown to the Database

= =

File Name: / u01/app/oracle/oradata/clone/pdb1/pdb1_users01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb1/sysaux01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb1/system01.dbf

File Name: / u01/app/oracle/oradata/clone/pdbseed/sysaux01.dbf

File Name: / u01/app/oracle/oradata/clone/pdbseed/system01.dbf

File Name: / u01/app/oracle/oradata/clone/sysaux01.dbf

File Name: / u01/app/oracle/oradata/clone/system01.dbf

File Name: / u01/app/oracle/oradata/clone/undotbs01.dbf

File Name: / u01/app/oracle/oradata/clone/users01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb3/pdb1_users01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb3/sysaux01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb3/system01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb3/test.dbf

Do you really want to catalog the above files (enter YES or NO)? Yes

Cataloging files...

Cataloging done

List of Cataloged Files

=

File Name: / u01/app/oracle/oradata/clone/pdb1/pdb1_users01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb1/sysaux01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb1/system01.dbf

File Name: / u01/app/oracle/oradata/clone/pdbseed/sysaux01.dbf

File Name: / u01/app/oracle/oradata/clone/pdbseed/system01.dbf

File Name: / u01/app/oracle/oradata/clone/sysaux01.dbf

File Name: / u01/app/oracle/oradata/clone/system01.dbf

File Name: / u01/app/oracle/oradata/clone/undotbs01.dbf

File Name: / u01/app/oracle/oradata/clone/users01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb3/pdb1_users01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb3/sysaux01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb3/system01.dbf

File Name: / u01/app/oracle/oradata/clone/pdb3/test.dbf

10. Switch to cataloged copy.

RMAN> SWITCH DATABASE TO COPY;

Datafile 1 switched to datafile copy "/ u01/app/oracle/oradata/clone/system01.dbf"

Datafile 3 switched to datafile copy "/ u01/app/oracle/oradata/clone/sysaux01.dbf"

Datafile 4 switched to datafile copy "/ u01/app/oracle/oradata/clone/undotbs01.dbf"

Datafile 5 switched to datafile copy "/ u01/app/oracle/oradata/clone/pdbseed/system01.dbf"

Datafile 6 switched to datafile copy "/ u01/app/oracle/oradata/clone/users01.dbf"

Datafile 7 switched to datafile copy "/ u01/app/oracle/oradata/clone/pdbseed/sysaux01.dbf"

Datafile 8 switched to datafile copy "/ u01/app/oracle/oradata/clone/pdb1/system01.dbf"

Datafile 9 switched to datafile copy "/ u01/app/oracle/oradata/clone/pdb1/sysaux01.dbf"

Datafile 10 switched to datafile copy "/ u01/app/oracle/oradata/clone/pdb1/pdb1_users01.dbf"

Datafile 16 switched to datafile copy "/ u01/app/oracle/oradata/clone/pdb3/system01.dbf"

Datafile 17 switched to datafile copy "/ u01/app/oracle/oradata/clone/pdb3/sysaux01.dbf"

Datafile 18 switched to datafile copy "/ u01/app/oracle/oradata/clone/pdb3/pdb1_users01.dbf"

Datafile 19 switched to datafile copy "/ u01/app/oracle/oradata/clone/pdb3/test.dbf"

Here, /u01/app/oracle/oradata/clone is the location of the data files on the physical standby database.

All data files must be stored in this location.

11. Use the current SCN returned in Step 4 to determine if new data files were added to the primary database since the standby database was last refreshed. If yes, these data files need to be restored on the standby from the primary database.

The following example assumes that the CURRENT_SCN returned in Step 4 is 1984232 and lists the data files that were created on the primary after that SCN:

SELECT file# FROM V$DATAFILE WHERE creation_change# >= 1984232;

If no files are returned by this query, go to Step 13. If one or more files are returned, restore these data files from the primary database as shown in Step 12.

12. If you are not connected to a recovery catalog, then use the following commands to restore data files that were added to the primary after the standby was last refreshed (assuming datafile 21 added to the primary):

RUN
{
  SET NEWNAME FOR DATABASE TO '/u01/app/oracle/oradata/clone/%b';
  RESTORE DATAFILE 21 FROM SERVICE prim;
}

If you are connected to a recovery catalog, then use the following command to restore data files that were added to the primary after the standby was last refreshed (assuming data file 21 added to the primary):

RESTORE DATAFILE 21 FROM SERVICE prim

13. Update the names of the online redo logs and standby redo logs in the standby control file using one of the following methods (see the sketch after this list):

- Use the ALTER DATABASE CLEAR command to clear the log files in all redo log groups of the standby database. RMAN then recreates all the standby redo logs and the online redo log files.

Note: Clearing log files is recommended only if the standby database does not have access to the online redo log files and standby redo log files of the primary database (for example, when standby and primary are on the same server or use the same ASM disk group). If the standby database has access to the redo log files of the primary database and the redo log file names of the primary database are OMF names, then the ALTER DATABASE command will delete log files on the primary database.

- Use the ALTER DATABASE RENAME FILE command to rename the redo log files. Use a separate command to rename each log file. To rename log files, the STANDBY_FILE_MANAGEMENT initialization parameter must be set to MANUAL. Renaming log files is recommended when the number of online redo log files and standby redo log files is the same in the primary database and the physical standby database.
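As a sketch of the two methods (the group number and file names below are placeholders, not taken from the note):

-- Method 1: clear each redo log group on the standby
SQL> alter database clear logfile group 1;

-- Method 2: rename each log file (requires STANDBY_FILE_MANAGEMENT=MANUAL)
SQL> alter system set standby_file_management=manual;
SQL> alter database rename file '/u01/app/oracle/oradata/prim/redo01.log' to '/u01/app/oracle/oradata/clone/redo01.log';
SQL> alter system set standby_file_management=auto;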

14. (Oracle Active Data Guard only) Perform the following steps to open the physical standby database:

On the primary database, switch the archived redo log files using the following command:

ALTER SYSTEM ARCHIVE LOG CURRENT

On the physical standby database, run the following commands:

RECOVER DATABASE

ALTER DATABASE OPEN READ ONLY

15. Start the managed recovery processes on the physical standby database by using the following command:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION

REFERENCES

NOTE:1646232.1-ORA-19573 when trying to restore to standby with incremental backup From Primary or During any RMAN restore operation

