As we all know, Data Guard synchronizes data through the redo log stream, so NOLOGGING operations are not allowed on the primary database; this is why we enable force logging when configuring Data Guard. However, force logging also brings problems (it hurts the execution efficiency of some SQL). For example, when using Data Pump for a migration we want the downtime to be as short as possible, so we may consider a minimal-logging import to speed up the load and then resynchronize the standby afterwards.
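For reference, force logging can be checked and toggled roughly as follows (a minimal sketch run as SYSDBA on the primary; this session is not part of the original example):
SQL> select force_logging from v$database;
SQL> alter database force logging;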
In some scenarios we use NOLOGGING operations to save a great deal of data-insertion time. The problem this causes is that, if the database has a physical standby, a NOLOGGING insert on the primary generates no redo, so nothing is shipped to or applied on the standby, and the standby's copy of those blocks becomes unusable.
In Oracle 11g, if you run into this problem, you can fix it by restoring the affected data files on the standby, as shown in the following example.
On a primary database that has a standby, disable force logging, then create a table and set it to NOLOGGING mode:
SQL> alter database no force logging;
SQL> create table DEMO tablespace users pctfree 99 as select rownum n from xmltable('1 to 1000');
SQL> alter table DEMO nologging;
Then insert data with the /*+ append */ hint and commit:
SQL> insert /*+ append */ into DEMO select rownum n from xmltable('1 to 100000');
SQL> commit;
At this point, querying the table on the standby database returns the following error:
SQL> select count(1) from demo;
select count(1) from demo
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 819)
ORA-01110: data file 4: '/data/data1/ORCL2/datafile/o1_mf_users_3ft1e9qb_.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
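As a side note, the affected blocks can also be enumerated on the standby, for example after an RMAN validate (this extra check is an assumption, not part of the original demo):
RMAN> validate datafile 4;
SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;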
To fix this problem, copy the data file containing the missing data from the primary database to the physical standby.
Step one
1. Query the primary database
SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                          UNRECOVERABLE_CHANGE#
--------------------------------------------- ---------------------
+DATADG/orcl/datafile/system.270.972381717                        0
+DATADG/orcl/datafile/sysaux.265.972381717                        0
+DATADG/orcl/datafile/undotbs1.261.972381717                      0
+DATADG/orcl/datafile/users.259.972381717                   6252054
+DATADG/orcl/datafile/example.264.972381807                       0
+DATADG/orcl/datafile/undotbs2.258.972381927                      0
+DATADG/orcl/datafile/example.266.972400297                       0
+DATADG/orcl/datafile/ax.268.973612569                            0
2. Query the standby database
Sys@ORCL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                                    UNRECOVERABLE_CHANGE#
------------------------------------------------------- ---------------------
/data/data1/ORCL2/datafile/o1_mf_system_3dt1e9op_.dbf                       0
/data/data1/ORCL2/datafile/o1_mf_sysaux_3ct1e9nb_.dbf                       0
/data/data1/ORCL2/datafile/o1_mf_undotbs1_3gt1e9qq_.dbf                     0
/data/data1/ORCL2/datafile/o1_mf_users_3ft1e9qb_.dbf                  5383754
/data/data1/ORCL2/datafile/o1_mf_example_3et1e9ps_.dbf                      0
/data/data1/ORCL2/datafile/o1_mf_undotbs2_3ht1e9r1_.dbf                     0
/data/data1/ORCL2/datafile/o1_mf_example_3at1e9nb_.dbf                      0
/data/data1/ORCL2/datafile/o1_mf_ax_3bt1e9nb_.dbf                           0
3. Compare the query results of the primary and the standby
Compare the UNRECOVERABLE_CHANGE# column in the two query results. If the value on the primary is greater than the value for the same file on the standby, that data file needs to be restored on the standby.
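A quick way to spot such files on the primary (a sketch along the same lines) is to filter v$datafile for nonzero unrecoverable changes:
SQL> select file#, name, unrecoverable_change#, unrecoverable_time
       from v$datafile
      where unrecoverable_change# > 0;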
Step two
Copy the corresponding data file from the primary database to the standby
SQL> alter tablespace users begin backup;
SQL> exit
ASMCMD> cp +DATADG/orcl/datafile/users.259.972381717 /tmp
$ scp /tmp/users.259.972381717 10.10.60.123:/data/data1/ORCL2/datafile/
SQL> alter tablespace users end backup;
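Alternatively, an RMAN image copy could be taken on the primary instead of the begin backup / end backup sequence (a sketch; it assumes the USERS file is datafile 4 on the primary, matching the file number in the standby error above):
RMAN> backup as copy datafile 4 format '/tmp/users.259.972381717';
$ scp /tmp/users.259.972381717 10.10.60.123:/data/data1/ORCL2/datafile/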
Step three
On the standby, rename the old data file to the newly copied file
SQL> alter database recover managed standby database cancel;
SQL> alter system set standby_file_management=manual;  -- this parameter must be set to manual while the standby performs the rename
SQL> alter database rename file '/data/data1/ORCL2/datafile/o1_mf_users_3ft1e9qb_.dbf' to '/data/data1/ORCL2/datafile/users.259.972381717';
SQL> alter system set standby_file_management=auto;
SQL> alter database recover managed standby database using current logfile disconnect from session;
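To confirm that redo apply has resumed after the rename, a quick check on the standby can look like this (an optional sketch, not part of the original steps):
SQL> select process, status, sequence# from v$managed_standby where process like 'MRP%';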
Now the example table DEMO can be queried on the standby:
SQL> select count(1) from demo;

  COUNT(1)
----------
    101000
Starting with version 12.1, RMAN provides a more convenient way: restore database (or datafile) from service, which restores directly from the primary over the network instead of backing up and transferring data files from the primary by hand.
Of course, Oracle's RMAN is smart enough that if a data file is in a normal state, it can skip it based on the data file header; only the files whose blocks were marked corrupt by NOLOGGING operations need to be restored. The restore command does have a FORCE option, but we may not need it here: while real-time apply is still running the data files keep being synchronized, so before restoring we have to stop apply anyway. Once apply is stopped, the data files quickly become stale, and RMAN will not skip them even without the FORCE option, so RESTORE DATABASE FORCE is unnecessary.
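If only a single file is affected, the same 12.1 mechanism can restore just that file over the network (a sketch; datafile 6 is the USERS file in the output below, and primary_db is the TNS alias used there):
RMAN> restore datafile 6 from service 'primary_db';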
Step one
Stop real-time log apply and restart the standby database in mount mode.
SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Step two
Log in to RMAN and use restore database (or datafile) from service to perform the restore:
RMAN> restore database from service 'primary_db';  # primary_db is the TNS alias that the standby uses to connect to the primary
Starting restore at 2018-05-03 17:00:35
Using target database control file instead of recovery catalog
Allocated channel: ORA_DISK_1
Channel ORA_DISK_1: SID=29 device type=DISK
Channel ORA_DISK_1: starting datafile backup set restore
Channel ORA_DISK_1: using network backup set from service primary_db
Channel ORA_DISK_1: specifying datafile(s) to restore from backup set
Channel ORA_DISK_1: restoring datafile 00001 to /data/data1/ORCL2/datafile/o1_mf_system_02t1t9ck_.dbf
Channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Channel ORA_DISK_1: starting datafile backup set restore
Channel ORA_DISK_1: using network backup set from service primary_db
Channel ORA_DISK_1: specifying datafile(s) to restore from backup set
Channel ORA_DISK_1: restoring datafile 00003 to /data/data1/ORCL2/datafile/o1_mf_sysaux_03t1t9d3_.dbf
Channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Channel ORA_DISK_1: starting datafile backup set restore
Channel ORA_DISK_1: using network backup set from service primary_db
Channel ORA_DISK_1: specifying datafile(s) to restore from backup set
Channel ORA_DISK_1: restoring datafile 00004 to /data/data1/ORCL2/datafile/o1_mf_undotbs1_04t1t9di_.dbf
Channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Channel ORA_DISK_1: starting datafile backup set restore
Channel ORA_DISK_1: using network backup set from service primary_db
Channel ORA_DISK_1: specifying datafile(s) to restore from backup set
Channel ORA_DISK_1: restoring datafile 00006 to /data/data1/ORCL2/datafile/o1_mf_users_05t1t9dm_.dbf
Channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 2018-05-03 17:01:34
Of course, remember to restart the real-time apply process on the standby afterwards!
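Concretely, that means restarting managed recovery with the same command used in the 11g example above:
SQL> alter database recover managed standby database using current logfile disconnect from session;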
In 12.2, Oracle provides an even more convenient recovery method. The list of nonlogged blocks on the primary is sent to the standby and recorded in the standby control file; the details can be viewed in the standby's v$nonlogged_block view. Instead of shipping entire data files from the primary, we simply execute one RMAN command on the standby to recover them:
RECOVER DATABASE NONLOGGED BLOCK
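Before running the recovery, the pending block ranges can be inspected on the standby through the view mentioned above (a minimal sketch; only a few columns shown):
SQL> select file#, block#, blocks from v$nonlogged_block;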
Step one
Stop real-time log apply on the standby database
SQL> alter database recover managed standby database cancel;
Step two
Log in to RMAN on the standby and execute
RECOVER DATABASE NONLOGGED BLOCK
Note: before performing this step, make sure the log_archive_config parameter has been set on both the primary and the standby.
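For reference, the parameter can be checked and set roughly as follows on both sides (a sketch; the DB_UNIQUE_NAME values are placeholders, not taken from this environment):
SQL> show parameter log_archive_config
SQL> alter system set log_archive_config='DG_CONFIG=(orcl,orcl_stby)' scope=both;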
RMAN> recover database nonlogged block;
Starting recover at 2018-05-03 14:54:22
Using target database control file instead of recovery catalog
Allocated channel: ORA_DISK_1
Channel ORA_DISK_1: SID=56 device type=DISK
Starting recovery of nonlogged blocks
List of Datafiles
=================
File Status Nonlogged Blocks Blocks Examined Blocks Skipped
---- ------ ---------------- --------------- --------------
1    OK     0                0               107519
3    OK     0                0               262399
4    OK     0                0               149759
5    OK     0                0               31999
6    OK     0                0               42239
7    OK     0                16707           21532
8    OK     0                0               12799
9    OK     0                0               76799
18   OK     0                0               33279
19   OK     0                0               57599
20   OK     0                0               24959
21   OK     0                0               33279
22   OK     0                0               51199
23   OK     0                0               12799
29   OK     0                0               1310719
30   OK     0                0               12799
31   OK     0                0               33279
32   OK     0                0               52479
33   OK     0                0               923519
34   OK     0                16822           8777
35   OK     0                0               12799
37   OK     0                0               24959
Details of nonlogged blocks can be queried from v$nonlogged_block view
Recovery of nonlogged blocks complete, elapsed time: 00:00:08
Finished recover at 2018-05-03 14:54:32
Finally, don't forget to restart the real-time apply process on the standby.
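A quick post-check on the standby (an optional sketch) is to confirm that the view is now empty and that apply is running again:
SQL> select count(*) from v$nonlogged_block;
SQL> alter database recover managed standby database using current logfile disconnect from session;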
To sum up, this 12.2 feature is worth trying in scenarios such as data warehouses. In the past we turned on force logging, which generated a large volume of redo and hurt the execution efficiency of some DML; now we can use NOLOGGING operations to save a lot of data-loading time and then run the standby recovery during idle periods. Note, however, that this approach has a drawback: it greatly reduces the availability of your standby until the nonlogged blocks are recovered. There is always a trade-off!
Author profile:
Chen Kang, database technology expert at Walk Science and Technology, mainly participates in the company's product implementation, testing, maintenance, and optimization.