Environment: a two-node, single-instance Linux + Oracle 11.2.0.3 Data Guard configuration in maximum availability mode, set up a few days ago.
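As background, each node's role and protection mode can be confirmed with a query like the following (a minimal sketch; v$database is the standard view for this):

SQL> select database_role, protection_mode, open_mode from v$database;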
Symptom: the standby was found to be out of sync today. The alert log alert_orcl.log contained error messages, and the MRP process could not be started:
MRP0: Background Media Recovery terminated with error 328
ORA-00328: archived log ends at change 8386238, need later change 8972415
ORA-00334: archived log: '/opt/oracle/fast_recovery_area/DG_BEI/archivelog/2014_09_28/o1_mf_1_830_b2h9mjmn_.arc'
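ORA-00328 means the archived log on disk ends at change 8386238 while recovery needs changes up to 8972415; in other words, the standby's copy of sequence 830 is incomplete. The expected SCN range and size of that log can be checked with a query along these lines (a sketch; sequence 830 is taken from the file name in the error):

SQL> select sequence#, first_change#, next_change#, blocks*block_size as bytes from v$archived_log where sequence# = 830;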
Before this error appeared, the alert log alert_orcl.log had recorded a network disconnect from the primary database:
RFS [4]: Assigned to RFS process 20669
RFS [4]: Possible network disconnect with primary database
Sun Sep 28 13:23:48 2014
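When such disconnects occur, the archive destination status can also be checked on the primary. A minimal sketch, assuming dest_id 2 is the standby destination (consistent with the "dest 2" entries in the log below):

SQL> select dest_id, status, error from v$archive_dest where dest_id = 2;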
The full alert log context for the "MRP0: Background Media Recovery terminated with error 328" error is as follows:
Mon Sep 29 14:32:28 2014
Alter database recover managed standby database using current logfile disconnect from session nodelay
Attempt to start background Managed Standby Recovery process (orcl)
Mon Sep 29 14:32:28 2014
MRP0 started with pid=27, OS id=23710
MRP0: Background Managed Standby Recovery process started (orcl)
Started logmerger process
Mon Sep 29 14:32:33 2014
Managed Standby Recovery starting Real Time Apply
Parallel Media Recovery started with 2 slaves
Waiting for all non-current ORLs to be archived...
All non-current ORLs have been archived.
Clearing online redo logfile 1 /opt/oracle/oradata/orcl/redo01.log
Clearing online log 1 of thread 1 sequence number 835
Clearing online redo logfile 1 complete
Clearing online redo logfile 2 /opt/oracle/oradata/orcl/redo02.log
Clearing online log 2 of thread 1 sequence number 834
Clearing online redo logfile 2 complete
Clearing online redo logfile 3 /opt/oracle/oradata/orcl/redo03.log
Clearing online log 3 of thread 1 sequence number 835
Clearing online redo logfile 3 complete
Media Recovery Log /opt/oracle/fast_recovery_area/DG_BEI/archivelog/2014_09_28/o1_mf_1_830_b2h9mjmn_.arc
Errors with log /opt/oracle/fast_recovery_area/DG_BEI/archivelog/2014_09_28/o1_mf_1_830_b2h9mjmn_.arc
MRP0: Background Media Recovery terminated with error 328
Errors in file /opt/oracle/diag/rdbms/dg_bei/orcl/trace/orcl_pr00_23712.trc:
ORA-00328: archived log ends at change 8386238, need later change 8972415
ORA-00334: archived log: '/opt/oracle/fast_recovery_area/DG_BEI/archivelog/2014_09_28/o1_mf_1_830_b2h9mjmn_.arc'
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Completed: alter database recover managed standby database using current logfile disconnect from session nodelay
MRP0: Background Media Recovery process shutdown (orcl)
Mon Sep 29 14:34:06 2014
RFS [1]: Assigned to RFS process 23726
RFS [1]: Opened log for thread 1 sequence 836 dbid 1356850190 branch 829069458
Archived Log entry 77 added for thread 1 sequence 836 rlc 829069458 ID 0x50df5e0e dest 2:
Mon Sep 29 14:34:08 2014
Primary database is in MAXIMUM AVAILABILITY mode
Changing standby controlfile to RESYNCHRONIZATION level
Standby controlfile consistent with primary
RFS [2]: Assigned to RFS process 23728
RFS [2]: Selected log 4 for thread 1 sequence 838 dbid 1356850190 branch 829069458
Mon Sep 29 14:34:08 2014
RFS [3]: Assigned to RFS process 23730
RFS [3]: Selected log 5 for thread 1 sequence 837 dbid 1356850190 branch 829069458
Mon Sep 29 14:34:08 2014
Archived Log entry 78 added for thread 1 sequence 837 ID 0x50df5e0e dest 1:
Changing standby controlfile to MAXIMUM AVAILABILITY level
RFS [2]: Selected log 5 for thread 1 sequence 839 dbid 1356850190 branch 829069458
Mon Sep 29 14:34:11 2014
Archived Log entry 79 added for thread 1 sequence 838 ID 0x50df5e0e dest 1:
Troubleshooting steps:
1. Check the recovery-related processes on the standby; the MRP process is indeed missing:
SQL> select process, status, sequence# from v$managed_standby;
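On a healthy standby this query returns an MRP0 row (typically with status APPLYING_LOG when real-time apply is active); here no such row comes back. A narrowed check, as a sketch:

SQL> select process, status, sequence# from v$managed_standby where process like 'MRP%';

no rows selected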
2. Check the archived log view on the standby:
SQL> select name, sequence#, applied from v$archived_log;
Strangely, the archived log that raised the error in the alert log is marked as applied in this view.
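To focus on the problem area, the query can be narrowed to the sequences around the failing log (a sketch; the range around sequence 830 comes from the error above):

SQL> select sequence#, applied, completion_time from v$archived_log where sequence# between 825 and 840 order by sequence#;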
3. Comparing the 2014-09-28 archived logs of the primary and the standby, both the file count and the file sizes differ: the standby holds more log files than the primary, yet the problem log is smaller on the standby than on the primary (presumably the network interruption truncated the log while it was being transferred). The sizes can also be compared from SQL, as sketched below.
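Run on both the primary and the standby, a query like the following lists each 2014-09-28 log with its size, making count and size differences easy to spot (a minimal sketch using standard v$archived_log columns):

SQL> select sequence#, name, blocks*block_size as bytes from v$archived_log where completion_time >= date '2014-09-28' and completion_time < date '2014-09-29' order by sequence#;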
4. A Google search suggested the archived log was corrupted in transit; the fix is to copy the affected logs from the primary to the standby and then restart recovery:
4.1 As a precaution, back up the standby's 2014_09_28 archive directory, then delete it.
4.2 Copy the 2014_09_28 archive directory from the primary to the standby with scp.
4.3 Restart managed recovery on the standby:
SQL> alter database recover managed standby database using current logfile disconnect;
Checking the alert log afterwards, everything looks normal.
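To confirm that recovery really restarted, the MRP process can be checked again; a sketch (with real-time apply active, MRP0 typically shows APPLYING_LOG):

SQL> select process, status, thread#, sequence# from v$managed_standby where process = 'MRP0';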
5. On the standby, confirm the archived log apply status:
SQL> select name, sequence#, applied from v$archived_log;
The logs are now being applied normally, though one log still shows as not applied; presumably the set copied from the primary did not include that file (the copy contained only 5 files, while the standby originally had 8). A check of the data shows everything is consistent, so the fault can be considered resolved. A final gap check is sketched below.
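As a final sanity check, a gap query on the standby plus a comparison of the highest applied sequence against the primary's latest archived sequence can confirm the databases are back in sync (a minimal sketch):

-- On the standby: no rows selected means there is no archive gap
SQL> select * from v$archive_gap;
-- On the standby: highest applied sequence
SQL> select max(sequence#) from v$archived_log where applied = 'YES';
-- On the primary: latest archived sequence, for comparison
SQL> select max(sequence#) from v$archived_log;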