Background: the customer reported that an Oracle 10.2.0.4 database running on a Solaris SPARC 10 HA architecture had gone down after the shared storage filled up and was then handled with inappropriate operations (this was later confirmed by the Sun engineers), causing a database exception. When we examined the environment, the shared storage could not be mounted by the cluster software; apart from an alert that the archive log destination was full, no other errors appeared in the database alert log, and the storage engineer confirmed that the storage itself was healthy.
After the mount failure was repaired, the Sun host engineer found that the database's current redo log file was corrupted and could not be read, so we had no choice but to force the database open.
The shared storage uses the ZFS file system.
First, a word about the role of SMON. On the first recovery attempt the instance still crashed after the database was opened, because of SMON; we then added the 10061 event and the _smon_internal_errlimit parameter, which reduced the errors that were crashing the instance. SMON's main duties are listed below (a query for watching its dead transaction recovery follows the list):
Performs local instance recovery
Performs OPS/RAC instance recovery
Services sort segment requests
Performs transaction recovery (rollback)
Cleans up temporary segments that are no longer in use
Cleans up temporary tables used by cursors that have aged out
Cleans up a dead instance's temporary tables
Deletes records for objects that no longer exist from the OBJ$ base table
Cleans up IND$ and INDPART$ if an online index rebuild fails
Coalesces free extents
Shrinks rollback segments when appropriate
Takes rollback segments offline when appropriate
Recovers dead transactions that were skipped during crash/instance recovery because a datafile was unavailable (e.g. offline)
Recovers dead transactions caused by foreground process crashes
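Because it was SMON's dead transaction recovery that repeatedly crashed this instance, it helps to see how much dead transaction work is still outstanding before and after each open. The query below is only a sketch against the internal fixed table X$KTUXE (run as SYSDBA; it is undocumented and its columns can vary by version) and was not part of the original recovery steps:
SQL> select ktuxeusn, ktuxeslt, ktuxesqn, ktuxesiz from x$ktuxe where ktuxecfl like '%DEAD%';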
List of events that control SMON behavior (a sketch of setting one at runtime follows the list):
event='10061 trace name context forever, level 10' -- disable SMON from cleaning up temporary segments
event='10269 trace name context forever, level 10' -- disable SMON from coalescing free space
event='10052 trace name context forever' -- prevent SMON from cleaning up the OBJ$ base table
The hidden parameter _column_tracking_level (column usage tracking) defaults to 1, enabling column usage tracking; setting it to 0 disables column tracking
event='10513 trace name context forever, level 2' -- prohibit SMON from recovering dead transactions; setting event 10513 is very useful during abnormal recovery, but it is not recommended in a normal production environment
event='8105 trace name context forever' -- prohibit SMON from cleaning up IND$ (turns off SMON cleanup for online index builds)
event='12500 trace name context forever, level 10' -- stop SMON from updating SMON_SCN_TIME; after setting event 12500 the records in SMON_SCN_TIME can be deleted manually, and SMON resumes updating SMON_SCN_TIME normally after the instance is restarted
event='10511 trace name context forever, level 1' -- disable SMON from taking undo segments offline (OFFLINE UNDO SEGS); event 10511 does not skip "Fast Ramp Up", it only limits the work SMON does on undo segments, and once it is set all generated undo segments remain ONLINE
event='10512 trace name context forever, level 1' -- disable SMON from shrinking rollback segments
event='10510 trace name context forever, level 1' -- disable SMON's check for rollback segments pending offline
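For completeness: besides putting them in the parameter file (as done below), these diagnostic events can also be toggled on a running instance. The following is just an illustrative sketch using event 10513 (the one that skips dead transaction recovery), not a step from the original procedure:
SQL> alter system set events '10513 trace name context forever, level 2';
SQL> alter system set events '10513 trace name context off';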
Reference: https://www.cnblogs.com/macleanoracle/archive/2013/03/19/2968335.html
Parameter file used:
*._allow_resetlogs_corruption=TRUE
*.audit_file_dest='/opt/oracle/app/admin/orcl/adump'
*.background_dump_dest='/opt/oracle/app/admin/orcl/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/dataora/orcl/control01.ctl','/dataora/orcl/control02.ctl','/dataora/orcl/control03.ctl'
*.core_dump_dest='/opt/oracle/app/admin/orcl/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='/orapool/dataora/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.job_queue_processes=0
*.log_archive_dest_1='location=/orapool/dataora/arch'
*.open_cursors=30000
*.pga_aggregate_target=3424649216
*.processes=1500
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=1655
*.sga_target=1610612736
*.sort_area_size=5242880
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.fast_start_parallel_rollback=FALSE
*.user_dump_dest='/opt/oracle/app/admin/orcl/udump'
event='10061 trace name context forever, level 10'
_smon_internal_errlimit=1000000
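Once the instance is started with this pfile, SHOW PARAMETER will not display the underscore parameters. As a sketch (not part of the original notes), they can be verified as SYSDBA against the X$KSPPI/X$KSPPCV fixed tables:
SQL> select a.ksppinm, b.ksppstvl from x$ksppi a, x$ksppcv b
     where a.indx = b.indx and a.ksppinm in ('_allow_resetlogs_corruption', '_smon_internal_errlimit');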
1. Restore the database
Recover database until cancel
Alter database open resetlogs
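Put together, the open sequence looks roughly like the sketch below; the pfile path is a placeholder for wherever the parameter file above was saved, and when cancel-based recovery prompts for an archived log you enter CANCEL before opening with RESETLOGS:
SQL> startup mount pfile='/opt/oracle/app/admin/orcl/pfile/initorcl.ora'
SQL> recover database until cancel
SQL> alter database open resetlogs;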
2. Encountered an error after the database was opened
ORACLE Instance orcl (pid = 11) - Error 600 encountered while recovering transaction (3, 20) on object 658092.
Sat Jan 25 11:09:23 2020
Errors in file /opt/oracle/app/admin/orcl/bdump/orcl_smon_15656.trc:
ORA-00600: internal error code, arguments: [6006], [1], [], []
The index was rebuilt:
Database mounted.
Database opened.
SQL> select owner, object_name, object_type from dba_objects where object_id = 658092;

OWNER                          OBJECT_NAME                    OBJECT_TYPE
------------------------------ ------------------------------ -------------------
SCOTT                          IND_TEST                       INDEX

SQL> alter index scott.IND_TEST rebuild;
Index altered.
Reference: https://www.eygle.com/archives/2011/07/ora-600_6006_recovery.html
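As a quick sanity check after the rebuild (an extra step, not in the original notes), the index status and any remaining dead transactions can be confirmed:
SQL> select status from dba_indexes where owner = 'SCOTT' and index_name = 'IND_TEST';
SQL> select count(*) from x$ktuxe where ktuxecfl like '%DEAD%';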
3. Export data, rebuild database
4. Export data
nohup exp \'/ as sysdba\' file=/new2-orapool/orcl_20200125_test.dmp owner=test &
5. Rebuild the database, verify the data, and restore business service
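The notes do not show the corresponding import into the rebuilt database; with the classic exp/imp pair it would look roughly like the sketch below (the log file name is an assumption, everything else mirrors the export above):
nohup imp \'/ as sysdba\' file=/new2-orapool/orcl_20200125_test.dmp fromuser=test touser=test log=/new2-orapool/imp_test.log &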