Using LogMiner to recover from a mis-operation

1 Introduction

LogMiner is a tool provided by the Oracle database for analyzing the transaction operations recorded in redo logs and archived logs.
LogMiner has been shipped with Oracle since 8i for analyzing redo log information. It consists of two packages, DBMS_LOGMNR and DBMS_LOGMNR_D, and it can analyze both online redo log files and archived log files. The analysis requires a data dictionary; normally a data dictionary file is used, and from 10g onward the online data dictionary can be used as well.
LogMiner can also analyze the redo log files of other databases, but the data dictionary of the database that produced the redo must be used, otherwise the output is unreadable. In addition, the analyzed database should preferably run on the same operating system platform as the database running LogMiner, with the same block size.
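If the analysis must run on a different database, the dictionary can be extracted to a flat file with DBMS_LOGMNR_D.BUILD. A minimal sketch, assuming a hypothetical directory /u01/app/oracle/logmnr that is already listed in the utl_file_dir initialization parameter:

SYS@test > exec dbms_logmnr_d.build('dictionary.ora', '/u01/app/oracle/logmnr', dbms_logmnr_d.store_in_flat_file);

The resulting file would later be handed to dbms_logmnr.start_logmnr through its dictfilename parameter.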
Functions of LogMiner

1) Determine when the database was logically damaged

LogMiner can pinpoint the execution time and SCN of a mis-operation, after which the table data can be fully recovered with time-based or SCN-based recovery.
SQL > select scn_to_timestamp(2599788) from dual;

SCN_TO_TIMESTAMP(2599788)
---------------------------------------------------------------------------
2016-05-01 06:06:01

SQL > select timestamp_to_scn(to_timestamp('2016-05-01 06:06:01','yyyy-mm-dd hh24:mi:ss')) from dual;

TIMESTAMP_TO_SCN(TO_TIMESTAMP('2016-05-0106:06:01','YYYY-MM-DDHH24:MI:SS'))
----------------------------------------------------------------------------
2599788
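Once the SCN or timestamp of the mis-operation is known, a flashback query is one quick check before committing to a full recovery: it reads a table as it stood at that SCN, provided the required undo has not yet aged out. A sketch against a hypothetical table t:

SQL > select * from t as of scn 2599788;
SQL > select * from t as of timestamp to_timestamp('2016-05-01 06:06:01','yyyy-mm-dd hh24:mi:ss');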
2) Determine fine-grained recovery operations at the transaction level

LogMiner can extract any user's DML operations together with the corresponding UNDO statements, so a user's erroneous operation can be reversed by executing the UNDO statements.
3) Perform retrospective auditing

LogMiner can track all DML, DDL, and DCL operations against the Oracle database, yielding information such as when the operations were executed and which users performed them.
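For the auditing use case, once a LogMiner session is running (see section 3 below), v$logmnr_contents exposes who did what and when. A minimal sketch; note that USERNAME is often empty unless supplemental logging is enabled:

SQL > select username, timestamp, operation, seg_owner, seg_name, sql_redo
  2    from v$logmnr_contents
  3   where operation in ('INSERT','UPDATE','DELETE','DDL');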
Supplemental logging

The redo log is used for instance recovery and media recovery, and the data required for those operations is recorded automatically. However, a redo-based application such as LogMiner may need additional columns recorded in the redo log; recording these extra columns is called supplemental logging.

By default, supplemental logging is not enabled in an Oracle database, so by default LogMiner does not support the following features:
1) Index clusters, chained rows, and migrated rows
2) Direct-path inserts
3) Extracting the LogMiner dictionary to the redo log
4) Tracking DDL
5) Generating SQL_REDO and SQL_UNDO information for key columns
6) LONG and LOB data types
- How to enable supplemental logging:
SYS@test > select supplemental_log_data_min from v$database;

SUPPLEME
--------
NO

SYS@test > alter database add supplemental log data;

Database altered.

SYS@test > select supplemental_log_data_min from v$database;

SUPPLEME
--------
YES
- How to disable supplemental logging:
SYS@test > alter database drop supplemental log data;

Database altered.

SYS@test > select supplemental_log_data_min from v$database;

SUPPLEME
--------
NO
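Minimal supplemental logging, as enabled above, is usually enough for LogMiner itself. If the generated undo SQL should identify rows by primary key rather than by ROWID (useful when replaying the statements on another database, where ROWIDs differ), identification key logging can be added as well; a sketch:

SYS@test > alter database add supplemental log data (primary key) columns;
SYS@test > select supplemental_log_data_pk, supplemental_log_data_ui from v$database;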
2 Environment preparation (data deleted by mistake and committed)

SYS@test > create user shall identified by shall;

User created.

SYS@test > grant connect,resource to shall;

Grant succeeded.

SHALL@test > create table test (id number, name varchar2(20));

Table created.
SHALL@test > begin
  2  for i in 1..100000 loop
  3  insert into test values (i,'zhong');
  4  end loop;
  5  commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.
SHALL@test > select count(*) from test;

  COUNT(*)
----------
    100000

SHALL@test > select * from test where id = 999;

        ID NAME
---------- --------------------
       999 zhong

SHALL@test > delete test;

100000 rows deleted.

SHALL@test > commit;

Commit complete.

SHALL@test > select * from test;

no rows selected
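In practice, the moment such a mistake is noticed it is worth recording the current time and SCN, so that the mining (or a flashback) can be bounded later. A sketch using the v$database column available from 10g onward:

SYS@test > select sysdate, current_scn from v$database;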
3 Start mining to retrieve the committed deleted data

(1) Check the supplemental logging status of the database (not enabled here):

SYS@test > select supplemental_log_data_min from v$database;

SUPPLEME
--------
NO
(2) Find the redo log file locations:

SYS@test > select member from v$logfile;

MEMBER
--------------------------------------------------
/u01/app/oracle/oradata/orcl/redo01_1.log
/u01/app/oracle/oradata/orcl/redo02_1.log
/u01/app/oracle/oradata/orcl/redo03_1.log
(3) Build the log mining list:

SYS@test > exec dbms_logmnr.add_logfile('/u01/app/oracle/oradata/orcl/redo01_1.log');

PL/SQL procedure successfully completed.

SYS@test > exec dbms_logmnr.add_logfile('/u01/app/oracle/oradata/orcl/redo02_1.log');

PL/SQL procedure successfully completed.

SYS@test > exec dbms_logmnr.add_logfile('/u01/app/oracle/oradata/orcl/redo03_1.log');

PL/SQL procedure successfully completed.
- If the mistake has just happened and the log has not been switched yet, it is enough to add only the current redo log (a combined query for locating it is sketched after the listings below).
SYS@test > select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 INACTIVE
         3 CURRENT

SYS@test > col member for a50
SYS@test > select group#,member,type from v$logfile;

    GROUP# MEMBER                                             TYPE
---------- -------------------------------------------------- -------
         1 /u01/app/oracle/oradata/orcl/redo01_1.log          ONLINE
         2 /u01/app/oracle/oradata/orcl/redo02_1.log          ONLINE
         3 /u01/app/oracle/oradata/orcl/redo03_1.log          ONLINE
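The two queries above can be combined to return the current redo member directly; a sketch:

SYS@test > select f.member
  2    from v$log l, v$logfile f
  3   where l.group# = f.group#
  4     and l.status = 'CURRENT';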
(4) Start mining:

SYS@test > exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

PL/SQL procedure successfully completed.
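On a busy system the unbounded session above can return a very large number of rows. start_logmnr also accepts SCN (or time) bounds; a sketch with hypothetical bounds around the mis-operation:

SYS@test > begin
  2  dbms_logmnr.start_logmnr(
  3    startscn => 2599788,  -- hypothetical lower bound
  4    endscn   => 2600000,  -- hypothetical upper bound
  5    options  => dbms_logmnr.dict_from_online_catalog);
  6  end;
  7  /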
(5) Query the redo SQL and undo SQL from v$logmnr_contents.

SYS@test > select sql_redo from v$logmnr_contents where lower(sql_redo) like '%delete%' and seg_name='TEST';

The redo data are as follows:

SQL_REDO
--------------------------------------------------------------------------------
delete from "SHALL"."TEST" where "ID" = '42016' and "NAME" = 'zhong' and ROWID =
'AAAWGEAAEAAAOSfABL';
delete from "SHALL"."TEST" where "ID" = '42017' and "NAME" = 'zhong' and ROWID =
'AAAWGEAAEAAAOSfABM';
delete from "SHALL"."TEST" where "ID" = '42018' and "NAME" = 'zhong' and ROWID =
'AAAWGEAAEAAAOSfABN';
delete from "SHALL"."TEST" where "ID" = '42019' and "NAME" = 'zhong' and ROWID =
The undo data (inserts that reverse the deletes) are as follows:

SYS@test > select sql_undo from v$logmnr_contents where lower(sql_redo) like '%delete%' and seg_name='TEST';

SQL_UNDO
--------------------------------------------------------------------------------
insert into "SHALL"."TEST"("ID","NAME") values
insert into "SHALL"."TEST"("ID","NAME") values
insert into "SHALL"."TEST"("ID","NAME") values
insert into "SHALL"."TEST"("ID","NAME") values
insert into "SHALL"."TEST"("ID","NAME") values
insert into "SHALL"."TEST"("ID","NAME") values
insert into "SHALL"."TEST"("ID","NAME") values
insert into "SHALL"."TEST"("ID","NAME") values
Save the undo SQL into a regular table so that it remains available after the mining session ends:

SYS@test > set linesize 200
SYS@test > set pagesize 10000
SYS@test > create table shall as select sql_undo from v$logmnr_contents where lower(sql_redo) like '%delete%' and seg_name='TEST';
SYS@test > select * from shall;
(6) Finally, end the mining session:

SYS@test > exec dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.
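To actually restore the rows, the saved undo statements can be replayed. A minimal sketch, assuming the table shall created above holds the SQL_UNDO column; the rtrim strips the trailing semicolon that LogMiner appends in some versions, since EXECUTE IMMEDIATE does not accept it:

SYS@test > begin
  2  for r in (select sql_undo from shall) loop
  3    execute immediate rtrim(r.sql_undo, ';');  -- strip trailing ';' if present
  4  end loop;
  5  commit;
  6  end;
  7  /

SYS@test > select count(*) from shall.test;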