SLTechnology News & Howtos (shulou.com), Database — 2025-01-20 update
This article explains how to use LogMiner. The content is simple, clear, and easy to learn; follow along to study how to use LogMiner.
LogMiner can be used in two ways: analyzing DML operations using the data dictionary of the source database, or extracting the LogMiner data dictionary to a dictionary file so that DDL operations can also be analyzed.
Notes:
1. Supplemental logging should be configured before using the logmnr tool; otherwise DDL operations cannot be mined.
2. The utl_file_dir parameter, which is the directory of the logmnr dictionary file, must be configured for mining DDL operations, but is not needed for mining DML only.
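The two prerequisites above can be sketched as follows; the utl_file_dir path is only an example, and the parameter is static, so a restart is required for it to take effect.

```sql
-- Sketch of the two prerequisites (the directory path is illustrative).

-- 1. Enable minimal supplemental logging so DDL and complete DML can be mined:
SELECT supplemental_log_data_min FROM v$database;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- 2. Set utl_file_dir to the directory that will hold the LogMiner
--    dictionary file (needed only when mining DDL; static parameter,
--    takes effect after a restart):
ALTER SYSTEM SET utl_file_dir='/home/oracle/logminer' SCOPE=SPFILE;
```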
The LogMiner tool can analyze both online and offline log files; that is, it can analyze the redo log files of its own database or of another database.
In general, the main uses of the LogMiner tool are:
1. Track database changes: changes can be tracked offline without affecting the performance of the online system.
2. Roll back database changes: specific changes can be rolled back, reducing the need for point-in-time recovery.
3. Optimization and capacity planning: data growth patterns can be analyzed from the data in the log files.
First, determine the time of logical damage to the database. Suppose a user mistakenly drops the important table sales. The execution time and SCN of the misoperation can be located precisely with LogMiner, and the table data can then be fully recovered with time-based or SCN-based recovery.
Second, perform fine-grained logical recovery at the transaction level. Suppose several users perform a series of DML operations on a table and commit their transactions, and one user's DML is erroneous. With LogMiner, each user's DML operations and the corresponding UNDO statements can be obtained, and the erroneous operation can be cancelled by executing its UNDO statement.
Third, perform after-the-fact auditing. LogMiner can track all DML, DDL, and DCL operations of an Oracle database, yielding the order in which the operations were executed, the users who executed them, and so on.
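As a hypothetical sketch of the first use case: once a LogMiner session is running, the SCN of a mistaken DROP TABLE can be located from V$LOGMNR_CONTENTS (the table name sales follows the example above; the column names are the documented V$LOGMNR_CONTENTS columns).

```sql
-- Locate the misoperation: find the DDL record for the dropped table,
-- then use its SCN/TIMESTAMP for SCN- or time-based recovery.
SELECT scn, timestamp, username, sql_redo
  FROM v$logmnr_contents
 WHERE operation = 'DDL'
   AND UPPER(sql_redo) LIKE '%DROP TABLE%SALES%';
```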
1. LogMiner basic object
Source database: the production database that contains the redo logs and archived logs to be analyzed.
Mining database: the database used to perform the LogMiner operations.
LogMiner dictionary: the LogMiner dictionary is used to convert internal object ID numbers and data types into object names and external data formats. A LogMiner dictionary should be generated before analyzing redo and archived logs; otherwise the analysis results cannot be read.
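The dictionary can be supplied in three ways, corresponding to documented DBMS_LOGMNR / DBMS_LOGMNR_D options; a sketch (the file name and directory are examples):

```sql
-- a) Use the online catalog of the source database (simplest option):
EXEC dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

-- b) Extract the dictionary to a flat file (requires utl_file_dir):
EXEC dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => '/home/oracle/logminer');

-- c) Extract the dictionary into the redo log stream itself:
EXEC dbms_logmnr_d.build(options => dbms_logmnr_d.store_in_redo_logs);
```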
2. LogMiner configuration requirements
(1) Source database and analysis database (source database and analysis database can be the same database)
The source database and the analysis database must run on the same hardware platform
The analysis database can be a standalone database or a source database
The version of the analysis database cannot be lower than the version of the source database
The analysis database and the source database must have the same character set.
(2) LogMiner dictionary: the LogMiner dictionary must be generated in the source database.
(3) redo log files
When analyzing multiple redo and archive logs, they must be redo and archive logs of the same source database
When analyzing multiple redo logs and archive logs, they must have the same resetlogs scn
The redo logs and archived logs being analyzed must come from Oracle 8.0 or later.
3. Supplemental logging
The redo log is used for instance recovery and media recovery, and the data required for those operations is recorded automatically. However, a redo application may also need additional column information recorded in the redo log; recording these extra columns is called supplemental logging. By default the Oracle database does not enable any supplemental logging, so by default LogMiner cannot support the following features:
Index clusters, chain rows, and migration rows
Direct path insertion
Extract the LogMiner dictionary to the redo log
Tracking DDL
Generate SQL_REDO and SQL_UNDO information for key columns
LONG and LOB data types.
Therefore, to take full advantage of the features LogMiner provides, supplemental logging must be enabled.
Ideally, the LogMiner dictionary file should be created after all database dictionary changes have been made and before any of the redo log files to be analyzed were generated. Starting with Oracle9i release 1 (9.0.1), LogMiner can dump its dictionary to the redo log files or to a flat file, track DDL operations, and apply DDL changes to the LogMiner dictionary dynamically.
In addition, supplemental logging (at least at the minimal level) should be enabled to ensure that all features provided by LogMiner can be used. For details on using supplemental logging with LogMiner, see Oracle Database Utilities.
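The database-level syntax referred to above is, in sketch form:

```sql
-- Minimal supplemental logging, which the experiments below rely on:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Verify, then disable again if needed:
SELECT supplemental_log_data_min FROM v$database;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
```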
Experiment 1: archiving enabled, supplemental logging not enabled, no LogMiner data dictionary added (DBMS_LOGMNR_D.BUILD not used)
-- View the archive mode and archive path:
SQL > archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 14
Next log sequence to archive 16
Current log sequence 16
SQL >
SQL >
SQL > show parameter recover
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u01/app/oracle/fast_recovery_area/DBDB/newback
db_recovery_file_dest_size           big integer 9G
db_unrecoverable_scn_tracking        boolean     TRUE
recovery_parallelism                 integer     0
SQL >
SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
Session altered.
SQL> select *
  2  from (SELECT NAME,
  3               THREAD#,
  4               SEQUENCE#,
  5               APPLIED,
  6               ARCHIVED,
  7               COMPLETION_TIME
  8          FROM V$ARCHIVED_LOG order by 6 desc) a
  9  where rownum <= 10;
SQL> select group#,THREAD#,SEQUENCE#,MEMBERS,ARCHIVED,STATUS from v$log;
GROUP# THREAD# SEQUENCE# MEMBERS ARC STATUS
-
1 1 16 1 NO CURRENT
2 1 14 1 YES INACTIVE
3 1 15 1 YES INACTIVE
-- query the log group:
SQL> select t1.GROUP#, t1.STATUS, t2.MEMBER, t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';
GROUP# STATUS MEMBER TYPE
-
1 CURRENT /u01/app/oracle/oradata/DBdb/redo01.log ONLINE
The current log group is group# 1, sequence# 16.
-- Simulation operation:
SQL > conn hr/hr
Connected.
SQL >
SQL > create table log_a (x int)
Table created.
SQL > insert into log_a values (1)
1 row created.
SQL > insert into log_a values (2)
1 row created.
SQL > insert into log_a values (3)
1 row created.
SQL > commit
Commit complete.
SQL> update log_a set x=4 where x=1;
1 row updated.
SQL > commit
Commit complete.
SQL > select * from log_a
X
----------
         4
         2
         3
-- query the working log group
SQL > select group#,THREAD#,SEQUENCE#,MEMBERS,ARCHIVED,STATUS from v$log
GROUP# THREAD# SEQUENCE# MEMBERS ARC STATUS
-
1 1 16 1 NO CURRENT
2 1 14 1 YES INACTIVE
3 1 15 1 YES INACTIVE
SQL> select t1.GROUP#, t1.STATUS, t2.MEMBER, t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';
GROUP# STATUS MEMBER TYPE
-
1 CURRENT /u01/app/oracle/oradata/DBdb/redo01.log ONLINE
-- then analyze the online redo log
SQL> exec dbms_logmnr.add_logfile('/u01/app/oracle/oradata/DBdb/redo01.log', dbms_logmnr.new);
PL/SQL procedure successfully completed.
SQL> exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
PL/SQL procedure successfully completed.
SQL > create table l1_Z1 as select * from v$logmnr_contents
Table created.
SQL > exec dbms_logmnr.end_logmnr ()
PL/SQL procedure successfully completed.
-- then check the online redo log of the analysis, and you can see the previous operations in the sql_redo.
SQL> select start_timestamp,sql_redo,sql_undo from l1_Z1 where sql_redo like '%LOG_A%';
START_TIMESTAMP SQL_REDO                                                 SQL_UNDO
--------------- -------------------------------------------------------- --------------------------------------------------------
                delete from "SYS"."OBJ$" where ... "NAME" = 'LOG_A' ...; insert into "SYS"."OBJ$"(...) values (... 'LOG_A' ...);
                insert into "SYS"."OBJ$"(...) values (... 'LOG_A' ...);  delete from "SYS"."OBJ$" where ... "NAME" = 'LOG_A' ...;
                insert into "HR"."LOG_A"("X") values ('1');              delete from "HR"."LOG_A" where "X" = '1' and ROWID = 'AAAWDNAAEAAAAzrAAA';
                insert into "HR"."LOG_A"("X") values ('2');              delete from "HR"."LOG_A" where "X" = '2' and ROWID = 'AAAWDNAAEAAAAzrAAB';
                insert into "HR"."LOG_A"("X") values ('3');              delete from "HR"."LOG_A" where "X" = '3' and ROWID = 'AAAWDNAAEAAAAzrAAC';
(The CREATE TABLE statement itself is not decoded: it shows up only as recursive DML against SYS.OBJ$, abridged above.)
The experiment continues:
-- query:
SQL > select group#,THREAD#,SEQUENCE#,MEMBERS,ARCHIVED,STATUS from v$log
GROUP# THREAD# SEQUENCE# MEMBERS ARC STATUS
-
1 1 16 1 NO CURRENT
2 1 14 1 YES INACTIVE
3 1 15 1 YES INACTIVE
SQL> select t1.GROUP#, t1.STATUS, t2.MEMBER, t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';
GROUP# STATUS MEMBER TYPE
-
1 CURRENT /u01/app/oracle/oradata/DBdb/redo01.log ONLINE
SQL> l
  1  select *
  2  from (SELECT NAME,
  3               THREAD#,
  4               SEQUENCE#,
  5               APPLIED,
  6               ARCHIVED,
  7               COMPLETION_TIME
  8          FROM V$ARCHIVED_LOG order by 6 desc) a
  9* where rownum <= 10
SQL> /
NAME THREAD# SEQUENCE# APPLIED ARC COMPLETION_T
- -
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_15_f6m8flqg_.arc 1 15 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_14_f6m5v9ts_.arc 1 14 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_13_f6m56dgw_.arc 1 13 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_12_f6m4nz1x_.arc 1 12 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_11_f6l9phsx_.arc 1 11 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_10_f6l9pgvh_.arc 1 10 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_9_f6l9pb8o_.arc 1 9 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_8_f6l9p9jl_.arc 1 8 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_7_f6l9p6xp_.arc 1 7 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_6_f6l9p5k2_.arc 1 6 NO YES 25-JAN-18
10 rows selected.
-- switch between log groups:
SQL > alter system switch logfile
System altered.
-- query again
SQL > select group#,THREAD#,SEQUENCE#,MEMBERS,ARCHIVED,STATUS from v$log
GROUP# THREAD# SEQUENCE# MEMBERS ARC STATUS
-
1 1 16 1 YES ACTIVE
2 1 17 1 NO CURRENT
3 1 15 1 YES INACTIVE
SQL> select t1.GROUP#, t1.STATUS, t2.MEMBER, t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';
GROUP# STATUS MEMBER TYPE
-
2 CURRENT /u01/app/oracle/oradata/DBdb/redo02.log ONLINE
SQL >
SQL> l
  1  select *
  2  from (SELECT NAME,
  3               THREAD#,
  4               SEQUENCE#,
  5               APPLIED,
  6               ARCHIVED,
  7               COMPLETION_TIME
  8          FROM V$ARCHIVED_LOG order by 6 desc) a
  9* where rownum <= 10
SQL> /
NAME THREAD# SEQUENCE# APPLIED ARC COMPLETION_T
- -
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_16_f6mbk78z_.arc 1 16 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_15_f6m8flqg_.arc 1 15 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_14_f6m5v9ts_.arc 1 14 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_13_f6m56dgw_.arc 1 13 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_12_f6m4nz1x_.arc 1 12 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_11_f6l9phsx_.arc 1 11 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_10_f6l9pgvh_.arc 1 10 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_9_f6l9pb8o_.arc 1 9 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_8_f6l9p9jl_.arc 1 8 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_7_f6l9p6xp_.arc 1 7 NO YES 25-JAN-18
10 rows selected.
SQL >
SQL > archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 15
Next log sequence to archive 17
Current log sequence 17
SQL >
-- use logminer to analyze archive logs:
SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_16_f6mbk78z_.arc', dbms_logmnr.new);
PL/SQL procedure successfully completed.
SQL> exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
PL/SQL procedure successfully completed.
SQL > create table dt1 as select * from v$logmnr_contents
Table created.
SQL > exec dbms_logmnr.end_logmnr ()
PL/SQL procedure successfully completed.
SQL> select start_timestamp,sql_redo,sql_undo from dt1 where sql_redo like '%LOG_A%';
START_TIMESTAMP SQL_REDO                                                 SQL_UNDO
--------------- -------------------------------------------------------- --------------------------------------------------------
                delete from "SYS"."OBJ$" where ... "NAME" = 'LOG_A' ...; insert into "SYS"."OBJ$"(...) values (... 'LOG_A' ...);
                insert into "SYS"."OBJ$"(...) values (... 'LOG_A' ...);  delete from "SYS"."OBJ$" where ... "NAME" = 'LOG_A' ...;
                insert into "HR"."LOG_A"("X") values ('1');              delete from "HR"."LOG_A" where "X" = '1' and ROWID = 'AAAWDNAAEAAAAzrAAA';
                insert into "HR"."LOG_A"("X") values ('2');              delete from "HR"."LOG_A" where "X" = '2' and ROWID = 'AAAWDNAAEAAAAzrAAB';
                insert into "HR"."LOG_A"("X") values ('3');              delete from "HR"."LOG_A" where "X" = '3' and ROWID = 'AAAWDNAAEAAAAzrAAC';
(As with the online-log analysis, the CREATE TABLE appears only as recursive DML against SYS.OBJ$, abridged above.)
To sum up: without supplemental logging enabled, DDL operations cannot be mined by LogMiner, and DML operations cannot be mined completely.
Experiment 2: enable minimal supplemental logging
Syntax: alter database add [drop] supplemental log data;
SQL > SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE
SUPPLEME
-
NO
SQL > alter database add supplemental log data
Database altered.
SQL > SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE
SUPPLEME
-
YES
PS: if supplemental logging is not enabled here, then when analyzing archived logs you will not see machine, os_name, user_name, and similar information, which makes troubleshooting the mined operations much harder. About supplemental logging: normally the redo log records only the information necessary for recovery, but that information is not enough for some other uses of the redo log. For example, the redo log identifies a row uniquely by rowid rather than by primary key. If we analyze these logs on another database and want to re-execute some of the DML, there may be problems, because the same rowid refers to different contents in different databases. In such cases additional information (columns) must be added to the redo records, and that is supplemental logging.
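Because rowid is not portable between databases, identification-key supplemental logging can be added so that mined SQL identifies rows by key columns instead of only by ROWID; a sketch using the standard ALTER DATABASE clause:

```sql
-- Log primary-key and unique-key columns into the redo stream:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

-- Verify:
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;
```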
-- check:
SQL > conn hr/hr
Connected.
SQL > create table log_b (x int)
Table created.
SQL > insert into log_b values (1)
1 row created.
SQL > insert into log_b values (2)
1 row created.
SQL > insert into log_b values (3)
1 row created.
SQL > commit
Commit complete.
SQL> update log_b set x=4 where x=1;
1 row updated.
SQL > commit
Commit complete.
SQL > select * from log_b
X
----------
         4
         2
         3
SQL >
-- check the log group:
SQL> select t1.GROUP#, t1.STATUS, t2.MEMBER, t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';
GROUP# STATUS MEMBER TYPE
-
2 CURRENT /u01/app/oracle/oradata/DBdb/redo02.log ONLINE
SQL > select group#,THREAD#,SEQUENCE#,MEMBERS,ARCHIVED,STATUS from v$log
GROUP# THREAD# SEQUENCE# MEMBERS ARC STATUS
-
1 1 16 1 YES INACTIVE
2 1 17 1 NO CURRENT
3 1 15 1 YES INACTIVE
SQL> select t1.GROUP#, t1.STATUS, t2.MEMBER, t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';
GROUP# STATUS MEMBER TYPE
-
2 CURRENT /u01/app/oracle/oradata/DBdb/redo02.log ONLINE
SQL >
-- analyze online logs:
SQL> exec dbms_logmnr.add_logfile('/u01/app/oracle/oradata/DBdb/redo02.log', dbms_logmnr.new);
PL/SQL procedure successfully completed.
SQL> exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
PL/SQL procedure successfully completed.
SQL > create table l1_Z2 as select * from v$logmnr_contents
Table created.
SQL > exec dbms_logmnr.end_logmnr ()
PL/SQL procedure successfully completed.
SQL> select start_timestamp,sql_redo,sql_undo from l1_Z2 where sql_redo like '%LOG_B%';
START_TIMESTAMP SQL_REDO                                                 SQL_UNDO
--------------- -------------------------------------------------------- --------------------------------------------------------
                insert into "SYS"."OBJ$"(...) values (... 'LOG_B' ...);  delete from "SYS"."OBJ$" where ... "NAME" = 'LOG_B' ...;
                insert into "HR"."LOG_B"("X") values ('1');              delete from "HR"."LOG_B" where "X" = '1' and ROWID = 'AAAWDQAAEAAAAzzAAA';
                insert into "HR"."LOG_B"("X") values ('2');              delete from "HR"."LOG_B" where "X" = '2' and ROWID = 'AAAWDQAAEAAAAzzAAB';
                insert into "HR"."LOG_B"("X") values ('3');              delete from "HR"."LOG_B" where "X" = '3' and ROWID = 'AAAWDQAAEAAAAzzAAC';
                update "HR"."LOG_B" set "X" = '4' where "X" = '1' and ROWID = 'AAAWDQAAEAAAAzzAAA';   update "HR"."LOG_B" set "X" = '1' where "X" = '4' and ROWID = 'AAAWDQAAEAAAAzzAAA';
(This time the update is mined together with a usable undo statement; the SYS.OBJ$ row is abridged.)
SQL >
SQL > select group#,THREAD#,SEQUENCE#,MEMBERS,ARCHIVED,STATUS from v$log
GROUP# THREAD# SEQUENCE# MEMBERS ARC STATUS
-
1 1 16 1 YES INACTIVE
2 1 17 1 NO CURRENT
3 1 15 1 YES INACTIVE
SQL >
SQL> select t1.GROUP#, t1.STATUS, t2.MEMBER, t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';
GROUP# STATUS MEMBER TYPE
-
2 CURRENT /u01/app/oracle/oradata/DBdb/redo02.log ONLINE
SQL > alter system switch logfile
System altered.
SQL > select group#,THREAD#,SEQUENCE#,MEMBERS,ARCHIVED,STATUS from v$log
GROUP# THREAD# SEQUENCE# MEMBERS ARC STATUS
-
1 1 16 1 YES INACTIVE
2 1 17 1 YES ACTIVE
3 1 18 1 NO CURRENT
SQL> select t1.GROUP#, t1.STATUS, t2.MEMBER, t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';
GROUP# STATUS MEMBER TYPE
-
3 CURRENT /u01/app/oracle/oradata/DBdb/redo03.log ONLINE
SQL >
SQL> l
  1  select *
  2  from (SELECT NAME,
  3               THREAD#,
  4               SEQUENCE#,
  5               APPLIED,
  6               ARCHIVED,
  7               COMPLETION_TIME
  8          FROM V$ARCHIVED_LOG order by 6 desc) a
  9* where rownum <= 10
SQL> /
NAME THREAD# SEQUENCE# APPLIED ARC COMPLETION_T
- -
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_17_f6mcyztb_.arc 1 17 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_16_f6mbk78z_.arc 1 16 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_15_f6m8flqg_.arc 1 15 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_14_f6m5v9ts_.arc 1 14 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_13_f6m56dgw_.arc 1 13 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_12_f6m4nz1x_.arc 1 12 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_11_f6l9phsx_.arc 1 11 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_10_f6l9pgvh_.arc 1 10 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_9_f6l9pb8o_.arc 1 9 NO YES 25-JAN-18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_8_f6l9p9jl_.arc 1 8 NO YES 25-JAN-18
10 rows selected.
SQL >
-- analyze archive logs:
SQL> exec dbms_logmnr.add_logfile('/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_17_f6mcyztb_.arc', dbms_logmnr.new);
PL/SQL procedure successfully completed.
SQL> exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
PL/SQL procedure successfully completed.
SQL> create table dt2 as select * from v$logmnr_contents;
Table created.
SQL > exec dbms_logmnr.end_logmnr ()
PL/SQL procedure successfully completed.
-- check the results of the analysis:
Select * from dt2 where sql_redo like '%LOG_B%' and table_name='LOG_B';
Select table_name,USERNAME,SESSION#,SERIAL#, start_timestamp,sql_redo,sql_undo from dt2 where table_name='LOG_B';
The two SQL statements above differ in their START_TIMESTAMP values and in whether the create-table record is returned.
SQL > select table_name,USERNAME,SESSION#,SERIAL#, start_timestamp,sql_redo,sql_undo from dt2 where table_name='LOG_B'
TABLE_NAME USERNAME SESSION# SERIAL# START_TIMESTAMP SQL_REDO                                            SQL_UNDO
---------- -------- -------- ------- --------------- --------------------------------------------------- ---------------------------------------------------
LOG_B      HR             46      67                 create table log_b (x int);
LOG_B      HR             46      67                 insert into "HR"."LOG_B"("X") values ('1');         delete from "HR"."LOG_B" where "X" = '1' and ROWID = 'AAAWDQAAEAAAAzzAAA';
LOG_B      HR             46      67                 insert into "HR"."LOG_B"("X") values ('2');         delete from "HR"."LOG_B" where "X" = '2' and ROWID = 'AAAWDQAAEAAAAzzAAB';
LOG_B      HR             46      67                 insert into "HR"."LOG_B"("X") values ('3');         delete from "HR"."LOG_B" where "X" = '3' and ROWID = 'AAAWDQAAEAAAAzzAAC';
LOG_B      HR             46      67                 update "HR"."LOG_B" set "X" = '4' where "X" = '1' and ROWID = 'AAAWDQAAEAAAAzzAAA';   update "HR"."LOG_B" set "X" = '1' where "X" = '4' and ROWID = 'AAAWDQAAEAAAAzzAAA';
This experiment shows that with supplemental logging enabled, DML operations are recorded and can be mined completely.
Experiment 3: with supplemental logging enabled, add the LogMiner data dictionary (using DBMS_LOGMNR_D.BUILD)
-- set the utl_file_dir parameter; the LogMiner data dictionary will be stored in this directory
SQL > show parameter utl
NAME TYPE VALUE
-
Create_stored_outlines string
Utl_file_dir string
SQL >
SQL > alter system set utl_file_dir='/home/oracle/logminer' scope=spfile
System altered.
SQL > shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL >
SQL > startup
ORACLE instance started.
Total System Global Area 835104768 bytes
Fixed Size 2257840 bytes
Variable Size 549456976 bytes
Database Buffers 281018368 bytes
Redo Buffers 2371584 bytes
Database mounted.
Database opened.
SQL >
SQL >
SQL > show parameter utl
NAME TYPE VALUE
-
Create_stored_outlines string
Utl_file_dir string / home/oracle/logminer
SQL >
-- create logmnr data dictionary file
SQL> exec dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => '/home/oracle/logminer');
PL/SQL procedure successfully completed.
SQL >
-- query:
SQL > select group#,THREAD#,SEQUENCE#,MEMBERS,ARCHIVED,STATUS from v$log
GROUP# THREAD# SEQUENCE# MEMBERS ARC STATUS
-
1 1 31 1 NO CURRENT
2 1 29 1 YES INACTIVE
3 1 30 1 YES INACTIVE
SQL> select t1.GROUP#, t1.STATUS, t2.MEMBER, t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';
GROUP# STATUS MEMBER TYPE
-
1 CURRENT /u01/app/oracle/oradata/DBdb/redo01.log ONLINE
SQL >
-- the experiment begins:
SQL > conn hr/hr
Connected.
SQL >
SQL > create table log_c (x int)
Table created.
SQL > insert into log_c values (4)
1 row created.
SQL > insert into log_c values (4)
1 row created.
SQL > insert into log_c values (4)
1 row created.
SQL > insert into log_c values (4)
1 row created.
SQL > commit
Commit complete.
SQL> delete log_c where rownum < 2;
SQL> commit;
Commit complete.
SQL> update log_c set x=5 where rownum < 2;
SQL> commit;
Commit complete.
SQL > create table log_c_bak as select * from log_c
Table created.
SQL >
-- query again:
SQL > select group#,THREAD#,SEQUENCE#,MEMBERS,ARCHIVED,STATUS from v$log
GROUP# THREAD# SEQUENCE# MEMBERS ARC STATUS
-
1 1 31 1 NO CURRENT
2 1 29 1 YES INACTIVE
3 1 30 1 YES INACTIVE
SQL> select t1.GROUP#, t1.STATUS, t2.MEMBER, t2.TYPE from v$log t1 inner join v$logfile t2 on t1.GROUP#=t2.GROUP# and t1.STATUS='CURRENT';
GROUP# STATUS MEMBER TYPE
-
1 CURRENT /u01/app/oracle/oradata/DBdb/redo01.log ONLINE
-- perform analysis online log:
SQL> exec DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/app/oracle/oradata/DBdb/redo01.log');
PL/SQL procedure successfully completed.
SQL> exec sys.dbms_logmnr.start_logmnr(dictfilename => '/home/oracle/logminer/dictionary.ora');
PL/SQL procedure successfully completed.
SQL > create table l1_Z5 as select * from v$logmnr_contents
Table created.
SQL > exec dbms_logmnr.end_logmnr ()
PL/SQL procedure successfully completed.
SQL > select table_name,USERNAME,SESSION#,SERIAL#, start_timestamp,sql_redo,sql_undo from l1_Z5 where table_name like 'LOG_C%'
TABLE_NAME USERNAME SESSION# SERIAL# START_TIMEST SQL_REDO SQL_UNDO
- -
LOG_C HR 38 11 create table log_c (x int)
LOG_C_BAK HR 38 11 create table log_c_bak as select * from log_c
-- query the archived logs:
SQL> l
  1  select *
  2  from (SELECT NAME,
  3               THREAD#,
  4               SEQUENCE#,
  5               APPLIED,
  6               ARCHIVED,
  7               COMPLETION_TIME
  8          FROM V$ARCHIVED_LOG order by 6 desc) a
  9* where rownum <= 10
SQL> /
NAME THREAD# SEQUENCE# APPLIED ARC COMPLETION_TIME
- -
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_34_f6mjwpxs_.arc 1 34 NO YES 2018-01-25 19:46:32
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_33_f6mjwjfq_.arc 1 33 NO YES 2018-01-25 19:46:26
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_32_f6mjw7w9_.arc 1 32 NO YES 2018-01-25 19:46:18
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_31_f6mjw4yn_.arc 1 31 NO YES 2018-01-25 19:46:14
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_30_f6mh7zlf_.arc 1 30 NO YES 2018-01-25 19:18:23
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_29_f6mgv88w_.arc 1 29 NO YES 2018-01-25 19:11:38
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_28_f6mgv49x_.arc 1 28 NO YES 2018-01-25 19:11:34
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_27_f6mgtvdf_.arc 1 27 NO YES 2018-01-25 19:11:26
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_26_f6mgtrfy_.arc 1 26 NO YES 2018-01-25 19:11:22
/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_25_f6mghd5p_.arc 1 25 NO YES 2018-01-25 19:05:17
10 rows selected.
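When several of the archived logs listed above need to be mined in one pass, ADD_LOGFILE is called once per file; its Options parameter controls whether the call starts a fresh log-file list or appends to the current one. A sketch, reusing two file names from the listing above:

```sql
-- Start a fresh log-file list with the first archive ...
EXEC DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_33_f6mjwjfq_.arc', -
                             Options     => DBMS_LOGMNR.NEW);

-- ... then append further archives to the same list.
EXEC DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/2018_01_25/o1_mf_1_34_f6mjwpxs_.arc', -
                             Options     => DBMS_LOGMNR.ADDFILE);
```

A subsequent start_logmnr call then mines all registered files together.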
-- Analyze the archived log [using exec sys.dbms_logmnr.start_logmnr(dictfilename => '/home/oracle/logminer/dictionary.ora')]:
SQL> exec DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/...')
PL/SQL procedure successfully completed.
SQL >
SQL> exec sys.dbms_logmnr.start_logmnr(dictfilename => '/home/oracle/logminer/dictionary.ora')
PL/SQL procedure successfully completed.
SQL> create table l1_Z6 as select * from v$logmnr_contents;
Table created.
SQL> exec dbms_logmnr.end_logmnr()
PL/SQL procedure successfully completed.
SQL> select table_name, USERNAME, SESSION#, SERIAL#, start_timestamp, sql_redo, sql_undo from l1_Z6 where table_name like 'LOG_C%';
TABLE_NAME  USERNAME  SESSION#  SERIAL#  START_TIMEST  SQL_REDO                                        SQL_UNDO
----------  --------  --------  -------  ------------  ----------------------------------------------  --------
LOG_C       HR              38       11                create table log_c (x int)
LOG_C_BAK   HR              38       11                create table log_c_bak as select * from log_c
-- Analyze the archived logs using the dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog) option:
SQL> exec DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/app/oracle/fast_recovery_area/DBDB/newback/DBDB/archivelog/...')
PL/SQL procedure successfully completed.
SQL> exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog)
PL/SQL procedure successfully completed.
SQL> create table l1_Z7 as select * from v$logmnr_contents;
Table created.
SQL> exec dbms_logmnr.end_logmnr()
PL/SQL procedure successfully completed.
SQL> select table_name, USERNAME, SESSION#, SERIAL#, start_timestamp, sql_redo, sql_undo from l1_Z7 where table_name like 'LOG_C%';
TABLE_NAME  USERNAME  SESSION#  SERIAL#  START_TIMEST  SQL_REDO                                                                              SQL_UNDO
----------  --------  --------  -------  ------------  ------------------------------------------------------------------------------------  ------------------------------------------------------------------------------------
LOG_C       HR              38       11                create table log_c (x int)
LOG_C       HR              38       11                insert into "HR"."LOG_C"("X") values ('4');                                           delete from "HR"."LOG_C" where "X" = '4' and ROWID = 'AAAWDfAAEAAAA2EAAA';
LOG_C       HR              38       11                insert into "HR"."LOG_C"("X") values ('4');                                           delete from "HR"."LOG_C" where "X" = '4' and ROWID = 'AAAWDfAAEAAAA2EAAB';
LOG_C       HR              38       11                insert into "HR"."LOG_C"("X") values ('4');                                           delete from "HR"."LOG_C" where "X" = '4' and ROWID = 'AAAWDfAAEAAAA2EAAC';
LOG_C       HR              38       11                insert into "HR"."LOG_C"("X") values ('4');                                           delete from "HR"."LOG_C" where "X" = '4' and ROWID = 'AAAWDfAAEAAAA2EAAD';
LOG_C       HR              38       11                delete from "HR"."LOG_C" where "X" = '4' and ROWID = 'AAAWDfAAEAAAA2EAAA';            insert into "HR"."LOG_C"("X") values ('4');
LOG_C       HR              38       11                update "HR"."LOG_C" set "X" = '5' where "X" = '4' and ROWID = 'AAAWDfAAEAAAA2EAAB';   update "HR"."LOG_C" set "X" = '4' where "X" = '5' and ROWID = 'AAAWDfAAEAAAA2EAAB';
LOG_C_BAK   HR              38       11                create table log_c_bak as select * from log_c
LOG_C_BAK   HR              38       11                insert into "HR"."LOG_C_BAK"("X") values ('5');                                       delete from "HR"."LOG_C_BAK" where "X" = '5' and ROWID = 'AAAWDgAAEAAAA2LAAA';
LOG_C_BAK   HR              38       11                insert into "HR"."LOG_C_BAK"("X") values ('4');                                       delete from "HR"."LOG_C_BAK" where "X" = '4' and ROWID = 'AAAWDgAAEAAAA2LAAB';
LOG_C_BAK   HR              38       11                insert into "HR"."LOG_C_BAK"("X") values ('4');                                       delete from "HR"."LOG_C_BAK" where "X" = '4' and ROWID = 'AAAWDgAAEAAAA2LAAC';

11 rows selected.
SQL>
The experiments show that with supplemental logging enabled and the LogMiner dictionary in place, both DDL and DML operations are fully recorded and can be mined. Note: when calling dbms_logmnr.start_logmnr, specifying options => dbms_logmnr.dict_from_online_catalog captures both the DDL and the DML operations, whereas specifying dictfilename captures only the DDL operations.
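Beyond the dictionary choice, start_logmnr can also be bounded by SCN or time so that only the window of interest is mined, which is how the misoperation-location use case from the introduction is narrowed down in practice. A sketch using the online catalog; the SCN values here are illustrative, not taken from the transcript:

```sql
-- Mine only the changes between two SCNs (values are illustrative).
EXEC DBMS_LOGMNR.START_LOGMNR(startScn => 2754823, -
                              endScn   => 2755000, -
                              options  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Only changes inside [startScn, endScn] now appear in v$logmnr_contents.
SELECT scn, operation, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_owner = 'HR';

EXEC DBMS_LOGMNR.END_LOGMNR();
```

The SCN (or timestamp) of the bad statement found this way is then fed to a point-in-time or SCN-based recovery, or its SQL_UNDO is executed directly.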
Thank you for reading. That concludes "how to use logminer"; after studying this article you should have a deeper understanding of how to use LogMiner, though the specifics still need to be verified in practice.