2025-01-17 Update From: SLTechnology News&Howtos > Database
Shulou (Shulou.com) 06/01 Report--
Oracle LogMiner is a practical analysis tool that Oracle has shipped since release 8i. With it, you can easily read the specific contents of Oracle online and archived redo log files; in particular, it can reconstruct all of the DML and DDL statements executed against the database. This makes it especially useful for debugging, auditing, or rolling back a particular transaction.
LogMiner can be used for log analysis, tracking database changes, rolling back changes, recovering data lost to accidental operations, identifying the account that performed a mistaken operation, and, combined with other tools, implementing log-based incremental data transfer at the transaction level.
1. Installation of LogMiner
First, confirm whether LogMiner is installed in the database: log in as a DBA user and check whether the dbms_logmnr and dbms_logmnr_d packages exist. If not, run the following scripts to install them (they must be run as a DBA user):
1. SQL > @$ORACLE_HOME/rdbms/admin/dbmslm.sql
2. SQL > @$ORACLE_HOME/rdbms/admin/dbmslmd.sql
After both scripts run, the LogMiner installation is complete. The first script creates the DBMS_LOGMNR package, which performs the log analysis; the second creates the DBMS_LOGMNR_D package, which builds the data dictionary file.
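To confirm the packages are in place before moving on, you can query the data dictionary (a quick check, assuming a DBA login):
SQL > SELECT object_name, status FROM dba_objects WHERE object_name IN ('DBMS_LOGMNR', 'DBMS_LOGMNR_D') AND object_type = 'PACKAGE';
Both packages should be listed with STATUS = 'VALID'.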
2. Configuration of database
Check to see if the database is in archive mode:
SQL > archive log list
If the database is in non-archive mode (No Archive Mode), switch it to archive mode:
1. Close the database
SQL > shutdown immediate
SQL > startup mount
SQL > alter database archivelog;
Open the database after the switch succeeds:
SQL > alter database open;
Next, enable the database's supplemental logging. Normally, the redo log records only the information needed for recovery, but that is not enough for some other uses of the redo data. For example, redo identifies a row by rowid rather than by primary key; if you analyze the logs on a different database and try to re-execute some of the DML, this becomes a problem, because rowids differ between databases. In that case the redo log needs some additional information (columns) recorded alongside it, and that is what supplemental logging provides.
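Beyond the minimal level, Oracle can also log primary-key columns with every change, which is exactly what makes the reconstructed SQL portable across databases. A hedged example (identification-key logging, standard ALTER DATABASE syntax):
SQL > alter database add supplemental log data (primary key) columns;
With this enabled, every redo record for a table that has a primary key includes the key columns, so the generated SQL_REDO can be replayed without relying on rowid.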
Check to see if the expansion log is enabled:
SQL > select supplemental_log_data_min from v$database;
If the query returns NO, supplemental logging is not enabled; turn it on with:
SQL > alter database add supplemental log data;
Create a user for log analysis and grant privileges:
SQL > CREATE USER logminer IDENTIFIED BY logminer;
SQL > GRANT CONNECT, RESOURCE, DBA TO logminer;
3. Configuration of LogMiner
Use the oracle OS user to create a directory for LogMiner:
cd /u01/oracle/oradata
mkdir logminer
Create the LogMiner dictionary file path:
SQL > CREATE DIRECTORY utlfile AS '/u01/oracle/oradata/logminer';
SQL > alter system set utl_file_dir='/u01/oracle/oradata/logminer' scope=spfile;
Restart the database for the setting to take effect:
SQL > shutdown immediate
SQL > startup
SQL > show parameter utl_file_dir
4. Start analyzing log files
First, analyze online log files
Prepare test data
SQL > conn logminer/logminer
SQL > CREATE TABLE test (id varchar2(10));
SQL > INSERT INTO test (id) VALUES ('000001');
SQL > INSERT INTO test (id) VALUES ('000011');
SQL > commit;
Create a data dictionary file
SQL > EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => '/u01/oracle/oradata/logminer');
View the database's current online log file
SQL > SELECT group#, sequence#, status, first_change#, first_time FROM v$log ORDER BY first_change#;
In this case only redo01 is in the CURRENT state.
SQL > select * from v$logfile;
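To pick out just the member path of the current online log, the file you will feed to add_logfile, the two views can be joined; a minimal sketch:
SQL > SELECT f.member FROM v$logfile f, v$log l WHERE f.group# = l.group# AND l.status = 'CURRENT';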
Add the online log file to be analyzed (substitute the member path returned by v$logfile; <SID> below is a placeholder):
SQL > exec dbms_logmnr.add_logfile('/u01/oracle/oradata/<SID>/redo01.log', dbms_logmnr.new);
Start LogMiner to analyze the log:
SQL > exec dbms_logmnr.start_logmnr(dictfilename => '/u01/oracle/oradata/logminer/dictionary.ora');
View log analysis results
SQL > SELECT sql_redo, sql_undo, seg_owner FROM v$logmnr_contents
WHERE seg_name = 'TEST' AND seg_owner = 'LOGMINER';
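When you have finished reading v$logmnr_contents, it is good practice to close the analysis session, which releases the memory holding the results:
SQL > exec dbms_logmnr.end_logmnr;
After end_logmnr, v$logmnr_contents is no longer queryable until the next start_logmnr call.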
Second, analyze archived logs
Test data preparation
CREATE TABLE test2
(id NUMBER(4) CONSTRAINT pk_test PRIMARY KEY,
name VARCHAR2(10),
job VARCHAR2(9),
mgr NUMBER(4),
hiredate DATE,
sal NUMBER(7,2),
comm NUMBER(7,2),
deptno NUMBER(2));
INSERT INTO test2 VALUES (7369, 'SMITH', 'CLERK', 7902, TO_DATE('17-12-1980','DD-MM-YYYY'), 800, NULL, 20);
INSERT INTO test2 VALUES (7499, 'ALLEN', 'SALESMAN', 7698, TO_DATE('20-2-1981','DD-MM-YYYY'), 1600, 300, 30);
INSERT INTO test2 VALUES (7521, 'WARD', 'SALESMAN', 7698, TO_DATE('22-2-1981','DD-MM-YYYY'), 1250, 500, 30);
INSERT INTO test2 VALUES (7566, 'JONES', 'MANAGER', 7839, TO_DATE('2-4-1981','DD-MM-YYYY'), 2975, NULL, 20);
COMMIT;
Switch logs and archive the current logs for analysis
SQL > ALTER SYSTEM SWITCH LOGFILE;
View the archived log files:
SQL > select sequence#, first_change#, next_change#, name from v$archived_log order by sequence# desc;
Create a data dictionary file
SQL > EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => '/u01/oracle/oradata/logminer');
SQL > exec dbms_logmnr.add_logfile('/u01/oracle/product/11.2.0/dbhome_1/dbs/arch2_130908343318.dbf', dbms_logmnr.new);
SQL > exec dbms_logmnr.start_logmnr(dictfilename => '/u01/oracle/oradata/logminer/dictionary.ora');
View analysis results
SQL > SELECT sql_redo, sql_undo, seg_owner FROM v$logmnr_contents
WHERE seg_name = 'TEST2' AND seg_owner = 'LOGMINER';
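start_logmnr also accepts a time (or SCN) window and an options mask, which is useful for narrowing a large set of archives to a known incident window. A sketch (the starttime/endtime values below are hypothetical):
SQL > exec dbms_logmnr.start_logmnr(dictfilename => '/u01/oracle/oradata/logminer/dictionary.ora', starttime => to_date('2013-09-08 10:00:00','yyyy-mm-dd hh24:mi:ss'), endtime => to_date('2013-09-08 11:00:00','yyyy-mm-dd hh24:mi:ss'), options => dbms_logmnr.committed_data_only);
The committed_data_only option groups rows by transaction and skips uncommitted or rolled-back changes.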
If you need to analyze many archived logs at once, add them in a single PL/SQL block:
BEGIN
  dbms_logmnr.add_logfile(
    '/u01/oracle/product/11.2.0/dbhome_1/dbs/arch2_130908343318.dbf',
    DBMS_LOGMNR.new);
  dbms_logmnr.add_logfile(
    '/u01/oracle/product/11.2.0/dbhome_1/dbs/arch2_130908343318.dbf',
    DBMS_LOGMNR.addfile);
  dbms_logmnr.add_logfile(
    '/u01/oracle/product/11.2.0/dbhome_1/dbs/arch2_130908343318.dbf',
    DBMS_LOGMNR.addfile);
END;
/
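If the source database is open and you only need to mine its own recent logs, you can skip the flat-file dictionary entirely and read the dictionary from the online catalog:
SQL > exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
This avoids the utl_file_dir setup described above, but the logs must then be analyzed on the same database that generated them.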