2025-01-18 Update From: SLTechnology News&Howtos
(I) The role of the LogMiner tool
LogMiner is a tool for analyzing redo log and archive log files. With it you can easily inspect the contents of Oracle redo and archive logs, recover the DDL and DML operations that users performed against the database, and even obtain the SQL statements needed to roll back particular changes. It is especially useful for debugging, auditing, and rolling back changes to a specific object. Its uses can be summarized as follows:
1. Track changes in the database: data changes can be traced offline without affecting online performance.
2. Roll back changes in the database: specific data changes can be undone, reducing the need for point-in-time recovery.
3. Perform fine-grained logical recovery at the transaction level, obtaining the required undo statements by querying the logs.
4. Audit what specific users have done to the database.
Note:
1. LogMiner does not support IOT tables, LONG, LOB, or collection types.
(II) How to use LogMiner
(1) install LogMiner
LogMiner is usually installed along with the database. To check whether it is installed, simply check whether the two packages dbms_logmnr and dbms_logmnr_d exist in the database. If they do, LogMiner is already installed; if not, execute the following two scripts:
$ORACLE_HOME/rdbms/admin/dbmslm.sql
$ORACLE_HOME/rdbms/admin/dbmslmd.sql
It is important to note that both scripts must be run as DBA. The first script is used to create the dbms_logmnr package, which is used to analyze log files, and the second script is used to create the dbms_logmnr_d package, which is used to create data dictionary files.
After creation, the database gains four procedures and three views, as shown below:
1. dbms_logmnr_d.build: create a data dictionary file
2. dbms_logmnr.add_logfile: add log files to be analyzed
3. dbms_logmnr.start_logmnr: start analyzing log files
4. dbms_logmnr.end_logmnr: end the analysis of log files
1. v$logmnr_dictionary: displays information about the dictionary file
2. v$logmnr_logs: displays the list of logs to be analyzed when LogMiner starts
3. v$logmnr_contents: after LogMiner starts, this view shows the contents of the log files.
Note that these views are session-scoped: their contents are visible only in the session that runs LogMiner and cannot be queried from other sessions.
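Because the views are session-private and their contents disappear once the LogMiner session ends, a common pattern is to copy the results into a permanent table first. A minimal sketch (the table name logmnr_result is an arbitrary choice, not from the source):

```sql
-- Persist the session-private LogMiner results into a permanent table,
-- so they survive dbms_logmnr.end_logmnr and can be read from other sessions.
-- Run this in the same session that started LogMiner.
create table logmnr_result as
  select * from v$logmnr_contents;

-- The LogMiner session can then be closed safely:
execute dbms_logmnr.end_logmnr;
```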
(2) To analyze archived logs, the database must be in archive log mode. Check the archiving mode with the following command:
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /home/app/oracle/oradata/orcl/arch_log
Oldest online log sequence     46
Next log sequence to archive   49
Current log sequence           49
(3) Before creating the data dictionary, the spfile parameter utl_file_dir must be set. Because it is a static parameter, the database must be restarted for the change to take effect.
SQL> create directory utlfile as '/home/oracle/logmnr';
Directory created
SQL> alter system set utl_file_dir='/home/oracle/logmnr' scope=spfile;
(4) generate data dictionary file (optional)
Before using LogMiner, you can create a data dictionary file with dbms_logmnr_d.build. Its purpose is to export the data dictionary to a text file. The file is optional, but without it LogMiner displays the data-dictionary parts of its output, such as table names, column names, and other metadata, in hexadecimal form, which is unreadable.
Note that if the tables to be analyzed change, and the data dictionary information changes with them, the data dictionary file must be regenerated before the analysis.
The data dictionary file must be created as DBA, and it is written into the logmnr directory created above.
begin
  dbms_logmnr_d.build(
    dictionary_filename => 'dict.ora',
    dictionary_location => '/home/oracle/logmnr',
    options             => dbms_logmnr_d.store_in_flat_file);
end;
/
(5) add log files to be analyzed
LogMiner can analyze both redo log files and archive log files. Log files are added with the dbms_logmnr.add_logfile procedure: the first file is added with the dbms_logmnr.NEW option, and subsequent files with the dbms_logmnr.ADDFILE option.
Step 1: create the list and add the first file:
BEGIN
  dbms_logmnr.add_logfile(
    logfilename => '/home/app/oracle/oradata/orcl/arch_log/1_49_936489730.dbf',
    options     => dbms_logmnr.NEW);
END;
/
Step 2: add the other files to the list:
BEGIN
  dbms_logmnr.add_logfile(
    logfilename => '/home/app/oracle/oradata/orcl/arch_log/1_50_936489730.dbf',
    options     => dbms_logmnr.ADDFILE);
  dbms_logmnr.add_logfile(
    logfilename => '/home/app/oracle/oradata/orcl/arch_log/1_51_936489730.dbf',
    options     => dbms_logmnr.ADDFILE);
END;
/
(6) Run the LogMiner analysis
LogMiner can analyze the log files with or without restrictions. Without restrictions, all loaded log files are analyzed in full; with restrictions, only the log records that meet the conditions are analyzed.
(6.1) Without restrictions
EXECUTE dbms_logmnr.start_logmnr(dictfilename => '/home/oracle/logmnr/dict.ora');
(6.2) With restrictions
By setting various parameters of dbms_logmnr.start_logmnr, we can narrow the range of log records to be analyzed; for example, the start-time and end-time parameters restrict the analysis to log records within a given period.
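As an illustration, a time-restricted run might look like the following sketch (the dictionary path matches the one created earlier; the time window itself is a made-up example, not from the source):

```sql
-- Sketch: analyze only redo records whose timestamps fall inside the window.
-- starttime/endtime are standard parameters of dbms_logmnr.start_logmnr.
begin
  dbms_logmnr.start_logmnr(
    dictfilename => '/home/oracle/logmnr/dict.ora',
    starttime    => to_date('2017-04-25 08:00:00', 'yyyy-mm-dd hh24:mi:ss'),
    endtime      => to_date('2017-04-25 12:00:00', 'yyyy-mm-dd hh24:mi:ss'));
end;
/
```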
Parameters:

Parameter     Type       Default      Description
------------  ---------  -----------  --------------------------------------------------
StartScn      number     0            analyze only log records with SCN > StartScn
EndScn        number     0            analyze only log records with SCN < EndScn
StartTime     date       1998-01-01   analyze only log records with timestamp > StartTime
EndTime       date       2988-01-01   analyze only log records with timestamp < EndTime
DictFileName  varchar()               location of the data dictionary file

(7) Examine the analysis results

Through step (6) we have obtained the contents of the redo log files; all of the information can be viewed through the dynamic view v$logmnr_contents.

select * from v$logmnr_contents;

Some of the columns of v$logmnr_contents:

Column             Explanation
-----------------  ------------------------------------------------
SCN                SCN of the queried data change
TIMESTAMP          time the data change occurred
COMMIT_TIMESTAMP   time the data change was committed
SEG_OWNER          owner of the segment whose data changed
SEG_NAME           name of the segment whose data changed
SEG_TYPE           type of the segment whose data changed
SEG_TYPE_NAME      type name of the segment whose data changed
TABLE_SPACE        tablespace of the changed segment
ROW_ID             rowid of the changed row
SESSION_INFO       session information at the time of the change
OPERATION          operation recorded in the log
SQL_REDO           SQL statement for the row change recorded in the log (forward operation)
SQL_UNDO           SQL statement to roll back or recover the row change recorded in the log

(III) A LogMiner usage test

(1) Check the archiving status; the next archive log sequence is 53

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /home/app/oracle/oradata/orcl/arch_log
Oldest online log sequence     50
Next log sequence to archive   53
Current log sequence           53
(2) check the location of the data dictionary
SQL> show parameter utl_file_dir
NAME          TYPE    VALUE
------------  ------  --------------------
utl_file_dir  string  /home/oracle/LOGMNR
(3) create a test table
SQL> create table test06
  2  (
  3    id   number,
  4    name varchar(15)
  5  );
Table created
SQL> insert into test06 values (1, 'lijiaman');
1 row inserted
SQL> insert into test06 values (2, 'wangyujie');
1 row inserted
(4) Archive the redo log
SQL> alter system switch logfile;
System altered
(5) Check the archiving status
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /home/app/oracle/oradata/orcl/arch_log
Oldest online log sequence     50
Next log sequence to archive   53
Current log sequence           53
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /home/app/oracle/oradata/orcl/arch_log
Oldest online log sequence     51
Next log sequence to archive   54
Current log sequence           54
(6) create a data dictionary file
begin
  dbms_logmnr_d.build(
    dictionary_filename => 'dictionary.ora',
    dictionary_location => '/home/oracle/LOGMNR',
    options             => dbms_logmnr_d.store_in_flat_file);
end;
/
(7) add archive log files to be analyzed
(7.1) First find the archive files on the OS:
[oracle@localhost LOGMNR]$ cd /home/app/oracle/oradata/orcl/arch_log
[oracle@localhost arch_log]$ ls
1_49_936489730.dbf  1_50_936489730.dbf  1_51_936489730.dbf  1_52_936489730.dbf  1_53_936489730.dbf
(7.2) create a list of archive log files
SQL> begin
  2    dbms_logmnr.add_logfile(
         LogFileName => '/home/app/oracle/oradata/orcl/arch_log/1_53_936489730.dbf',
         Options     => dbms_logmnr.new);
  3  end;
  4  /
PL/SQL procedure successfully completed
(7.3) If you are not sure whether the data to be analyzed is in the archive log added above, add a few more:
BEGIN
  dbms_logmnr.add_logfile(
    logfilename => '/home/app/oracle/oradata/orcl/arch_log/1_51_936489730.dbf',
    options     => dbms_logmnr.ADDFILE);
  dbms_logmnr.add_logfile(
    logfilename => '/home/app/oracle/oradata/orcl/arch_log/1_52_936489730.dbf',
    options     => dbms_logmnr.ADDFILE);
END;
/
PL/SQL procedure successfully completed
(8) start the analysis
SQL> EXECUTE dbms_logmnr.start_logmnr(dictfilename => '/home/oracle/LOGMNR/dictionary.ora');
PL/SQL procedure successfully completed
(9) View the results
SQL> select c.timestamp, c.operation, c.table_space, c.sql_redo, c.sql_undo
  2    from v$logmnr_contents c
  3   where c.seg_owner = 'LIJIAMAN';

TIMESTAMP  OPERATION  TABLE_SPACE  SQL_REDO                                       SQL_UNDO
---------  ---------  -----------  ---------------------------------------------  ---------------------------------------------
2017-4-25  DDL                     create table test06 (id number,
                                   name varchar(15));
2017-4-25  INSERT     USERS        insert into "LIJIAMAN"."TEST06"("ID","NAME")   delete from "LIJIAMAN"."TEST06" where "ID" =
                                   values ('1','lijiaman');                       '1' and "NAME" = 'lijiaman' and ROW
2017-4-25  INSERT     USERS        insert into "LIJIAMAN"."TEST06"("ID","NAME")   delete from "LIJIAMAN"."TEST06" where "ID" =
                                   values ('2','wangyujie');                      '2' and "NAME" = 'wangyujie' and RO
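Building on the result above, the SQL_UNDO column can be queried directly to collect the rollback statements for a specific table. A minimal sketch, reusing the owner and table names from the test above:

```sql
-- Sketch: fetch only the undo statements for the inserts into TEST06.
-- Each returned row is a DELETE that, if executed, would reverse one insert.
select c.sql_undo
  from v$logmnr_contents c
 where c.seg_owner = 'LIJIAMAN'
   and c.seg_name  = 'TEST06'
   and c.operation = 'INSERT';
```

Run this while the LogMiner session is still open, since v$logmnr_contents is emptied once dbms_logmnr.end_logmnr is called.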