This article explains how to locate which SQL statements generated a large number of Redo logs. Many DBAs run into this situation in real cases, so the following walks through how to approach it step by step.
First, we need to confirm the time period in which the volume of redo suddenly spiked. Note that generating a large amount of redo in certain periods can be perfectly normal business behavior; for example, a batch of jobs that run concentratedly in the same window every day may produce a large number of archived logs. To analyze an abnormal surge of redo, we need data to compare against in order to pin down the window in which the surge occurred. Narrowing the scope of the analysis is the first step; a reasonably narrowed scope makes it much easier to locate the problem SQL quickly and accurately. The following SQL statements count redo log switch statistics, which indirectly indicate whether a large number of archived logs was generated in a given period.
/* Count the total number of redo log switches per day and compare it with the average */
WITH T AS
 (SELECT TO_CHAR(FIRST_TIME, 'YYYY-MM-DD') AS LOG_GEN_DAY,
         TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'YYYY-MM-DD'),
                            TO_CHAR(FIRST_TIME, 'YYYY-MM-DD'), 1, 0)),
                 '999') AS "LOG_SWITCH_NUM"
    FROM V$LOG_HISTORY
   WHERE FIRST_TIME < TRUNC(SYSDATE)  -- exclude the current day
   GROUP BY TO_CHAR(FIRST_TIME, 'YYYY-MM-DD'))
SELECT T.LOG_GEN_DAY,
       T.LOG_SWITCH_NUM,
       M.AVG_LOG_SWITCH_NUM,
       (T.LOG_SWITCH_NUM - M.AVG_LOG_SWITCH_NUM) AS DIFF_SWITCH_NUM
  FROM T
 CROSS JOIN
       (SELECT TO_CHAR(AVG(T.LOG_SWITCH_NUM), '999') AS AVG_LOG_SWITCH_NUM
          FROM T) M
 ORDER BY T.LOG_GEN_DAY DESC;

/* Count the number of redo log switches per hour for each day */
SELECT TO_CHAR(FIRST_TIME, 'YYYY-MM-DD') DAY,
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22",
       TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23"
  FROM V$LOG_HISTORY
 GROUP BY TO_CHAR(FIRST_TIME, 'YYYY-MM-DD')
 ORDER BY 1 DESC;

As shown in the case below, archived logs surged on 2018-03-26. By comparing the figures both across days and across hours, we can determine that the anomaly occurred between 17:00 and 18:00: compared with the usual workload, a large amount of redo log was generated in that window.

Here is a very good SQL script for analyzing redo log history, shared for reference:

REM ------------------------------------------------------------------------------------------------
REM Author: Riyaj Shamsudeen @OraInternals, LLC
REM www.orainternals.com
REM
REM Functionality: This script is to print redo size rates in a RAC cluster
REM **************
REM
REM Source : AWR tables
REM
REM Execution type: Execute from sqlplus or any other tool.
REM
REM Parameters: No parameters. Uses last snapshot and the one prior snap
REM No implied or explicit warranty
REM
REM Please send me an email to rshamsud@orainternals.com, if you enhance this script :-)
REM This is an open source script and it is free to use and modify.
REM Version 1.20
REM ------------------------------------------------------------------------------------------------
set colsep '|'
set lines 220
alter session set nls_date_format='YYYY-MM-DD HH24:MI';
set pagesize 10000
with redo_data as (
  SELECT instance_number,
         to_date(to_char(redo_date, 'DD-MON-YY-HH24:MI'), 'DD-MON-YY-HH24:MI') redo_dt,
         trunc(redo_size / (1024 * 1024), 2) redo_size_mb
    FROM (SELECT dbid, instance_number, redo_date, redo_size, startup_time
            FROM (SELECT sysst.dbid,
                         sysst.instance_number,
                         begin_interval_time redo_date,
                         startup_time,
                         VALUE - lag(VALUE) OVER (PARTITION BY sysst.dbid, sysst.instance_number, startup_time
                                                  ORDER BY begin_interval_time, sysst.instance_number) redo_size
                    FROM sys.wrh$_sysstat sysst, DBA_HIST_SNAPSHOT snaps
                   WHERE sysst.stat_id = (SELECT stat_id FROM sys.wrh$_stat_name WHERE stat_name = 'redo size')
                     AND snaps.snap_id = sysst.snap_id
                     AND snaps.dbid = sysst.dbid
                     AND sysst.instance_number = snaps.instance_number
                     AND snaps.begin_interval_time > sysdate - 30
                   ORDER BY snaps.snap_id))
)
select instance_number,
       redo_dt,
       redo_size_mb,
       sum(redo_size_mb) over (partition by trunc(redo_dt)) total_daily,
       trunc(sum(redo_size_mb) over (partition by trunc(redo_dt)) / 24) hourly_rate
  from redo_data
 order by redo_dt, instance_number
/
At this stage we have only established that archived logs surged (exploded) in that period; how do we locate the SQL statements responsible? We can use the following SQL to find which objects underwent a large number of block changes during that window. As shown below, there were a large number of block modifications on two objects (the objects may be tables or indexes; in this case the two objects were actually the same table and its primary key index). From this we can basically conclude that DML statements against this object generated the bulk of the redo; of course, some scenarios are more complex and not so easy to pin down.
SELECT TO_CHAR(BEGIN_INTERVAL_TIME, 'YYYY-MM-DD HH24') SNAP_TIME,
       DHSO.OBJECT_NAME,
       SUM(DB_BLOCK_CHANGES_DELTA) BLOCK_CHANGED
  FROM DBA_HIST_SEG_STAT     DHSS,
       DBA_HIST_SEG_STAT_OBJ DHSO,
       DBA_HIST_SNAPSHOT     DHS
 WHERE DHS.SNAP_ID = DHSS.SNAP_ID
   AND DHS.INSTANCE_NUMBER = DHSS.INSTANCE_NUMBER
   AND DHSS.OBJ# = DHSO.OBJ#
   AND DHSS.DATAOBJ# = DHSO.DATAOBJ#
   AND BEGIN_INTERVAL_TIME BETWEEN TO_DATE('2018-03-26 17:00', 'YYYY-MM-DD HH24:MI')
                               AND TO_DATE('2018-03-26 18:00', 'YYYY-MM-DD HH24:MI')
 GROUP BY TO_CHAR(BEGIN_INTERVAL_TIME, 'YYYY-MM-DD HH24'), DHSO.OBJECT_NAME
HAVING SUM(DB_BLOCK_CHANGES_DELTA) > 0
 ORDER BY SUM(DB_BLOCK_CHANGES_DELTA) DESC;
At this point we can generate an AWR report for the window; a short sketch of how to pull the report is shown below. The SQL that produces a large amount of redo usually appears among the top DML statements by buffer gets or by executions. Combining the objects located by the SQL above with the related SQL statements in the report, we can basically conclude that the following two SQL statements generated the large amount of redo (the first one is a package call that performs a large number of DELETE and INSERT operations on this table).
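For reference, here is a minimal sketch of pulling the report for the problem window from SQL*Plus. The 30-minute padding around the window is an assumption to make sure the bracketing snapshots are listed; plug the returned begin/end snap IDs into the standard report scripts.

-- List the AWR snapshots that bracket the problem window (17:00-18:00 on 2018-03-26).
SELECT SNAP_ID, INSTANCE_NUMBER, BEGIN_INTERVAL_TIME, END_INTERVAL_TIME
  FROM DBA_HIST_SNAPSHOT
 WHERE BEGIN_INTERVAL_TIME BETWEEN TO_DATE('2018-03-26 16:30', 'YYYY-MM-DD HH24:MI')
                               AND TO_DATE('2018-03-26 18:30', 'YYYY-MM-DD HH24:MI')
 ORDER BY SNAP_ID;

-- Single-period AWR report for the problem window:
--   @?/rdbms/admin/awrrpt.sql
-- Compare-period report (normal window vs. problem window), referred to later in this article:
--   @?/rdbms/admin/awrddrpt.sql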
If you are still not completely sure at this point, you can also use the following SQL to help determine which SQL generated a large amount of redo. In this case, the SQL statements found in the AWR report above matched the SQL captured by the query below, which provides further confirmation.
Note that this SQL statement executes slowly, and the conditions need to be adjusted before running it: the time range and the specific segment object.
SELECT TO_CHAR(BEGIN_INTERVAL_TIME, 'YYYY-MM-DD HH24') WHEN,
       DBMS_LOB.SUBSTR(SQL_TEXT, 4000, 1) SQL,
       DHSS.INSTANCE_NUMBER INST_ID,
       DHSS.SQL_ID,
       EXECUTIONS_DELTA EXEC_DELTA,
       ROWS_PROCESSED_DELTA ROWS_PROC_DELTA
  FROM DBA_HIST_SQLSTAT  DHSS,
       DBA_HIST_SNAPSHOT DHS,
       DBA_HIST_SQLTEXT  DHST
 WHERE UPPER(DHST.SQL_TEXT) LIKE '%<OBJECT_NAME>%'  -- replace <OBJECT_NAME> with the specific segment object located above
   AND LTRIM(UPPER(DHST.SQL_TEXT)) NOT LIKE 'SELECT%'
   AND DHSS.SNAP_ID = DHS.SNAP_ID
   AND DHSS.INSTANCE_NUMBER = DHS.INSTANCE_NUMBER
   AND DHSS.SQL_ID = DHST.SQL_ID
   AND BEGIN_INTERVAL_TIME BETWEEN TO_DATE('2018-03-26 17:00', 'YYYY-MM-DD HH24:MI')
                               AND TO_DATE('2018-03-26 18:00', 'YYYY-MM-DD HH24:MI');
At this point the analysis has essentially pinpointed the SQL statements; what remains is to communicate with the developers or support staff to confirm whether this is a normal business change or abnormal behavior. If we need to dig deeper, we can use the log mining tool LogMiner to analyze the redo; that analysis is not expanded on here, but a minimal sketch follows the comparison notes below. In practice, I also generated an AWR compare period report (WORKLOAD REPOSITORY COMPARE PERIOD REPORT) between a normal window and the problem window, as shown below; some of the information in it is also useful for comparison and reference. Such comparative analysis is worthwhile in complex scenarios, where the problem SQL may not be accurately located from the top sections of a single AWR report alone.
The screenshot here does not capture the relevant SQL, but it is in fact the SQL statement analyzed above; this comparison report is very useful in complex scenarios.
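For completeness, here is a minimal LogMiner sketch under stated assumptions: the archived log path is a placeholder for one of the logs generated in the problem window, and the online catalog is used as the dictionary. It only summarizes which segments and operations dominate the mined redo, rather than performing a full analysis.

-- Register one archived log from the problem window (path is a placeholder)
-- and start LogMiner with the online catalog as the dictionary.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/problem_window_archive.arc',  -- hypothetical path
                          OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Summarize which segments and operations account for most of the mined redo records.
SELECT SEG_OWNER, SEG_NAME, OPERATION, COUNT(*) REDO_RECORDS
  FROM V$LOGMNR_CONTENTS
 GROUP BY SEG_OWNER, SEG_NAME, OPERATION
 ORDER BY COUNT(*) DESC;

EXEC DBMS_LOGMNR.END_LOGMNR;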
This concludes "how to locate which SQL generated a large number of Redo logs". Thank you for reading.