SLTechnology News & Howtos > Database (2025-01-16 update)
Shulou (Shulou.com) 05/31 Report
This article walks through the main sections of an Oracle AWR report and explains how to read each one. The approach is simple and practical; follow along section by section.
1. AWR report header information
DB Name  DB Id       Unique Name  Role     Edition  Release     RAC  CDB
KTDB     1107793954  ktdb         PRIMARY  EE       19.0.0.0.0  YES  NO

Instance  Inst Num  Startup Time
ktdb2     2         02-Mar-20 19:35

Host Name  Platform          CPUs  Cores  Sockets  Memory (GB)
hn-ekdb2   Linux x86 64-bit  80    80     4        753.98

             Snap Id  Snap Time           Sessions  Cursors/Session  Instances
Begin Snap:  5339     14-May-20 08:00:03  15        9.5              4
End Snap:    5342     14-May-20 11:00:14  16        8.6              4
Elapsed:     180.17 (mins)
DB Time:     1.23 (mins)
DB Name: database name. DB Id: database ID.
Elapsed: the length of the snapshot interval.
DB Time: time spent by user sessions doing work, excluding time consumed by Oracle background processes.
Here DB Time is much smaller than Elapsed, which means the database was relatively idle: over 180 minutes (spanning three snapshots), sessions accumulated only 1.23 minutes of DB Time.
For batch systems, however, the database workload is concentrated in particular time windows. If the snapshot interval misses that window, or spans so much time that it includes long idle periods, the analysis is meaningless. Choosing an interval that actually contains the performance problem is therefore critical.
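The idle-versus-busy reasoning above can be sketched as a quick calculation. This is a minimal illustrative helper (not part of AWR itself), using the figures from the report header:

```python
# Rough gauge of how busy an instance was during an AWR window.
# db_time_min and elapsed_min are in minutes, as shown in the report header.
def db_load_percent(db_time_min, elapsed_min):
    """DB Time as a percentage of the wall-clock snapshot interval."""
    return db_time_min / elapsed_min * 100

# From the header: 1.23 min of DB Time in a 180.17 min window.
print(round(db_load_percent(1.23, 180.17), 2))  # 0.68 -> the database was nearly idle
```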
2. Load Profile
                               Per Second  Per Transaction  Per Exec  Per Call
DB Time (s):                          0.0              1.2      0.00      0.01
DB CPU (s):                           0.0              0.6      0.00      0.01
Background CPU (s):                   0.1             21.8      0.03      0.00
Redo size (bytes):                1,495.0        256,536.7
Logical read (blocks):              384.5         65,979.0
Block changes:                       29.3          5,026.5
Physical read (blocks):              18.0          3,095.9
Physical write (blocks):              0.6             93.7
Read IO requests:                     3.8            656.2
Write IO requests:                    0.3             53.9
Read IO (MB):                         0.1             24.2
Write IO (MB):                        0.0              0.7
IM scan rows:                         0.0              0.0
Session Logical Read IM:              0.0              0.0
Global Cache blocks received:         3.9            667.2
Global Cache blocks served:         191.8         32,906.7
User calls:                           0.6             98.2
Parses (SQL):                         3.5            598.3
Hard parses (SQL):                    0.0              1.4
SQL Work Area (MB):                   0.0              3.6
Logons:                               0.1             18.7
User logons:                          0.0              0.4
Executes (SQL):                       3.7            639.6
Rollbacks:                            0.0              0.0
Transactions:                         0.0
Per Second and Per Transaction: these two columns break the database resource load down into load per second and load per transaction.
Redo size: redo generated per second / per transaction (in bytes); a measure of how busy the database is.
Logical reads: blocks read logically per second / per transaction.
Block changes: blocks changed per second / per transaction.
Physical reads: physical block reads per second / per transaction.
Physical writes: physical block writes per second / per transaction.
User calls: user calls per second / per transaction.
Parses: parses per second / per transaction.
Hard parses: hard parses per second / per transaction.
SQL Work Area: work-area (sort/hash) memory used per second / per transaction.
Logons: logins per second / per transaction.
Executes: SQL executions per second / per transaction.
Rollbacks: rollbacks per second / per transaction.
Transactions: transactions per second.
It should be noted that:
1) Logical reads and physical reads: dividing one by the other also gives the average proportion of logical reads that turn into physical reads. Here each transaction performs on average 65,979 logical reads, a figure that should be as small as possible.
2) Parses and hard parses: the table shows an average of 3.5 parses per second (above roughly 100/s deserves attention) and essentially 0 hard parses per second (above roughly 10/s deserves attention). In other words, the CPU handles almost no new SQL per second, so the instance can be considered idle.
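The two rule-of-thumb thresholds just mentioned can be applied mechanically. A minimal sketch (the function name and structure are illustrative, not from any Oracle tool):

```python
# Apply the text's rule-of-thumb thresholds to Load Profile per-second rates.
def parse_warnings(parses_per_sec, hard_parses_per_sec):
    warnings = []
    if parses_per_sec > 100:        # total parse rate worth noting above ~100/s
        warnings.append("high parse rate")
    if hard_parses_per_sec > 10:    # hard parse rate worth noting above ~10/s
        warnings.append("high hard parse rate")
    return warnings

print(parse_warnings(3.5, 0.0))    # [] -> nothing to worry about in this report
print(parse_warnings(150, 20))     # both thresholds exceeded
```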
3. Instance Efficiency Percentages
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %:              99.99    Redo NoWait %:        100.00
Buffer Hit %:                 95.96    In-memory Sort %:     100.00
Library Hit %:                99.57    Soft Parse %:          99.76
Execute to Parse %:            6.46    Latch Hit %:           99.95
Parse CPU to Parse Elapsd %:  25.76    % Non-Parse CPU:       99.74
Flash Cache Hit %:             0.00
Buffer Nowait %: the percentage of buffer gets satisfied from memory without waiting (should not be below 99%).
Buffer Hit %: the percentage of blocks found in the buffer cache, i.e. the memory block hit rate (should not be below 99%).
Library Hit %: the hit rate for SQL lookups in the shared pool (should not be below 95%); if it is too low, there may be library cache contention.
Execute to Parse %: the ratio of statement executions to parses. For good SQL reuse this should be high: the higher the value, the more often a cursor is re-executed after a single parse. Here it is 6.46%, meaning 93.54% of executions involved a fresh parse.
Parse CPU to Parse Elapsd %: the share of total parse elapsed time that was actually spent on CPU.
Redo NoWait %: the percentage of redo entries written to the log buffer without waiting.
In-memory Sort %: the percentage of sorts done in memory; if too low, many sorts are spilling to the temporary tablespace (consider increasing the PGA).
Soft Parse %: the percentage of parses that were soft, i.e. soft / (soft + hard). Too low means poor SQL reuse; adjust the application to use bind variables (should not be below 95%).
Latch Hit %: a latch is a lock protecting memory structures; think of it as a server process acquiring permission to access a memory structure. If this value is below 95%, the database has a serious problem.
% Non-Parse CPU: CPU time spent actually running SQL divided by total CPU time (running plus parsing). Too low means parsing consumes too much time.
4. Shared Pool Statistics
Shared Pool Statistics
                             Begin   End
Memory Usage %:              85.83   85.92
% SQL with executions > 1:   91.54   92.23
% Memory for SQL w/exec > 1: 85.45   86.42
Memory Usage %: shared pool memory in use as a percentage of the total shared pool, for a database that has been running for some time. It should stay moderate, around 85%: too high and objects get aged out of the shared pool, increasing hard parsing of SQL; too low and memory is wasted.
% SQL with executions > 1: the proportion of SQL statements executed more than once. If this is too small, the application should use more bind variables to avoid excessive SQL parsing.
% Memory for SQL w/exec > 1: the percentage of shared pool memory consumed by SQL executed more than once.
5. Wait events
Top 10 Foreground Events by Total Wait Time
Event                     Waits  Total Wait Time (sec)  Avg Wait  % DB time  Wait Class
DB CPU                                            37.8               51.4
gc cr multi block mixed   2,606                   27.5   10.55ms       37.4  Cluster
db file scattered read    2,217                    2.8    1.27ms        3.8  User I/O
gc cr block 2-way         2,674                    1.3  493.28us        1.8  Cluster
gc cr multi block grant   2,600                    1.2  461.59us        1.6  Cluster
db file parallel read       803                      1    1.25ms        1.4  User I/O
Sync ASM rebalance          672                      1    1.47ms        1.3  Other
IPC send completion sync  2,206                     .8  381.13us        1.1  Other
enq: PS - contention      1,173                     .5  463.95us         .7  Other
log file sync                23                     .5   19.77ms         .6  Commit
The events are listed in descending order of wait time. When tuning, we want the largest payoff first, so this is the place to decide what to work on next.
In general, in a healthy database, DB CPU is listed first.
6. Locating SQL resource consumption
SQL ordered by Elapsed Time:
Records the TOP SQL by total elapsed time (note: this is the cumulative execution time of that SQL over the whole snapshot interval, not the time of a single execution).
Elapsed Time (s): the total time the SQL statement spent executing, in seconds; the section is sorted on this column. Again, this is the cumulative time over the snapshot interval, not a single run. Elapsed Time = CPU Time + Wait Time.
CPU Time (s): the total CPU time consumed executing the statement, in seconds; always less than or equal to Elapsed Time.
Executions: the number of times the statement was executed within the interval.
Elap per Exec (s): the average time per execution, in seconds.
% Total DB Time: the statement's Elapsed Time as a percentage of total DB Time.
SQL ID: the statement's ID; click it to jump to the detailed SQL listing below, and use the browser's Back button to return.
SQL Module: shows how the SQL connected to the database. If the module is SQL*Plus or PL/SQL, someone is probably debugging or running ad-hoc code; SQL issued through the foreground application usually leaves this column empty.
SQL Text: the beginning of the SQL text; click the SQL ID for the full statement.
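The column relationships described above are simple arithmetic. A toy sketch with made-up numbers (not from the report) to make them concrete:

```python
# Elapsed = CPU + Wait, and Elap per Exec = Elapsed / Executions,
# as described for the SQL ordered by Elapsed Time section.
def elap_per_exec(elapsed_s, executions):
    return elapsed_s / executions

def wait_time(elapsed_s, cpu_s):
    return elapsed_s - cpu_s

# Hypothetical statement: 120 s total elapsed, 45 s on CPU, 400 executions.
print(elap_per_exec(120.0, 400))  # 0.3 s per execution on average
print(wait_time(120.0, 45.0))     # 75.0 s spent waiting rather than on CPU
```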
SQL ordered by CPU Time:
Records the TOP SQL by total CPU time (again, the cumulative CPU time over the monitoring interval, not the time of a single execution).
% Total: CPU Time as a percentage of Total DB CPU.
% CPU: CPU Time as a percentage of Elapsed Time.
% IO: User I/O Time as a percentage of Elapsed Time.
SQL ordered by Gets:
Records the TOP SQL by total buffer gets (logical I/O); again, this is the cumulative total over the monitoring interval, not the gets of a single execution.
% Total: Buffer Gets as a percentage of Total Buffer Gets.
SQL ordered by Reads:
Records the TOP SQL by total disk physical reads (physical I/O); again, cumulative over the monitoring interval, not the reads of a single execution.
% Total: Physical Reads as a percentage of Total Disk Reads.
SQL ordered by Executions:
Records the TOP SQL sorted by the number of executions within the monitoring interval.
SQL ordered by Parse Calls:
Records the TOP SQL by number of parse calls (mostly soft parses).
SQL ordered by Sharable Memory:
Records the TOP SQL by the amount of library cache memory occupied.
Sharable Mem (b): the library cache memory occupied, in bytes.
Tuning effort should focus mainly on the top few statements in the ordered by Elapsed Time, ordered by CPU Time, ordered by Gets, and ordered by Reads lists.
7. IO Stats > Tablespace IO Stats
(This section was not populated in the sample AWR report, so an example from another report is used.)
Tablespace  Reads  Av Reads/s  Av Rd (ms)  Av Blks/Rd  Writes  Av Writes/s  Buffer Waits  Av Buf Wt (ms)
SYSAUX      9,553           0        4.07        1.65  19,729            0             0            0.00
UNDOTBS     7,879           0        3.21        1.00   8,252            0            20            5.50
SYSTEM      2,496           0        4.74        1.62   4,469            0             0            0.00
USERS         364           0        3.08        1.57       4            0             0            0.00
TEMP           34           0        3.24       12.35      25            0             0            0.00
TEST2           4           0       47.50        1.00       4            0             0            0.00
1) Tablespaces or data files with frequent read/write activity can be identified here. If the temporary tablespace has the highest write count, sorting is too frequent or the sorts are too large.
2) The Av Blks/Rd column shows which tablespaces see the most full table scans. If the value is greater than 1, compare it with the initialization parameter db_file_multiblock_read_count: the closer the two are, the more access to that tablespace is dominated by full table scans.
3) Check Av Rd (ms), the average I/O read time. It should be below 20 ms; if it is much larger, check whether frequently accessed files are placed on the same disk.
Note:
For disks behind a cache, the measured I/O time is usually lower than for uncached disks.
The parameter DB_FILE_MULTIBLOCK_READ_COUNT (settable in init.ora) controls how many blocks are read per I/O during a full table scan; raising it improves full-scan performance by reducing the number of I/Os needed to scan a table. However, a higher value also makes the optimizer more inclined to choose full table scans, so you may need to set OPTIMIZER_INDEX_COST_ADJ to a low value, such as 10, to counteract this and steer the optimizer toward indexes.
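The two per-tablespace checks above (slow reads, full-scan-dominated access) can be sketched as a small filter. The helper and the 0.8 "close to the multiblock read count" factor are illustrative assumptions, not Oracle rules; the thresholds (20 ms, comparison with db_file_multiblock_read_count) come from the text:

```python
# Flag a tablespace from its Av Rd (ms) and Av Blks/Rd columns.
# mbrc stands in for db_file_multiblock_read_count (assumed 16 here).
def io_flags(av_rd_ms, av_blks_per_rd, mbrc=16):
    flags = []
    if av_rd_ms > 20:                 # reads slower than 20 ms -> check disk layout
        flags.append("slow reads")
    if av_blks_per_rd > 1 and av_blks_per_rd >= mbrc * 0.8:
        flags.append("mostly full table scans")   # close to the multiblock read count
    return flags

print(io_flags(47.50, 1.00))   # TEST2 row: ['slow reads']
print(io_flags(3.24, 12.35))   # TEMP row: high blocks/read, but below the assumed cutoff
```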
8. Segment Statistics
1) Segments by Logical Reads or Segments by Physical Reads
Identifies objects with high logical or physical reads. Investigate why, and whether the reads can be reduced by creating new indexes, adopting partitioned tables, and so on.
2) Segments by Row Lock Waits
Identifies the objects with the most severe row-level lock contention; discuss solutions with the developers.
3) Segments by ITL Waits
Identifies the objects with the most severe ITL waits. For such objects, set the INITRANS parameter to the number of processes that modify the object concurrently.
4) Segments by Buffer Busy Waits:
Owner  Tablespace Name  Object Name                Subobject Name  Obj. Type      Obj#   Dataobj#  Buffer Busy Waits  % of Capture
SYS    SYSAUX           SYS_LOB0000007450C00009$$  SYS_LOB_P4025   LOB PARTITION  99696  99696     2                  66.67
SYS    SYSTEM           SEG$                                       TABLE          14     8         1                  33.33
This identifies the objects with the most buffer busy waits, which are caused by concentrated access to the same blocks. The key fix is to optimize SQL statements that scan too many data blocks so that they scan fewer. If the SQL is already tuned, consider raising PCTFREE to reduce the number of rows per block, spreading the object's rows across more blocks, or rebuild the object as a hash-partitioned table to redistribute the data.
9. Instance activity statistics
Compare the amount of sorting done in memory versus on disk. If disk sorts are too high, increase PGA_AGGREGATE_TARGET (or SORT_AREA_SIZE in older versions).
A high disk read rate may indicate full table scans; if large tables are frequently fully scanned, review the most common queries and improve them by adding indexes. A large number of inconsistent reads suggests too many indexes, or non-selective ones. If the number of dirty buffers inspected exceeds the number of free buffers requested by more than 5%, DB_CACHE_SIZE is too small or checkpoints are too infrequent. A high leaf node split count suggests rebuilding growing or fragmented indexes.
CONSISTENT GETS: the number of blocks read from the buffer cache by queries without a SELECT FOR UPDATE clause. This plus the DB BLOCK GETS statistic gives the total number of logical reads.
DB BLOCK GETS: the number of blocks read from the cache by INSERT, UPDATE, DELETE, or SELECT FOR UPDATE statements.
PHYSICAL READS: blocks that could not be satisfied from the buffer cache; they may be read from disk, the operating-system cache, or a disk cache to satisfy SELECT, SELECT FOR UPDATE, INSERT, UPDATE, and DELETE statements.
LOGICAL READS = CONSISTENT GETS + DB BLOCK GETS
Cache hit ratio: HIT RATIO = (LOGICAL READS - PHYSICAL READS) / LOGICAL READS * 100%
               = (CONSISTENT GETS + DB BLOCK GETS - PHYSICAL READS) / (CONSISTENT GETS + DB BLOCK GETS) * 100%
The cache hit rate should be higher than 95%, otherwise you need to increase the DB_CACHE_SIZE.
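The hit ratio formula above is easy to apply directly. A minimal sketch with illustrative counts (not taken from the report):

```python
# Buffer cache hit ratio from the three statistics, per the formula above.
def buffer_hit_ratio(consistent_gets, db_block_gets, physical_reads):
    logical_reads = consistent_gets + db_block_gets
    return (logical_reads - physical_reads) / logical_reads * 100

# Hypothetical counts: 900k consistent gets, 100k db block gets, 40k physical reads.
print(round(buffer_hit_ratio(900_000, 100_000, 40_000), 1))  # 96.0 -> above the 95% guideline
```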
DIRTY BUFFERS INSPECTED: the number of dirty (modified) buffers aged off the LRU list. If this value exceeds 0, consider adding DBWR processes.
FREE BUFFER INSPECTED: the number of buffers skipped because they were dirty, pinned, or busy. A large number means the buffer cache is too small.
PARSE COUNT: the number of times a SQL statement has been parsed.
RECURSIVE CALLS: the number of recursive calls in the database. If recursive calls per process exceed 4, check the data dictionary cache hit ratio and whether tables or indexes have too many extents. Unless a large amount of PL/SQL is used, recursive calls should account for less than 10% of user calls.
REDO SIZE: the amount of redo information written to the log in bytes. This information will help determine the size of the redo log.
SORTS (DISK): the number of sorts that spilled to disk. Disk sorts divided by memory sorts should not exceed 5%; otherwise, resize SORT_AREA_SIZE or PGA_AGGREGATE_TARGET.
Note: SORT_AREA_SIZE is allocated per user session, while PGA_AGGREGATE_TARGET covers all sessions.
SORTS (MEMORY): the number of sorts in memory.
SORTS (ROWS): the number of rows of data that participated in the sort.
TABLE FETCH BY ROWID: the number of rows fetched by ROWID access. A high value usually means the application retrieves data efficiently.
TABLE FETCH CONTINUED ROW: the number of rows fetched that were chained or migrated.
10. Statistics for data dictionary and library cache:
If the PCT miss value in the report is high, you should increase the sharing of cursors in the application or increase the size of the shared pool.