How to analyze the AWR report of ORACLE

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

In this issue, the editor shows how to analyze an Oracle AWR report. The article is rich in content and approaches the topic from a professional point of view; I hope you find it useful.

1. What is AWR?

AWR (Automatic Workload Repository) is Oracle's automatic workload repository. The AWR report is a performance collection and analysis tool provided by Oracle 10g and later; it produces a report on overall system resource usage over a period of time. Through the report you can understand how the whole system is running; the generated report contains multiple sections.

AWR samples the v$active_session_history view (the in-memory ASH buffer, which in theory holds about one hour of data) every hour, saves the samples to disk, and retains them for 7 days before old records are overwritten. The persisted samples are stored in the wrh$_active_session_history view (the ASH data written to the AWR repository, covering more than an hour). Both the sampling interval (1 hour) and the retention period (7 days) can be adjusted to suit the environment, which gives the DBA an effective system monitoring tool.
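As a back-of-the-envelope check of the defaults above (hourly snapshots retained for 7 days), this is not Oracle code, just arithmetic:

```python
# Illustrative only: how many AWR snapshots exist on disk at any time
# with the default settings described above.
snapshot_interval_hours = 1   # default snapshot interval
retention_days = 7            # default retention period

snapshots_retained = retention_days * 24 // snapshot_interval_hours
print(snapshots_retained)  # 168
```

Shortening the interval or lengthening the retention multiplies this count, and with it the space used in the SYSAUX tablespace.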

2. Under what circumstances will AWR be used?

DBAs use AWR to monitor and understand the running state of the database. When a database bottleneck is found during testing but the specific cause cannot be located, the AWR report can be used for analysis and diagnosis.

Database performance problems generally appear in three places: IO, memory, and CPU, and the three are closely related. Assuming there is no physical fault in any of them, as the IO load increases, more memory is needed to cache the data, and the CPU spends more time filtering it. Conversely, if the CPU spends too much time, it may be parsing SQL statements or filtering too much data, which is not necessarily related to IO or memory.

CPU: parses the SQL statement, tries multiple execution plans, and finally generates a plan the database considers good, though not necessarily optimal, because when too many tables are joined the database will not exhaustively try all possible plans, as that would take too long. How does Oracle know which rows you want and which you do not? That filtering is done by the CPU.

Memory: SQL statements and execution plans are kept in memory for a period of time, and fetched data is also kept in memory as long as possible according to the LRU algorithm. While a SQL statement executes, joins and sorts between tables also consume memory.

IO: if the data you need is not in memory, it must be fetched from disk, which involves physical IO. When joins between tables produce too much data, or sorts and similar operations run out of memory, temporary tablespace is needed, which also consumes physical IO.

Note that the PGA allocated by Oracle generally accounts for only about 20% of memory. In dedicated server mode, SQL execution and table data operations all take place in the PGA, that is, in only about the 20% of memory Oracle can allocate to it. If multiple users run multi-table joins over large tables, with poorly chosen join conditions on top, memory becomes a bottleneck, so optimizing SQL to reduce logical and physical reads is very important.

3. How to generate an AWR report?

Step 1: log in to the Oracle database server and note the current directory, or change to the directory where the AWR report should be saved.

Step 2: connect to the Oracle database instance with sqlplus username/password@service_name.

Step 3: run @?/rdbms/admin/awrrpt.

The following types of AWR report can be generated; the most commonly used is the AWR report for the current instance:

@?/rdbms/admin/awrrpt -- AWR report for the current instance

@?/rdbms/admin/awrrpti -- AWR report for a selected instance number (RAC)

@?/rdbms/admin/awrddrpt -- AWR compare-periods report

@?/rdbms/admin/awrgrpt -- RAC global AWR report

Enter the format in which the AWR report should be generated, for example html.

Enter the number of days to look back; the available snapshots (snap ids) are then listed, after which you enter the begin and end snapshot values.

Enter a name for the AWR report (or press Enter for the default), and the report is then generated automatically.

When it finishes, go to the specified directory and open the AWR report file, here test_awr.lst.

4. Analyze the AWR report

The AWR report is very rich; here we pick a small part of it to explain. Before analyzing the AWR report, we should first understand Oracle's hard parsing and soft parsing, starting with the process by which Oracle handles SQL. When you submit a SQL statement to Oracle, it goes through several steps before executing and returning results:

1. Syntax checking (syntax check)

Check whether the SQL is syntactically valid.

2. Semantic checking (semantic check)

For example, checking whether the objects accessed in the SQL statement exist and whether the user has the corresponding permissions.

3. Parse the SQL statement (parse)

Oracle's internal algorithm parses the SQL to generate a parse tree and an execution plan.

4. Execute SQL and return the result (execute and return)

Soft and hard parsing take place in the third step.

Oracle uses an internal hash algorithm to compute the hash value of the SQL, then looks in the library cache to see whether that hash value exists.

If it exists, the SQL text is compared with the cached one.

If they are identical, the existing parse tree and execution plan are reused and the optimizer work is skipped. This is soft parsing.

If either of the two checks above fails, the optimizer creates a parse tree and generates an execution plan. This process is called hard parsing.

Creating parse trees and generating execution plans are expensive actions for SQL execution, so hard parsing should be avoided and soft parsing should be used as much as possible.
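The lookup described above can be sketched as follows. This is a hypothetical illustration, not Oracle internals: a dictionary stands in for the library cache, keyed by a hash of the SQL text, and the function reports whether a soft or hard parse occurred.

```python
# Hypothetical sketch of a library-cache lookup (not Oracle code).
import hashlib

library_cache = {}  # hash of SQL text -> (sql_text, cached plan)

def parse(sql_text):
    h = hashlib.sha256(sql_text.encode()).hexdigest()
    cached = library_cache.get(h)
    if cached is not None and cached[0] == sql_text:
        return "soft parse"                # reuse parse tree and plan
    plan = "plan for: " + sql_text         # expensive optimizer work here
    library_cache[h] = (sql_text, plan)
    return "hard parse"

print(parse("SELECT * FROM emp WHERE id = :1"))  # hard parse
print(parse("SELECT * FROM emp WHERE id = :1"))  # soft parse
```

The bind variable `:1` is what makes the second call a cache hit; with literal values each statement would hash differently and every execution would hard parse.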

Open the AWR report and look first at the header section.

The SGA section displays the size of each area in the SGA, which can be compared with the initialization parameter values.

The shared pool mainly includes the library cache and the dictionary cache. The library cache stores recently parsed (or compiled) SQL, PL/SQL, Java classes, and so on. The dictionary cache stores recently referenced data dictionary entries. A cache miss in the library cache or dictionary cache is much more expensive than one in the buffer cache, so the shared pool should be sized so that most recently used data stays cached.

This section covers the memory hit ratios of Oracle's key metrics and the efficiency of other database instance operations; Buffer Hit Ratio is also known as Cache Hit Ratio, and Library Hit Ratio as Library Cache Hit Ratio. As with the Load Profile section, there is no single "correct" value here; the numbers must be judged against the characteristics of the application. In a DSS environment that uses direct reads to execute large parallel queries, a Buffer Hit Ratio of 20% may be acceptable, while that would be totally unacceptable for an OLTP system. In Oracle's experience, the Buffer Hit Ratio for OLTP systems should ideally be above 90%.

Buffer Nowait is the percentage of buffer accesses in the buffer cache that did not have to wait. It should generally be greater than 99%; otherwise there may be contention, which can be further confirmed in the wait events later in the report.

Buffer Hit is the rate at which a process finds blocks in memory; watching this value for significant changes matters more than the value itself. For a typical OLTP system, if this value is below 80%, more memory should be allocated to the database. The hit rate for data blocks in the buffer cache should usually be above 95%; below 95%, important parameters need tuning, and below 90% you may need to increase db_cache_size. A high hit rate does not necessarily mean the system performs well: for example, frequent access to a large number of non-selective indexes can create the illusion of a high hit rate (many db file sequential reads), while a genuinely low hit rate usually does affect performance and needs attention. A sudden change in the hit ratio is often bad news. If it suddenly increases, check the top buffer-gets SQL for statements and indexes causing large numbers of logical reads; if it suddenly decreases, check the top physical-reads SQL for statements producing large numbers of physical reads, typically ones that do not use an index or whose index has been dropped.
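For reference, the buffer cache hit ratio is conventionally computed as 1 minus physical reads over logical reads (db block gets plus consistent gets). A minimal sketch with made-up numbers; the parameter names mirror the v$sysstat statistics:

```python
# Buffer cache hit ratio: 1 - physical reads / (db block gets + consistent gets).
# The input values here are invented for illustration.
def buffer_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    logical_reads = db_block_gets + consistent_gets
    return 100 * (1 - physical_reads / logical_reads)

print(buffer_hit_ratio(physical_reads=500,
                       db_block_gets=4_000,
                       consistent_gets=6_000))  # 95.0
```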

Redo NoWait is the percentage of requests for space in the log buffer that did not have to wait. If it is too low (see the 90% threshold), consider increasing LOG_BUFFER. When the redo buffer reaches 1 MB it must be written to the redo log file, so once the redo buffer is set above 1 MB, waiting for buffer space to be allocated is unlikely. A redo buffer of around 2 MB is a common setting today, and it need not be large relative to total memory.

Library Hit is the ratio at which Oracle retrieves an already-parsed SQL or PL/SQL statement from the library cache. When an application issues a SQL statement or stored procedure call, Oracle checks the library cache for a parsed version; if one exists, Oracle executes the statement immediately, otherwise it parses the statement and allocates a shared SQL area for it in the library cache. A low library hit ratio leads to excessive parsing, higher CPU consumption, and degraded performance. If the library hit ratio is below 90%, you may need to enlarge the shared pool. Generally, the statement hit rate in the shared area should stay above 95%; otherwise consider enlarging the shared pool, using bind variables, or adjusting parameters such as cursor_sharing.

Latch Hit: a latch is a lock protecting memory structures; think of it as a SERVER process obtaining permission to access a memory data structure. Make sure Latch Hit is above 99%; otherwise there is shared pool latch contention, possibly caused by unshared SQL or a library cache that is too small, which can be addressed with bind variables or by enlarging the shared pool. Below 99% there will be serious performance problems; when this value indicates trouble, the wait events and latch analysis later in the report help locate and solve it.

Parse CPU to Parse Elapsd: the ratio of CPU time spent parsing to total elapsed parse time (CPU time plus time spent waiting for resources during parsing); the higher the better. The formula is: Parse CPU to Parse Elapsd % = 100 * (parse time cpu / parse time elapsed). If the ratio is 100%, parsing spent no time waiting.
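The formula above as a function, with made-up example values:

```python
# Parse CPU to Parse Elapsed %: parse CPU time as a percentage of total
# elapsed parse time. Inputs are invented for illustration.
def parse_cpu_to_parse_elapsed(parse_time_cpu, parse_time_elapsed):
    return 100 * parse_time_cpu / parse_time_elapsed

print(parse_cpu_to_parse_elapsed(90, 100))   # 90.0 -> 10% of parse time was waiting
print(parse_cpu_to_parse_elapsed(100, 100))  # 100.0 -> no waiting during parse
```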

Non-Parse CPU: the fraction of CPU time spent on actual SQL execution rather than on parsing; too low a value means parsing takes too much time. The formula is: % Non-Parse CPU = round(100 * (1 - PARSE_CPU / TOT_CPU), 2). If this value is small, parsing is consuming too much CPU time. When TOT_CPU is high relative to PARSE_CPU, the ratio approaches 100%, which is good: most of the database's work is executing queries rather than parsing them.
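The same formula as a function, again with invented numbers:

```python
# % Non-Parse CPU = round(100 * (1 - PARSE_CPU / TOT_CPU), 2).
# Inputs are made up for illustration.
def non_parse_cpu(parse_cpu, tot_cpu):
    return round(100 * (1 - parse_cpu / tot_cpu), 2)

print(non_parse_cpu(parse_cpu=5, tot_cpu=100))   # 95.0 -> mostly executing
print(non_parse_cpu(parse_cpu=60, tot_cpu=100))  # 40.0 -> parsing dominates
```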

Execute to Parse: the ratio of statement executions to parses; it is high when the SQL reuse rate is high. The higher the value, the more times a statement is re-executed after a single parse. The formula is: Execute to Parse = 100 * (1 - Parses / Executions). In this example, roughly one parse occurs for every five executions. Note that if the system has Parses > Executions, the ratio becomes negative.
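The formula as a function, including the negative case when parses exceed executions (the counts are invented):

```python
# Execute to Parse % = 100 * (1 - Parses / Executions).
# Goes negative when parses exceed executions. Inputs are made up.
def execute_to_parse(parses, executions):
    return 100 * (1 - parses / executions)

print(execute_to_parse(parses=200, executions=1_000))    # 80.0 -> good reuse
print(execute_to_parse(parses=1_250, executions=1_000))  # -25.0 -> parses > executions
```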
