1 Introduction to AWR
AWR (Automatic Workload Repository) was introduced in Oracle 10g as a performance collection and analysis tool that is installed together with the database. AWR collects performance data covering many aspects of database operation while a workload is running and derives metrics from those statistics. By analyzing the report we can understand how the whole system behaves, which is why AWR is a commonly used performance-tuning tool for Oracle databases.
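By default (in 10g) AWR takes a snapshot every hour and keeps seven days of history; both settings can be checked and changed through the DBMS_WORKLOAD_REPOSITORY package. A minimal sketch, where the interval and retention values are only examples:
-- current snapshot interval and retention period
select snap_interval, retention from dba_hist_wr_control;
-- example values only: snapshot every 30 minutes, keep 15 days of history
exec dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 15*24*60)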
2 Generating the AWR report
AWR generates a report by comparing the statistics collected at two snapshots. The report format can be TXT or HTML; the HTML format is usually chosen because it is easier to read.
The AWR report is generated as follows (a scripted, non-interactive alternative is sketched after the steps):
1. Use sqlplus or PL/SQL Developer to connect to the database and execute the snapshot generation command. Note that the executing user must have the DBA role:
Exec dbms_workload_repository.create_snapshot
2. Run the AWR report generation script with the command below. Note that the snapshot command above is usually executed twice, once before the test scenario starts and once after it ends, so that the two snapshots bracket the run:
@ $ORACLE_HOME/rdbms/admin/awrrpt.sql
If you use PL/SQL Developer, open a Command Window, specify the absolute path to awrrpt.sql, and run the script.
3. When the script prompts for the report type, enter html to generate the report in HTML format.
4. Enter the number of days of snapshots to list. Usually enter 1, that is, the snapshots taken within the last day.
5. Specify the IDs of the begin snapshot and the end snapshot to be compared.
6. Enter the name of the AWR report to be generated, ending with .html; the default name is derived from the snap_ids entered earlier.
7. The AWR report is produced once the script finishes. By default it is written to the current working directory, for example the oracle user's home directory; with PL/SQL Developer it is written to the tool's own directory by default.
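If the interactive prompts are inconvenient, the same report can be produced directly from the DBMS_WORKLOAD_REPOSITORY package. A minimal sketch, assuming the begin/end snapshot IDs are first looked up in DBA_HIST_SNAPSHOT (the values 1234567890, 1, 100 and 101 below are placeholders):
-- pick the begin and end snap_id for the period of interest
select snap_id, begin_interval_time, end_interval_time from dba_hist_snapshot order by snap_id;
-- find the database id and instance number needed by the report function
select dbid from v$database;
select instance_number from v$instance;
-- generate the HTML report; replace the four arguments with your dbid, instance number, begin and end snap_id
select output from table(dbms_workload_repository.awr_report_html(1234567890, 1, 100, 101));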
3 Analysis of the AWR report
3.1 AWR report components
The AWR report provides very detailed data. Through the Main Report section you can quickly reach the measurement data for SQL statistics, instance activity, wait events, segment statistics, and so on.
3.2 AWR report analysis
1. Report header analysis
From this section we can tell how busy or idle the database is: if DB Time is much smaller than Elapsed time multiplied by the number of CPUs, the database is relatively idle. In the example above, the database accumulated 31.11 minutes of DB Time (the sum of time spent across all sessions) during an elapsed period of 3.15 minutes. The server has 48 CPUs, so on average each CPU was busy for 31.11 / 48 ≈ 0.64 minutes, which is about 20% CPU utilization (0.64 / 3.15), indicating that the system is not under much pressure.
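As a quick cross-check outside the report, the CPU count that this calculation relies on can be read from v$osstat; the comment simply restates the arithmetic used above:
-- number of CPUs visible to the instance
select value as num_cpus from v$osstat where stat_name = 'NUM_CPUS';
-- utilization = DB Time / (Elapsed time * number of CPUs) = 31.11 / (3.15 * 48), roughly 20%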
2. Report Summary analysis
The Cache Sizes section of Report Summary shows the size of each area of the SGA. The Buffer Cache caches blocks read from physical disk to speed up access to data; the Shared Pool mainly contains the library cache and the dictionary cache. The library cache stores recently parsed (or compiled) SQL, PL/SQL and Java classes, while the dictionary cache stores recently referenced data dictionary entries. A cache miss in the library cache or dictionary cache is much more expensive than one in the buffer cache, so the Shared Pool should be sized so that most recently used data stays cached.
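The same component sizes can also be checked live on the instance, for example with v$sgainfo (available from 10g); a minimal sketch:
-- current size of each SGA component, including buffer cache and shared pool
select name, round(bytes/1024/1024) as size_mb, resizeable from v$sgainfo order by bytes desc;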
Load Profile shows the overall load on the database. As a rule of thumb, more than 2 Logons per second, more than 100 Hard Parses per second, or more than 300 total Parses per second suggests there may be contention problems.
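To watch comparable per-second rates outside an AWR report, the current values can be sampled from v$sysmetric; the metric names below are the usual ones but may differ slightly between versions, so treat this as a sketch:
-- recent per-second logon and parse rates (keep only the 60-second interval samples)
select metric_name, round(value, 2) as per_sec
  from v$sysmetric
 where metric_name in ('Logons Per Sec', 'Total Parse Count Per Sec', 'Hard Parse Count Per Sec')
   and intsize_csec > 5999;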
The Instance Efficiency Percentages section shows the memory hit rates and other key efficiency indicators of the Oracle instance. The ideal value for each indicator is 100%, but whether a lower value really means a bottleneck has to be judged against the characteristics of the application (a sketch of how a few of these ratios can be computed directly from v$sysstat follows the list):
Buffer Nowait: the percentage of buffer accesses satisfied without waiting. It should generally be above 99%; otherwise there may be contention.
Buffer Hit: the percentage of block requests satisfied from memory. If this value is below 80%, more memory should be allocated to the database; the data block hit rate in the buffer cache should usually be above 95%.
Redo NoWait: the percentage of redo entries written to the log buffer without waiting; it should generally be above 90%.
Library Hit: the ratio of SQL statements found already parsed in the library cache; it should be kept above 95%. Otherwise, consider enlarging the shared pool, using bind variables, or adjusting parameters such as cursor_sharing.
Latch Hit: should usually be above 99%; otherwise it indicates shared pool latch contention, possibly caused by unshared SQL or a library cache that is too small. It can be addressed by using bind variables or enlarging the shared pool.
Parse CPU to Parse Elapsd: parse CPU time / (parse CPU time + time spent waiting for resources during parsing); the higher the better.
Non-Parse CPU: SQL execution time / (SQL execution time + SQL parse time). A value that is too low means parsing consumes too much time.
Execute to Parse: the ratio of statement executions to parses; it is high when SQL reuse is high. The higher the value, the more often statements are re-executed after a single parse.
In-memory Sort: the proportion of sorts performed in memory. A low value indicates that a large number of sorts spill to the temporary tablespace; consider increasing the PGA. If it is below 95%, it can usually be improved by raising the initialization parameter PGA_AGGREGATE_TARGET or SORT_AREA_SIZE.
Soft Parse: the percentage of soft parses, softs / (softs + hards), which is roughly equivalent to the SQL hit rate in the shared pool. If it is too low, the application should be adjusted to use bind variables.
Memory Usage %: for a database that has been running for some time, shared pool memory usage should be stable between 75% and 90%. If it is too low, shared pool memory is being wasted; if it is above 90%, there is contention in the shared pool and memory is insufficient.
SQL with executions > 1: the proportion of SQL statements executed more than once. If this value is too small (the usual reference value is 70%), the application needs to use more bind variables to avoid excessive SQL parsing.
Memory for SQL w/exec > 1: the proportion of memory consumed by SQL statements executed more than once.
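The sketch referenced above shows how a few of these ratios can be approximated directly from v$sysstat since instance startup (the AWR report computes the same ratios over the snapshot interval instead):
-- buffer cache hit, soft parse and in-memory sort ratios since instance startup
select round(100 * (1 - phy.value / nullif(dbg.value + con.value, 0)), 2) as buffer_hit_pct,
       round(100 * (1 - hard.value / nullif(tot.value, 0)), 2)            as soft_parse_pct,
       round(100 * mem.value / nullif(mem.value + dsk.value, 0), 2)       as in_memory_sort_pct
  from v$sysstat phy, v$sysstat dbg, v$sysstat con,
       v$sysstat hard, v$sysstat tot, v$sysstat mem, v$sysstat dsk
 where phy.name  = 'physical reads'
   and dbg.name  = 'db block gets'
   and con.name  = 'consistent gets'
   and hard.name = 'parse count (hard)'
   and tot.name  = 'parse count (total)'
   and mem.name  = 'sorts (memory)'
   and dsk.name  = 'sorts (disk)';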
3. Top 5 Timed Events analysis
This section shows the five most significant waits in the system, in descending order of their share of total wait time. When diagnosing or tuning a performance problem you usually start here and let the wait events determine the direction of troubleshooting and optimization. In a database with no performance problems, CPU time is always listed first.
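Outside the report window, a similar ranking since instance startup can be obtained from v$system_event; a rough sketch excluding idle waits:
-- top waits since instance startup, excluding idle waits
select *
  from (select event, wait_class, total_waits,
               round(time_waited_micro / 1e6, 1) as time_waited_s
          from v$system_event
         where wait_class <> 'Idle'
         order by time_waited_micro desc)
 where rownum <= 5;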
4. SQL Statistics analysis
This section lists the SQL statements with the heaviest resource consumption, grouped by resource category, and shows the proportion of each resource they account for during the reporting period, which points the way for performance tuning. For example, when CPU is the bottleneck of system performance, optimizing the SQL statements that consume the most CPU Time gives the greatest benefit, while in a system where waits are most serious, optimizing the SQL statements with the most Reads often produces obvious results.
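The same kind of ranking can be pulled directly from the shared pool, for example by CPU time (order by disk_reads instead when I/O is the bottleneck); a minimal sketch:
-- top SQL by CPU time currently in the shared pool
select *
  from (select sql_id, executions,
               round(cpu_time / 1e6, 1)     as cpu_s,
               round(elapsed_time / 1e6, 1) as elapsed_s,
               disk_reads, buffer_gets,
               substr(sql_text, 1, 60)      as sql_text
          from v$sqlarea
         order by cpu_time desc)
 where rownum <= 10;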
5. IO analysis
This section lists the I/O statistics for each tablespace. Among them, Av Rd(ms) should not exceed 30; otherwise the tablespace is considered to be suffering from I/O contention.
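Comparable figures since instance startup can be computed from v$filestat (read time is recorded in centiseconds, hence the *10 to get milliseconds); a rough sketch:
-- average read time per data file, in milliseconds
select ts.name as tablespace, df.name as datafile,
       fs.phyrds,
       round(fs.readtim * 10 / nullif(fs.phyrds, 0), 1) as av_rd_ms
  from v$filestat fs, v$datafile df, v$tablespace ts
 where fs.file# = df.file#
   and df.ts#   = ts.ts#
 order by av_rd_ms desc;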