Oracle tuning to determine the problematic SQL

2025-02-24 Update From: SLTechnology News&Howtos


SQL statement processing stages

A thorough understanding of SQL processing helps in interpreting SQL statistics. There are four important stages in SQL statement processing: parse, bind, execute, and fetch.

The reverse arrows in the diagram represent reprocessing paths (for example, fetch, (re)bind, execute, fetch).

The fetch stage applies only to queries and to DML statements with a RETURNING clause.

Note: for more information about SQL statement processing, see Oracle Database 11g Application Developer's Guide: Fundamentals and Oracle Database 11g: Concepts.

Parse stage

Parsing is the first stage of SQL statement processing. When the application executes a SQL statement, it makes a parse call to Oracle DB. During the parse call, Oracle DB:

Checks the syntax and semantic validity of the statement

Determines whether the process executing the statement has the privileges to run it

Searches the shared pool for a shareable match for the statement

Allocates a private SQL area for the statement

There are two types of parse operations:

Soft parse: a SQL statement is submitted and a match is found in the shared pool. The match can be the result of a previous execution by another user. The SQL statement is shared, which is good for performance. However, a soft parse still requires syntax and security checks, which consume system resources.

Hard parse: a SQL statement is submitted for the first time and no shareable match is found in the shared pool. Hard parses are the most resource-intensive and least scalable, because they perform all the operations involved in a parse.
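The difference between a hard parse and repeated soft parses can be sketched in SQL*Plus against the sample HR schema. Each distinct literal below is a different statement text and can trigger its own hard parse, whereas the bind-variable form produces one shareable cursor that is soft parsed on repeat executions:

SQL> -- A new cursor for each literal value:
SQL> SELECT last_name FROM hr.employees WHERE employee_id = 100;
SQL> SELECT last_name FROM hr.employees WHERE employee_id = 101;

SQL> -- One shareable cursor with a bind variable:
SQL> VARIABLE empid NUMBER
SQL> EXECUTE :empid := 100
SQL> SELECT last_name FROM hr.employees WHERE employee_id = :empid;
SQL> EXECUTE :empid := 101
SQL> SELECT last_name FROM hr.employees WHERE employee_id = :empid;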

If bind variables are used correctly, more soft parses become possible, reducing hard parses and keeping parsed statements in the library cache longer.

SQL storage

The Oracle server uses the library cache (the shared SQL area) to store SQL statements and PL/SQL blocks. To store a statement in the cache, the Oracle server:

Reduces the statement text to a numeric value

Applies a hash function to that number

Places the cursor for the statement on the corresponding hash chain

Hash values are not unique, and multiple statements can hash to the same value. The cursor contexts of these statements are all stored in the same hash chain. Search the hash chain to get the correct statement. Whenever a statement is submitted, the cache is searched. If no cursor handle is found, the cursor is built from the statement. When the statement is subsequently submitted, the cursor handle is found and the cursor is reused.

If the statement has already been parsed and executed, and the cursor handle is still in the session cursor cache, the cursor can be called and executed without searching the shared pool for the statement. Whenever a parse request is made, the parse count statistic is incremented, but the overhead of finding the statement in the session cache is significantly lower.

Note: ideally, a hard parse is performed when the SQL statement is first submitted, followed by a soft parse for every other session that uses the statement. This depends on whether there is enough memory in the session cache and shared pool to retain the cursor information.

Cursor use and parsing

Every developer wants his or her code to run as fast as possible. For code that uses SQL statements, this means that cursor access must be fast. The fastest way to access a cursor is through the open-cursor cache in the server session's memory. Each open cursor in the open-cursor cache has a pointer to the SGA memory location of the cursor handle. To execute the cursor, the pointer is used instead of parsing. An open cursor is a parsed cursor, and the cursor handle is in the library cache.

If the SESSION_CACHED_CURSORS parameter is set to a nonzero value, then after a cursor is closed, the cursor information is moved to the session's closed-cursor cache. (Prior to version 10.2.0.2 the default value was 0; it has since been changed to 50.)
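For illustration, the current value can be checked and raised for the session (the value 100 is only an example):

SQL> SHOW PARAMETER session_cached_cursors
SQL> ALTER SESSION SET session_cached_cursors = 100;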

When a cursor is opened, the session hashes the SQL statement and performs a hash lookup in the closed-cursor cache. If the cursor is found, it is moved to the open-cursor cache and executed through the pointer to the cursor handle in the shared pool, without any parsing.

If the cursor is not found in the session cache, the hash value is used to search the shared pool hash chains for the cursor handle. This search is registered as a parse. If the cursor handle is found and the cursor has not yet been aged out, the cursor is executed. This is a soft parse.

Cursor use and parsing (continued)

If the cursor has aged out of the shared pool, or never existed in the shared pool, the cursor is built. This is a hard parse. Building a cursor requires finding the metadata for each referenced object, such as tables, indexes, extents, and sequences. If the metadata for these objects has not been cached in the shared pool, recursive SQL is issued to fetch the information from the data dictionary.

When a large number of cursors are submitted to an undersized shared pool, a cursor may age out of the cache very quickly, even between two fetches of the same cursor. This situation leads to a large number of hard parses.

Note: for more information about tuning the shared pool for cursor processing, see the "Optimizing Shared Pools" course.

Bind phase

During the bind phase:

Oracle DB checks the statement for references to bind variables.

Oracle DB assigns or reassigns a value to each variable.

When bind variables are used in a statement, the optimizer assumes that cursor sharing is intended and that different invocations should use the same execution plan. This improves performance by reducing the number of hard parses.

If a histogram exists, the optimizer assumes that the data distribution does not match its default assumptions. In that case, using different execution plans for different bind variable values can be a significant advantage when the cursor is invoked many times. Adaptive cursor sharing then creates a new plan; if a new plan is never tried, performance may degrade for some bind variable values.
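In Oracle Database 11g, adaptive cursor sharing can be observed through V$SQL (a sketch; the filter text is illustrative):

SQL> SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware
  2  FROM v$sql
  3  WHERE sql_text LIKE '%employees%';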

Cursor sharing is affected by database initialization parameters and by the adaptive cursor sharing feature of Oracle Database 11g. For more information, see the "Optimizing Shared Pools" course.

Execute phase

An execution plan is the series of steps that the server process uses to access the data buffers and build the required rows. Multiple users can share the same execution plan. Oracle DB performs physical or logical reads and writes for DML statements, sorting data as needed.

Note: physical reads are disk reads; logical reads involve blocks that already exist in the database buffer cache in memory. Physical reads require disk I/O, which takes more resources and time.

Fetch stage

In the fetch stage, Oracle DB retrieves rows for SELECT statements. Each fetch operation usually uses array fetching to retrieve multiple rows. Array fetching improves performance by reducing the number of network round trips. Each Oracle tool has its own way of setting the array size; for example, in SQL*Plus you can change the fetch size with the ARRAYSIZE setting:

SQL> show arraysize
arraysize 15
SQL> set arraysize 50

SQL*Plus defaults to fetching 15 rows at a time. Very large arrays provide little or no additional benefit.

DML processing steps

Data manipulation language (DML) statements require only two processing phases:

The parse phase is the same as the parse phase used for queries.

The execute phase requires additional processing before data changes are made.

DML execution phase

To execute a DML statement:

1. If the data blocks and undo blocks are not already in the buffer cache, the server process reads them from the data files into the buffer cache. The server process locks the rows that are to be modified.

2. The server process records the changes to be made to the data buffers and the undo buffers. These changes are written to the redo log buffer before the data and undo buffers in memory are modified. This is called write-ahead logging.

3. The undo buffers contain the values of the data before the modification. Because the undo buffers store the before image of the data, the DML statement can be rolled back if necessary. The data buffers record the new values of the data.

4. The user receives feedback on the DML operation (such as the number of rows affected by the operation).

DML processing steps (Cont.)

DML execution phase (continued)

All blocks in memory (in the buffer cache) that are changed by the DML, including undo blocks, are marked as dirty buffers, that is, buffers that differ from the corresponding blocks on disk. The database writer process (DBWR) does not write these buffers to disk immediately. When a transaction is committed, the log writer process (LGWR) immediately records the redo change records for these blocks in the redo log file; the dirty blocks are eventually written to disk by DBWR according to the incremental checkpoint algorithm. Note: before DBWR writes a dirty block to disk, the redo change record for that block must already be written to the redo log file.

The UPDATE, DELETE, and INSERT commands all use similar steps. The before image for a DELETE contains the column values of the deleted row, whereas the before image for an INSERT contains only the row location information.

Before a transaction is committed, the changes made to blocks are recorded only in memory structures and are not immediately written to disk. The instance follows a lazy-write algorithm to improve overall performance. After the transaction is committed, the changes become permanent. The "committed" message is not returned until the LGWR process has written the redo information to disk, which ensures complete recoverability. DBWR writes the blocks to disk according to the checkpoint algorithm. If the SGA is lost because of a machine failure before the transaction commits, the changes are lost with it. The rule is that a transaction is not permanent until it is committed.

For more information about working with the database buffer cache, see the course "Optimizing the Buffer Cache."

Fast commit

Oracle uses a fast commit mechanism to ensure that committed changes can be recovered in the event of an instance failure.

System change number

Whenever a transaction is committed, Oracle DB assigns it a unique system change number (SCN). Oracle DB uses the SCN as an internal timestamp to keep data synchronized, which provides read consistency when data is retrieved from the data files. Using SCNs, consistency checks can be performed without depending on the date and time of the operating system.
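For example, the current SCN can be queried directly (the value returned depends on the database):

SQL> SELECT current_scn FROM v$database;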

When a COMMIT is issued, the following steps are performed:

The server process saves the commit record in the redo log buffer along with the SCN.

The background log writer process (LGWR) performs a contiguous write of all redo log buffer entries up to and including the commit record to the redo log file. This ensures that the changes are never lost, even if there is an instance failure.

The server process sends a message to the user process indicating that the transaction is complete.

Finally, DBWR writes the actual block changes back to disk based on its own internal timing mechanism and the incremental checkpoint settings.

Functions of the Oracle optimizer

The optimizer is part of Oracle DB and is used to create an execution plan for SQL statements. Determining the execution plan is an important step in processing any SQL statement and can have a significant impact on execution time.

An execution plan is a series of actions performed in sequence when the statement is executed. (The course "Influencing the Optimizer" describes the details of each step.) The optimizer considers many factors related to the referenced objects and to the conditions specified in the query. The information the optimizer needs includes:

Statistics gathered for the system (I/O, CPU, and so on) and for schema objects (number of rows, indexes, and so on)

Information in the data dictionary

WHERE clause qualifier

Hints provided by developers

When using diagnostic tools such as Enterprise Manager, EXPLAIN PLAN, and SQL*Plus AUTOTRACE, you can see the execution plan selected by the optimizer.

Note: depending on its mode of operation, the Oracle Database 11g optimizer goes by two names: the query (runtime) optimizer and the automatic tuning optimizer.

Functions of the Oracle optimizer (continued)

Optimizer actions: for any SQL statement processed by the Oracle server, the optimizer does the following:

Evaluate expressions and conditions: the optimizer first evaluates expressions and conditions that contain constants as comprehensively as possible.

Statement transformation: for complex statements, such as those involving correlated subqueries or views, the optimizer may transform the original statement into an equivalent join statement.

Choose optimizer goals: the optimizer determines the goal of optimization.

Choose access paths: for each table accessed by the statement, the optimizer chooses one or more of the available access paths to obtain the table data. Access paths that are not available (for example, a bitmap index that does not exist) are skipped.

Choose join order: for join statements that join more than two tables, the optimizer first chooses which two tables to join, then which table to join to that result, and so on.

Choose join method: for any join statement, the optimizer chooses the operation used to perform the join.

Note: the optimizer may not make the same decision for different versions of Oracle DB. In recent releases, the optimizer may make different decisions because more information is available.

The optimizer has two modes of operation. The first is the runtime optimizer, the normal mode, which creates an execution plan at run time. In this mode the optimizer's time is limited, and it can consider only a limited number of alternatives. The second mode is the automatic tuning optimizer (ATO). In this mode the optimizer has more time to consider more options and gather statistics. The ATO produces a better plan and creates a SQL profile that helps the optimizer choose a better plan whenever the SQL statement is submitted in the future.

Identifying bad SQL

One of the advantages of SQL is that you can write different SQL statements to produce the same results. Any SQL statement that produces the correct result is the correct SQL statement. However, different SQL may require different amounts of resources. Bad SQL may be correct, but it is inefficient and requires more resources.

Symptoms of bad SQL can be any of the characteristics listed on the slide. The top SQL report shown on the next slide provides a way to find the SQL statements that consume the most system resources.

The cause of bad SQL may be improper design, poor coding, or an inefficient execution plan chosen by the optimizer. The DBA has little control over the design or the code, but can influence the optimizer to generate a better execution plan.

Theoretically, there is an optimal execution plan for any given result set on a given relational data set. The optimizer tries to find that optimal plan within the constraints of the time and resources it is given. Finding the truly optimal plan may take a long time: for example, you would not want to wait five minutes for the optimizer to generate a plan that reduces run time by five seconds. The order in which the optimizer evaluates candidate execution plans is affected by a number of factors, including the way the SQL is written.

Top SQL reports

In terms of tuning, the biggest return on investment lies in SQL tuning. Top SQL reports are very effective in identifying which statements consume the most system resources. Experience shows that typically 20% of SQL statements consume 80% of the resources, and 10% of statements consume 50% of the resources. This means that the performance of the entire system can be improved by identifying and tuning the top SQL statements.

Using top SQL reports simplifies the task of finding the most resource-intensive SQL statements. Both the AWR and Statspack reports include a set of top SQL lists. Each list shows the top SQL statements sorted by resource usage in one of several categories: elapsed time, CPU time, gets, reads, executions, parse calls, sharable memory, and version count. The lists do not include the complete SQL text, but a report of the full SQL text, keyed by SQL_ID, follows them.

Not all SQL statements are included in these reports. The number of statements included is controlled by the topnsql setting in AWR and by the level and threshold settings in Statspack. For more information about the Statspack parameters, see the appendix "Using Statspack."
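As a quick alternative to the full reports, the most expensive statements currently in the shared pool can be listed directly (a sketch ordering by buffer gets; another category such as disk_reads could be used instead):

SQL> SELECT *
  2  FROM (SELECT sql_id, executions, buffer_gets, disk_reads
  3        FROM v$sqlarea
  4        ORDER BY buffer_gets DESC)
  5  WHERE ROWNUM <= 10;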

What is an execution plan?

When a statement is executed, the server performs the scheduled steps created by the optimizer. Each step either physically retrieves the data row from the database or somehow prepares the data row for the user who issued the statement. The combination of steps used to run a statement is called an execution plan.

The execution plan includes the access method for each table accessed by the statement and the ordering of the tables (the join order). The optimizer also uses different methods to combine the rows of multiple tables (the join method). The steps of the execution plan are not necessarily performed in the order in which they are numbered.

An execution plan shows the methods chosen by the optimizer. Sometimes the execution plan makes the reason for a statement's inefficiency obvious; for example, a query on an indexed column may use a full table scan (FTS), which involves many I/O operations. In that case the question becomes why the optimizer chose the FTS. This type of problem is covered in detail in the course "Influencing the Optimizer."

Viewing execution plans

Through the EXPLAIN PLAN command, you can view the execution plan that the optimizer uses to execute the SQL statement without having to execute the SQL statement.

The SQL trace utility is used to measure timing statistics for SQL statements.

The Automatic Workload Repository (AWR) is a built-in data repository in Oracle Database 11g. Oracle DB periodically takes snapshots of all of its important statistics and workload information, including a list of resource-intensive SQL statements, and saves the snapshots in the AWR. The AWR data includes the execution plans.

The V$SQL_PLAN view contains information about the SQL statement being executed and its execution plan, which is still in the shared pool.

The AUTOTRACE commands available in SQL*Plus generate PLAN_TABLE output and statistics about the performance of the query. This command provides many of the same statistics as the SQL trace, such as disk reads and memory reads.

You can use the DBMS_XPLAN package methods to display the execution plans generated by the EXPLAIN PLAN command, by V$SQL_PLAN queries, and by AWR.

Using execution plans

Execution plans are viewed to:

Determine the current execution plan

Determine the effect of creating an index on a table

Find cursors that contain a specific access path (for example, full table scan or index range scan)

Determine which indexes are selected or not selected by the optimizer

Determine whether the optimizer selects a specific execution plan that developers expect (for example, nested loop joins)

You can use the execution plan to make the following decisions:

Delete or create an index

Generate statistics for database objects

Modify initialization parameter values

Migrate an application or database to a new version

By default, execution plans are not retained once the SQL ages out of the shared pool and is released. If previously used plans are retained in a user-defined table, or loaded as baseline plans, it is possible to correlate changes in the performance of a SQL statement with changes in its execution plan.

DBMS_XPLAN package: Overview

The DBMS_XPLAN package provides an easy way to display the output of EXPLAIN PLAN commands in several predefined formats. You can also use the DBMS_XPLAN package to display plans for statements stored in AWR. In addition, it provides a method to display SQL execution plan and SQL execution runtime statistics for cached SQL cursors based on information stored in V$SQL_PLAN fixed views and V$SQL_PLAN_STATISTICS_ALL fixed views.

The DBMS_XPLAN package provides three table functions that can be used to retrieve and display execution plans:

DISPLAY formats and displays the plan stored in PLAN_TABLE.

DISPLAY_AWR formats and displays the execution plan of a SQL statement stored in AWR.

DISPLAY_CURSOR formats and displays the execution plan of any cursor loaded in the V$SQL_PLAN view.

DBMS_XPLAN package: Overview (Cont.)

The methods in this package take a FORMAT parameter that lets you specify the level of detail of the displayed plan:

BASIC: displays minimal plan information (operation ID, object name, and operation option)

TYPICAL: the default. Displays the most relevant information in the plan. Partition pruning, parallelism, and predicate information are shown only when applicable.

ALL: maximum level. Includes the information displayed at the TYPICAL level and adds projection information and, for parallel plans only, the SQL statements generated for the parallel execution servers.

SERIAL: similar to TYPICAL, but does not display parallel information, even if the plan executes in parallel.

This package runs with the permissions of the calling user, not with the permissions of the package owner (SYS). The DISPLAY_CURSOR table function requires SELECT permissions on the following fixed views: V$SQL_PLAN, V$SESSION, and V$SQL_PLAN_STATISTICS_ALL. Using the DISPLAY_AWR function requires SELECT permissions on DBA_HIST_SQL_PLAN, DBA_HIST_SQLTEXT, and V$DATABASE. All of these permissions are automatically granted as part of SELECT_CATALOG_ROLE. However, it is recommended that you do not grant this role indiscriminately, as this can lead to security issues.
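Instead of granting SELECT_CATALOG_ROLE, a DBA can grant the individual privileges. Note that the grants are made on the underlying V_$ objects, for which the V$ names are synonyms (the user name scott is hypothetical):

SQL> GRANT SELECT ON v_$sql_plan TO scott;
SQL> GRANT SELECT ON v_$session TO scott;
SQL> GRANT SELECT ON v_$sql_plan_statistics_all TO scott;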

The DISPLAY_CURSOR and DISPLAY_AWR functions accept a SQL_ID as an argument (as shown in the examples later in this course). The SQL_ID of a statement can be obtained by querying V$SQL or DBA_HIST_SQLTEXT.

EXPLAIN PLAN command

The EXPLAIN PLAN command generates the execution plan that the optimizer would use to execute a SQL statement. It does not execute the statement; it simply generates the plan that might be used and inserts it into a table. By looking at the plan, you can see how the Oracle server would execute the statement.

To use EXPLAIN PLAN, you must:

First, explain the SQL statement with the EXPLAIN PLAN command

Use the methods in the DBMS_XPLAN package to retrieve the plan steps

PLAN_TABLE is created automatically as a global temporary table to save the output of EXPLAIN PLAN statements for all users. PLAN_TABLE is the default sample output table in which the EXPLAIN PLAN statement inserts rows that describe the execution plan.

Note: EXPLAIN PLAN may generate a plan that is different from the plan actually used by the optimizer for the following reasons:

The EXPLAIN PLAN command cannot access the binding variable.

SQL*Plus sessions may have different environments due to login triggers or session parameter settings.

V$SQL_PLAN shows the plan that was actually used.

EXPLAIN PLAN command: example

This command inserts the execution plan of the SQL statement into the plan table and tags it with the name demo01 for later reference. The tag is optional. You can also use the following syntax:

EXPLAIN PLAN
FOR
SELECT e.last_name, d.department_name
FROM hr.employees e, hr.departments d
WHERE e.department_id = d.department_id;

EXPLAIN PLAN command: output

The DISPLAY function of the DBMS_XPLAN package can be used to format and display the last statement stored in the plan table.

The slide shows the information in the PLAN table for the example on the previous slide, retrieved with the DBMS_XPLAN package.

You can also use the syntax shown below to retrieve the PLAN table.

SELECT plan_table_output
FROM TABLE (dbms_xplan.display ('plan_table', 'demo01', 'serial'));

The output is the same as that shown in the slide. In this example, PLAN_TABLE is the plan table name and 'demo01' is the statement ID; another plan table name could be substituted.

You can run the utlxpls.sql script (located in the ORACLE_HOME/rdbms/admin/ directory) to display the EXPLAIN PLAN of the last statement explained. This script uses the DISPLAY table function of the DBMS_XPLAN package.

Reading execution plans

You can construct an execution tree (or "parse tree") from the execution plan to understand more clearly how the statement is processed. To construct the tree, start with step 1; then find all the substeps of step 1 and draw them as children below it. Repeat this process for each step until all the steps are drawn. Oracle DB assigns each step in the execution plan a number, recorded in the ID column of PLAN_TABLE. Each step is represented by a "node". The result of each node's operation is passed to its parent node, which uses it as input.

The order of the steps is determined by their parent-child relationships. Each step of the execution plan either retrieves rows from the database or accepts rows from one or more other steps (known as "row sources") as input. Each child step is executed at least once, and its results are fed to its parent step. When a parent step has multiple children, the children are executed in the order in which they appear. If the substeps at each level are arranged from left to right, the plan can be read left to right and bottom to top.

In the figure, the numbers correspond to the ID values in the PLAN table (see the previous slide). The optimizer retrieves rows from the DEPARTMENTS table with an index scan, performing a FULL INDEX SCAN on the primary key column. It then performs a FULL TABLE SCAN of the EMPLOYEES table followed by a SORT operation. Finally, the two result sets are merged (MERGE JOIN) to produce the final result of the query.

Using V$SQL_PLAN

This view provides a way to examine the execution plans of recently executed cursors. The information in this view is very similar to the output of PLAN_TABLE. However, EXPLAIN PLAN shows the theoretical plan that could be used if the statement were executed, whereas V$SQL_PLAN contains the plan that was actually used. Because of bind variable peeking, the CURSOR_SHARING parameter setting, and so on, the plan obtained with EXPLAIN PLAN can differ from the actual execution plan.

V$SQL_PLAN shows the plan of a specific cursor. Each SQL statement may have several associated cursors, each identified by a CHILD_NUMBER. For example, the same statement executed by different users has different associated cursors if the referenced objects are in different schemas. Different optimizer settings or different bind variable values can also result in different child cursors. V$SQL_PLAN can be used to view the different plans of the different child cursors of the same statement.

Note: another useful view is V$SQL_PLAN_STATISTICS, which provides execution statistics for each operation in the execution plan of each cached cursor. The V$SQL_PLAN_STATISTICS_ALL view combines the information in V$SQL_PLAN with the execution statistics in V$SQL_PLAN_STATISTICS and V$SQL_WORKAREA.

V$SQL_PLAN columns

Almost all the columns of the V$SQL_PLAN view also appear in PLAN_TABLE. Columns with the same name have the same meaning in both views.

The ADDRESS and HASH_VALUE columns can be used to join V$SQLAREA to add cursor-specific information.

The ADDRESS, HASH_VALUE, and CHILD_NUMBER columns can be used to join V$SQL to add information specific to child cursors.

Querying V$SQL_PLAN

You can query V$SQL_PLAN with the DBMS_XPLAN.DISPLAY_CURSOR() function to show the current or last executed statement (as in the example). For a given statement, you can pass its SQL_ID value as a parameter to get the execution plan. To get the SQL_ID:

SELECT e.last_name, d.department_name
FROM hr.employees e, hr.departments d
WHERE e.department_id = d.department_id;

SELECT sql_id, sql_text FROM v$sql
WHERE sql_text LIKE '%SELECT e.last_name%';

13saxr0mmz1s3 select sql_id, sql_text from v$sql...
cfz0cdukrfdnu SELECT e.last_name, d.department_name...
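With the SQL_ID known, the plan can then be displayed (using the SQL_ID listed above):

SQL> SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR ('cfz0cdukrfdnu'));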

The FORMAT parameter controls the level of detail of the plan. In addition to the standard values (BASIC, TYPICAL, SERIAL, and ALL), there are two supported values that display run-time statistics for cursors.

RUNSTATS_LAST: displays run-time statistics for the last execution of the cursor

RUNSTATS_TOT: displays run-time statistics (from the V$SQL_PLAN_STATISTICS view) for all executions of the SQL statement since it was first parsed and executed

The V$SQL_PLAN_STATISTICS view provides actual execution statistics for each operation in the plan, such as the number of rows produced and the elapsed time. All statistics except the number of output rows are cumulative; for example, the statistics for a join operation include the statistics for its two inputs. The statistics in V$SQL_PLAN_STATISTICS are available only for cursors that were compiled with the STATISTICS_LEVEL initialization parameter set to ALL.
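To try this, a session can be set up to collect the statistics and then display them (a sketch; the query is illustrative):

SQL> ALTER SESSION SET statistics_level = ALL;
SQL> SELECT /* stats demo */ COUNT(*) FROM hr.employees;
SQL> SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR (NULL, NULL, 'RUNSTATS_LAST'));

Passing NULL for the SQL_ID and child number makes DISPLAY_CURSOR show the last statement executed by the session.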

The V$SQL_PLAN_STATISTICS_ALL view also contains memory usage statistics for row sources that use SQL memory (sorts or hash joins). This view combines the information in V$SQL_PLAN with the execution statistics in V$SQL_PLAN_STATISTICS and V$SQL_WORKAREA.

Querying AWR

You can use the DBMS_XPLAN.DISPLAY_AWR() function to display any plan stored in AWR. This example passes a SQL_ID as the parameter. The steps for this example are as follows:

1. Execute the SQL statement.

SQL> select /* example */ *
  2  from hr.employees natural join hr.departments;

2. Query V$SQL for the SQL_ID.

SQL> select sql_id, sql_text from v$sql
  2  where sql_text like '%example%';

SQL_ID        SQL_TEXT
f8tc4anpz5cdb select sql_id, sql_text from v$sql...
454rug2yva18w select /* example */ * from...

3. Use the SQL_ID to confirm that the DBA_HIST_SQLTEXT dictionary view has captured the statement. If the query returns no rows, the statement has not yet been loaded into AWR.

Querying AWR (continued)

SQL> SELECT sql_id, sql_text FROM dba_hist_sqltext
  2  WHERE sql_id = '454rug2yva18w';

no rows selected

Instead of waiting for the next snapshot (taken hourly by default), you can take an AWR snapshot manually. The SQL may not have been captured if it falls outside the topnsql range; in that case you can use the MODIFY_SNAPSHOT_SETTINGS procedure to widen the topnsql setting so that all SQL statements are captured. Then check DBA_HIST_SQLTEXT again to see whether the statement has been captured:

SQL > exec-

2 > DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS (-

3 > topnsql = > 'MAXIMUM')

PL/SQL procedure successfully completed.

SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

PL/SQL procedure successfully completed.

SQL> exec -
  2  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS( -
  3  topnsql => 'DEFAULT');

PL/SQL procedure successfully completed.

SQL> SELECT SQL_ID, SQL_TEXT FROM dba_hist_sqltext WHERE SQL_ID = '454rug2yva18w';

SQL_ID        SQL_TEXT
------------- ----------------------------------------
454rug2yva18w select /* example */ * from...

4. Use the DBMS_XPLAN.DISPLAY_AWR() function to retrieve the execution plan:

SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('454rug2yva18w'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID 454rug2yva18w
--------------------
select /* example */ * from hr.employees natural join hr.departments

Plan hash value: 2052257371

---------------------------------------------------------------------------
| Id | Operation          | Name        | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|  0 | SELECT STATEMENT   |             |      |       |    7 (100)|          |
|  1 |  HASH JOIN         |             |   11 |   968 |    7  (15)| 00:00:01 |
|  2 |   TABLE ACCESS FULL| DEPARTMENTS |   11 |   220 |    3   (0)| 00:00:01 |
|  3 |   TABLE ACCESS FULL| EMPLOYEES   |  107 |  7276 |    3   (0)| 00:00:01 |
---------------------------------------------------------------------------

SQL*Plus AUTOTRACE

In SQL*Plus, you can automatically obtain the execution plan and some additional statistics for a SQL command by using the AUTOTRACE setting. Unlike EXPLAIN PLAN, AUTOTRACE actually runs the statement. However, you can suppress the statement's result set by specifying AUTOTRACE TRACEONLY EXPLAIN.

AUTOTRACE is a convenient diagnostic tool for optimizing SQL statements. Because it is purely declarative, it is easier to use than EXPLAIN PLAN.

Command options

OFF disables automatic tracing of SQL statements

ON enables automatic tracing of SQL statements

TRACEONLY enables automatic tracing of SQL statements and suppresses statement output

EXPLAIN displays the execution plan but not the statistics

STATISTICS displays the statistics but not the execution plan

Note: if you omit both the EXPLAIN and STATISTICS command options, the execution plan and statistics are displayed by default.

Use SQL*Plus AUTOTRACE

Prerequisites

To access STATISTICS data, you must have access to several dynamic performance views. The DBA can grant access by using the PLUSTRACE role, which is created by the plustrce.sql script. The DBA must run the script as SYS; anyone with the DBA role can then grant the PLUSTRACE role to users who want to use the STATISTICS option of AUTOTRACE.
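As a sketch of that grant sequence (the script path uses the SQL*Plus `?` shorthand for ORACLE_HOME, and the grantee HR is illustrative):

```sql
-- As SYS: create the PLUSTRACE role
CONNECT / AS SYSDBA
@?/sqlplus/admin/plustrce.sql

-- Grant the role to a user who will use AUTOTRACE STATISTICS
GRANT plustrace TO hr;
```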

Example

The slide shows an example of the AUTOTRACE command.
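Since the slide itself is not reproduced here, a minimal illustrative session might look like this (the table and predicate are arbitrary):

```sql
-- Show the plan only; do not display the result set
SET AUTOTRACE TRACEONLY EXPLAIN
SELECT * FROM hr.employees WHERE department_id = 50;

-- Run the statement and show results plus statistics
SET AUTOTRACE ON STATISTICS
SELECT COUNT(*) FROM hr.employees;

-- Turn tracing back off
SET AUTOTRACE OFF
```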

Control the layout of AUTOTRACE execution plan

The execution plan consists of four columns displayed in the following order:

ID_PLUS_EXP shows the line number of each step

PARENT_ID_PLUS_EXP shows the line number of the parent step

PLAN_PLUS_EXP shows the step of the report

OBJECT_NODE_PLUS_EXP shows the database links or parallel query servers used

You can change the format of these columns, or hide them, by using the SQL*Plus COLUMN command. For more information, see "Oracle SQL*Plus User's Guide and Reference".

SQL*Plus AUTOTRACE: statistics

AUTOTRACE will display a variety of statistics, not all of which are relevant to this stage of the discussion. The most important statistics include the following:

db block gets: the number of logical I/Os for current mode gets

consistent gets: the number of buffer cache blocks read in consistent mode

physical reads: the number of blocks read from disk

redo size: the amount of redo generated (for DML statements)

sorts (memory): the number of sorts performed in memory

sorts (disk): the number of sorts performed using temporary disk storage

Note: db block gets is the number of current mode reads of blocks in the buffer cache. Consistent gets is the number of reads of buffer cache blocks presenting read-consistent data. Physical reads are blocks read from disk. Three statistics are typically monitored: db block gets, consistent gets, and physical reads. These values should be low compared to the number of rows retrieved, and sorts should be performed in memory rather than on disk.

SQL Trace facility

If you are using Standard Edition, or if you do not have the Diagnostics Pack, use the SQL Trace facility and TKPROF to collect execution statistics and compare SQL performance. A better way to compare two execution plans is to execute the statements and compare their statistics to determine which is better. SQL Trace writes its session statistics to a file that you can format with TKPROF. You can use these tools together with EXPLAIN PLAN for best results.

The SQL Trace facility:

Can be enabled for a session or for the instance

Reports volume and time statistics for the parse, execute, and fetch phases

Produces output that can be formatted with TKPROF

When the SQL Trace facility is enabled for a session, Oracle DB generates a trace file containing session statistics for the traced SQL statements of that session. When the SQL Trace facility is enabled for an instance, Oracle DB creates trace files for all sessions.

Note: SQL tracing involves some overhead, so you usually do not want to enable it at the instance level.

SQL Trace facility (continued)

The SQL Trace facility provides performance information about individual SQL statements, including row source information:

Parse, execute, and fetch counts

CPU time and elapsed time

Physical reads and logical reads

Number of rows processed

Library cache misses

The user name under which each parse occurred

Each commit and rollback

Row operations showing the actual execution plan of each SQL statement

The number of rows, consistent reads, physical reads, physical writes, and elapsed time for each row operation

Note: use the TKPROF utility to get a formatted summary of each trace file.

Using the SQL Trace facility

You must complete the following steps to use SQL tracing:

1. Set the appropriate initialization parameters.

2. Enable SQL tracing.

3. Run the application (and disable tracing when it completes).

4. Disable SQL tracing.

5. Close the session (closing the session also disables session-level tracing).

6. Format the trace file generated by SQL Trace with tkprof.

7. Interpret the output and optimize the SQL statements as needed.

Running a SQL trace adds overhead, so use SQL tracing only when needed, and at the session level rather than the instance level.
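The steps above might look like this in a dedicated-server session (the traced statement and the trace file name are placeholders):

```sql
-- 1-2. Set parameters and enable tracing for this session
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;

-- 3. Run the application workload
SELECT /* workload */ COUNT(*) FROM hr.employees;

-- 4-5. Disable tracing (or simply end the session)
ALTER SESSION SET sql_trace = FALSE;
```

Then format the resulting trace file at the operating system prompt, for example `tkprof <tracefile>.trc report.txt`, and interpret the output.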

Note: this example assumes a dedicated server. In a shared server environment, with XA, or with application-level connection sharing, multiple server processes may serve a single session. You need to trace all the servers involved, combine the trace files by using the trcsess utility, and then submit the result to tkprof for formatting. For more information about trcsess, see the Application Monitoring course.

Initialization parameters

There are several initialization parameters related to SQL tracing.

STATISTICS_LEVEL

The STATISTICS_LEVEL initialization parameter controls all major statistics collection and advisories in the database. This parameter sets the statistics collection level for the database. Certain advisories and statistics are collected depending on the setting of STATISTICS_LEVEL.

BASIC: no advisories or statistics are collected. Monitoring and many automatic features are disabled. Oracle recommends that you do not use this setting, because it disables important Oracle features.

TYPICAL: this is the default value and ensures that all major statistics are collected while providing the best overall database performance. This setting should be sufficient for most environments. TYPICAL causes TIMED_STATISTICS to be enabled.

ALL: includes all the advisories and statistics collected with the TYPICAL setting, plus timed operating system statistics and row source execution statistics.

The V$STATISTICS_LEVEL view lists the status of the statistics and advisories controlled by STATISTICS_LEVEL.

Initialization parameters (continued)

TIMED_STATISTICS

The SQL Trace facility provides a variety of information about SQL execution in a process and can optionally include timing information. If timing information is required, this parameter must be set to TRUE. The STATISTICS_LEVEL parameter sets it automatically, but TIMED_STATISTICS can also be set independently of STATISTICS_LEVEL, in the parameter file or as follows:

SQL> ALTER SYSTEM SET TIMED_STATISTICS = TRUE;

For a specific session, you can also set this parameter dynamically with the following command:

SQL> ALTER SESSION SET timed_statistics = TRUE;

Timing statistics are measured in microseconds.

MAX_DUMP_FILE_SIZE

If the SQL Trace facility is enabled at the instance level, every call to the server produces lines of text in an operating system file. The maximum size of these files (in operating system blocks) is limited by this initialization parameter. It is a dynamic parameter and can also be set at the session level.

Warning: the default value is UNLIMITED, so these trace files may grow to fill the file system.
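For example, to cap trace output for the current session (the 100M value is illustrative):

```sql
-- Limit session trace files to roughly 100 MB
ALTER SESSION SET max_dump_file_size = '100M';
```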

DIAGNOSTIC_DEST is the root directory of the Automatic Diagnostic Repository. The default value of this directory is derived from the ORACLE_BASE environment variable. The files generated when you enable tracing are placed in a subdirectory of this repository: <DIAGNOSTIC_DEST>/diag/rdbms/<dbname>/<instname>/trace.

Get information about parameter settings

You can query the V$PARAMETER view to display the current parameter values:

SQL> SELECT name, value
  2  FROM v$parameter
  3  WHERE name LIKE '%dest%';

Alternatively, use the following:

SQL> SHOW PARAMETER dest

Enable SQL tracing for a session

You can use the command shown to enable SQL tracing for the session. These procedure calls are useful when you want to enable or disable SQL tracing from within the PL/SQL unit.

The DBA can also use a supplied package to enable SQL tracing in another user's session:

SQL> EXECUTE dbms_system.set_sql_trace_in_session -
  2  (session_id, serial_id, TRUE);

In this procedure call, session_id and serial_id are the values of the SID and SERIAL# columns of V$SESSION, the data dictionary view commonly used by database administrators.
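For example (the username, SID, and SERIAL# values are hypothetical):

```sql
-- Find the target session
SELECT sid, serial# FROM v$session WHERE username = 'HR';

-- Enable tracing in that session
EXECUTE dbms_system.set_sql_trace_in_session(127, 4321, TRUE);
```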

To enable SQL tracing for the entire instance, use the DATABASE_TRACE_ENABLE procedure in the DBMS_MONITOR package.

Warning: instance-wide tracing will produce a large number of trace files and affect performance.

Note: before you can use the DBMS_MONITOR package, you must be granted EXECUTE permission on it.
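With DBMS_MONITOR, a single session can also be traced with wait and bind information (the SID and SERIAL# values are hypothetical):

```sql
-- Trace the session, including wait events but not bind values
EXEC DBMS_MONITOR.session_trace_enable(session_id => 127, serial_num => 4321, waits => TRUE, binds => FALSE);

-- ... run the workload, then stop tracing:
EXEC DBMS_MONITOR.session_trace_disable(session_id => 127, serial_num => 4321);
```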

By using the 10046 event, wait event information can be written to the trace file for the session. For more information about this event, see MetaLink Note 171647.1, "Tracing Oracle Applications". To capture wait event information, run the following SQL statement:

ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

Disable SQL tracing for a session

Once optimization is complete, disable SQL tracing by using one of the previous methods, replacing TRUE with FALSE or enable with disable. If SQL tracing is enabled for a single session, exiting the session also disables it.

Format trace file

Use the TKPROF command to format the trace file into readable output. The TKPROF syntax is as follows:

OS> tkprof tracefile outputfile [options]

tracefile: the name of the trace output file (the input to TKPROF)

outputfile: the name of the file that stores the formatted results

If you execute the TKPROF command without any arguments, it prints a usage message that describes all the TKPROF options:

Usage: tkprof tracefile outputfile [explain=] [table=]
[print=] [insert=] [sys=] [sort=]
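A typical invocation might look like this (the trace file name is hypothetical):

```shell
# Suppress SYS recursive statements, sort by fetch elapsed time,
# and report only the top ten statements
tkprof orcl_ora_12345_MY_FILE.trc report.txt sys=no sort=fchela print=10
```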

By default, the .trc file is named after the SPID, which can be found in V$PROCESS. Here is an easier way to find the file:

SQL> ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_FILE';

The name of the trace file will then contain the string "MY_FILE".

TKPROF command options

The options shown in bold are the most commonly used:

INSERT creates a SQL script that loads the TKPROF results into a database table

SORT determines the order in which statements are sorted in the report (see the list of values on the next page)

PRINT generates a report for only the first so many (sorted) statements (this option is especially useful in combination with the SORT option)

EXPLAIN logs in with the specified schema and executes EXPLAIN PLAN

SYS disables the listing of recursive SQL statements executed by user SYS

AGGREGATE disables or enables the (default) behavior of TKPROF of aggregating identical SQL statements into a single record

WAITS specifies whether to record a summary of any wait events found in the trace file

TABLE specifies the table in which execution plans are temporarily stored before being written to the output file (this parameter is ignored if EXPLAIN is not specified; using different tables avoids destructive interference when several people use TKPROF against the same schema at the same time)

RECORD creates a SQL script containing all the nonrecursive SQL statements found in the trace file (this script can be used to replay the tuning session later)

TKPROF command options (continued)

Sort options

prscnt  number of times parse was called
prscpu  CPU time spent parsing
prsela  elapsed time spent parsing
prsdsk  number of disk reads during parse
prsqry  number of buffers read consistently during parse
prscu   number of buffers read in current mode during parse
prsmis  number of library cache misses during parse
execnt  number of execute calls
execpu  CPU time spent executing
exeela  elapsed time spent executing
exedsk  number of disk reads during execute
exeqry  number of buffers read consistently during execute
execu   number of buffers read in current mode during execute
exerow  number of rows processed during execute
exemis  number of library cache misses during execute
fchcnt  number of fetch calls
fchcpu  CPU time spent fetching
fchela  elapsed time spent fetching
fchdsk  number of disk reads during fetch
fchqry  number of buffers read consistently during fetch
fchcu   number of buffers read in current mode during fetch
fchrow  number of rows fetched
userid  ID of the user who parsed the cursor

Output of the TKPROF command

The TKPROF output lists the statistics for a SQL statement by SQL processing step. The step that each row of statistics belongs to is identified by the value of the CALL column.

Parse: this step translates the SQL statement into an execution plan, and includes checks for proper security authorization and for the existence of tables, columns, and other referenced objects.

Execute: in this step, the Oracle server actually executes the statement. For INSERT, UPDATE, and DELETE statements, this step modifies the data (including sorts, if needed). For SELECT statements, this step identifies the selected rows.

Fetch: this step retrieves the rows returned by a query and sorts them if needed. Fetches are performed only for SELECT statements.

Note: the parse figures include both hard parses and soft parses. A hard parse builds the execution plan (including optimization), which is then stored in the library cache. A soft parse means that when a SQL statement is sent for parsing, the database finds it already in the library cache and only needs to verify things such as access privileges. A hard parse can be expensive, particularly because of the optimization step; the cost of a soft parse lies mostly in library cache activity.

Output of the TKPROF command (continued)

The output will be explained on the next page.

The sample output is as follows:

SQL ID: 6assxhyzbq5jf

select max(cust_credit_limit)
from customers where cust_city = 'Paris'

call     count       cpu    elapsed       disk      query    current       rows
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
Parse        1      0.00       0.02          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.01       0.26       1455       1457          0          1
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
total        4      0.02       0.28       1455       1457          0          1

Output of TKPROF command (continued)

TKPROF displays the following statistics for each statement next to the CALL column:

Count: the number of times the statement was parsed, executed, or fetched. (Check that this value is greater than 1 before interpreting the statistics in the other columns: unless you use the AGGREGATE=NO option, TKPROF aggregates identical statement executions into one summary row.)

CPU: total CPU time in seconds for all parse, execute, or fetch calls

Elapsed: total elapsed time in seconds for all parse, execute, or fetch calls

Disk: total number of data blocks physically read from the data files on disk for all parse, execute, or fetch calls

Query: total number of buffers retrieved in consistent mode for all parse, execute, or fetch calls (buffers are usually retrieved in consistent mode for queries)

Current: total number of buffers retrieved in current mode (buffers are usually retrieved in current mode for DML statements; however, segment header blocks are always retrieved in current mode)

Rows: total number of rows processed by the SQL statement, excluding rows processed by its subqueries (for SELECT statements, the rows appear in the fetch step; for UPDATE, DELETE, and INSERT statements, they appear in the execute step)

Notes

DISK is equivalent to physical reads in v$sysstat or AUTOTRACE.

QUERY is equivalent to consistent gets in v$sysstat or AUTOTRACE.

CURRENT is equivalent to db block gets in v$sysstat or AUTOTRACE.

Recursive calls

To execute a SQL statement issued by a user, the Oracle server must occasionally execute additional statements, called recursive SQL statements. For example, if you insert a row into a table and there is not enough space in the tablespace to hold the row, the Oracle server makes recursive calls to allocate space dynamically. Recursive calls are also generated when data dictionary information is not in the data dictionary cache and must be retrieved from disk.

If recursive calls occur while the SQL Trace facility is enabled, TKPROF clearly marks the recursive SQL statements in the output file. You can suppress the listing of recursive calls in the output file by setting the SYS=NO command-line parameter. Note that the statistics for a recursive SQL statement are included in the listing for that recursive statement, not in the listing of the SQL statement that caused the recursive call.

Library cache misses

TKPROF also lists the number of library cache misses resulting from the parse and execute steps of each SQL statement. These statistics appear on a separate row following the tabular statistics.

Recursive calls (continued)

Row source operation

This information provides the number of rows processed by each row operation, together with other row source information: cr = consistent reads, pw = physical writes, pr = physical reads, time = elapsed time (in microseconds), cost = cost estimate, size = estimated row source bytes, card = estimated cardinality (rows).

Parsing user ID

This is the ID of the user who last parsed the statement.

Row source operation

The row source operations show the data sources used to execute the SQL statement. This information is included only if the cursor was closed while tracing was enabled. If the row source operations do not appear in the trace file, you may want to look at the EXPLAIN PLAN output instead.

Execution plan

If you specify the EXPLAIN parameter on the TKPROF command line, TKPROF uses the EXPLAIN PLAN command to generate an execution plan for each traced SQL statement. TKPROF also displays the number of rows processed by each step of the execution plan.

Note: the execution plan is generated when the TKPROF command is run, not when the trace file was produced. If an index has been created or dropped since the statement was traced, the results may differ significantly.

Optimizer mode or hint

This indicates the optimizer hint used during the execution of the statement. If there is no hint, the optimizer mode that was used is shown.

...

select max(cust_credit_limit)
from customers where cust_city = 'Paris'

call     count       cpu    elapsed       disk      query    current       rows
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.00       0.00         77         77          0          1
-------  -----  --------  ---------  ---------  ---------  ---------  ---------
total        4      0.01       0.01         77         77          0          1

Misses in library cache during parse: 1

Optimizer mode: ALL_ROWS

Parsing user id: 85 (SH)

Rows  Row Source Operation
----  ---------------------------------------------------
   1  SORT AGGREGATE (cr=77 pr=77 pw=77 time=0 us)
  77   TABLE ACCESS BY INDEX ROWID CUSTOMERS (cr=77 pr=77 pw=77 time=555 us
       cost=85 size=1260 card=90)
  77    INDEX RANGE SCAN CUST_CUST_CITY_IDX (cr=2 pr=2 pw=2 time=1 us cost=1
        size=0 card=90) (object id 75264)

TKPROF output without an index: example

The example in the slide shows a result set (rows) being fetched from the CUSTOMERS table over multiple executions. The fetch requires 0.12 seconds of CPU time. The statement is executed with a full scan of the CUSTOMERS table, as the row source operations in the output show.

The statement must be optimized.

Note: if the CPU or elapsed values are 0, timed_statistics is not set.

TKPROF output with an index: example

The results shown in the slide indicate that with an index created on the CUST_CITY column, the CPU time is reduced to 0.01 seconds. This improvement occurs because the statement uses the index to retrieve the data. In addition, because this example executes the same statement again, most of the data blocks are already in memory. Performance can be improved significantly by the judicious use of indexes. Use the SQL Trace facility to identify areas with potential for improvement.

Note: do not build indexes unless necessary. Indexes slow down the processing of INSERT, UPDATE, and DELETE commands, because the references to rows must be added, changed, or deleted. Unused indexes should be dropped. You can use the index monitoring feature to identify unused indexes, or use the SQL Access Advisor to determine unused indexes without having to process all the application SQL through EXPLAIN PLAN.

Generate optimizer trace

You can make the optimizer trace its cost decisions by setting an event. This method is occasionally used to provide Oracle Support with additional information about the behavior of the optimizer:

ALTER SESSION SET EVENTS
'10053 trace name context forever, level 1';

The optimizer trace file is written to the same location as other trace files. You can modify its name by using the following command:

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'opt';

The trace file does not need to be formatted, but it can be quite large, so be sure to increase the allowed trace file size in the session by using the following command:

ALTER SESSION SET MAX_DUMP_FILE_SIZE = UNLIMITED;

Stop tracing in the session by exiting the session or by using the following command:

ALTER SESSION SET EVENTS
'10053 trace name context off';
