Shulou (shulou.com), SLTechnology News & Howtos — 05/31 report, updated 2025-01-19.
In this issue, the editor brings you a guide to reading Oracle execution plans. The article is rich in content and approaches the topic from a professional point of view; I hope you get something out of it.
When analyzing the performance of a SQL statement, we usually start by looking at its execution plan to see whether any step is problematic. If a statement normally runs well but its performance suddenly degrades, and you have ruled out system resources and blocking as causes, you can generally conclude that something is wrong with the execution plan.
Understanding the execution plan is therefore a prerequisite for SQL optimization. SQL optimization here means locating the cause of a SQL performance problem; once located, it can be fixed.
I. Three ways to view the execution plan
1.1 Set AUTOTRACE
No.  Command                        Description
1    SET AUTOTRACE OFF              The default; turns Autotrace off
2    SET AUTOTRACE ON EXPLAIN       Shows only the execution plan
3    SET AUTOTRACE ON STATISTICS    Shows only the execution statistics
4    SET AUTOTRACE ON               Shows both 2 and 3
5    SET AUTOTRACE TRACEONLY        Like ON, but suppresses the statement's result set
SQL > set autotrace on
SQL > select * from dave;
ID NAME
--
8 Anqing
1 dave
2 bl
1 bl
2 dave
3 dba
4 sf-express
5 dmm
8 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3458767806

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     8 |    64 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DAVE |     8 |    64 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------
Statistics
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
609 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
SQL >
1.2 Using EXPLAIN PLAN
SQL > EXPLAIN PLAN FOR <SQL statement>;
SQL > SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE'));
Example:
SQL > EXPLAIN PLAN FOR SELECT * FROM DAVE;
Explained.
SQL > SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE'));
Or:
SQL > select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------
Plan hash value: 3458767806

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     8 |    64 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DAVE |     8 |    64 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

8 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2137789089

---------------------------------------------------------------------------------------------
| Id  | Operation                         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |         |  8168 | 16336 |    29   (0)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| DISPLAY |  8168 | 16336 |    29   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
Statistics
25 recursive calls
12 db block gets
168 consistent gets
0 physical reads
0 redo size
974 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
8 rows processed
SQL >
1.3 Using tools such as Toad or PL/SQL Developer
II. Cardinality / Rows
The cardinality value is the number of records the CBO expects a row source to return; the row source may be a table, an index, or a subquery. In Oracle 9i execution plans, Cardinality is abbreviated Card; in 10g the Card value was replaced by the Rows column.
This is a 9i execution plan, in which we can see the Card keyword:
Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=402)
1    0   TABLE ACCESS (FULL) OF 'TBILLLOG8' (Cost=2 Card=1 Bytes=402)
In Oracle 10g execution plans, the keyword is Rows instead:
Execution Plan
----------------------------------------------------------
Plan hash value: 2137789089

---------------------------------------------------------------------------------------------
| Id  | Operation                         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |         |  8168 | 16336 |    29   (0)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| DISPLAY |  8168 | 16336 |    29   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
The accuracy of the Cardinality value is critical to the CBO's choice of execution plan. If the Cardinality the CBO obtains is inaccurate (usually because the table has never been analyzed or its statistics are stale), the cost calculation will be skewed, and the CBO may produce the wrong plan.
In a multi-table join, or when a SQL statement contains subqueries, the Cardinality of each joined table or subquery strongly influences the main query; one could even say the CBO relies on those Cardinality values to work out the final plan.
For multi-table queries, the CBO uses the number of rows (Cardinality) each joined table returns to decide which join method to use (such as a nested loops join or a hash join).
A detailed discussion of the three multi-table join methods (hash join, merge join, nested loops):
http://blog.csdn.net/tianlesoftware/archive/2010/08/20/5826546.aspx
For a subquery, its Cardinality determines whether the subquery uses an index or a full table scan to access the data.
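Since stale or missing statistics are the usual cause of bad Cardinality estimates, re-gathering optimizer statistics is the standard remedy. A minimal sketch, using the DAVE table from the examples above (the schema reference and option values are illustrative, not a prescription):

```sql
-- Hedged sketch: refresh optimizer statistics so the CBO's cardinality
-- estimates stay accurate. Option values are common defaults.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,                          -- current schema
    tabname          => 'DAVE',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,   -- let Oracle pick the sample size
    cascade          => TRUE);                         -- also gather index statistics
END;
/
```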
III. The SQL execution plan
Generating the execution plan is a crucial step in Oracle's hard parse of a SQL statement. The plan tells Oracle how to access the data when executing the statement: index or full table scan, hash join or nested loops join, and so on. If, for example, accessing the data through an index would be cheapest for a given statement, but the CBO produces a plan that uses a full table scan, then the statement's performance is bound to be poor.
Hard and soft parsing of Oracle SQL:
http://blog.csdn.net/tianlesoftware/archive/2010/04/08/5458896.aspx
Example:
SQL > SET AUTOTRACE TRACEONLY;  -- show only the execution plan and statistics, not the result set
SQL > select * from scott.emp a, scott.emp b where a.empno=b.mgr;
13 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 992080948

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |    13 |   988 |     6  (17)| 00:00:01 |
|   1 |  MERGE JOIN                  |        |    13 |   988 |     6  (17)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| EMP    |    14 |   532 |     2   (0)| 00:00:01 |
|   3 |    INDEX FULL SCAN           | PK_EMP |    14 |       |     1   (0)| 00:00:01 |
|*  4 |   SORT JOIN                  |        |    13 |   494 |     4  (25)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL         | EMP    |    13 |   494 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("A"."EMPNO"="B"."MGR")
       filter("A"."EMPNO"="B"."MGR")
   5 - filter("B"."MGR" IS NOT NULL)
Statistics
0 recursive calls
0 db block gets
11 consistent gets
0 physical reads
0 redo size
2091 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
13 rows processed
SQL >
The figure (from the Toad tool) shows the execution plan with the execution order stated explicitly. In SQL*Plus it is not as obvious, but you can still work it out: in general, the most deeply indented operation executes first, and of two operations at the same indentation level, the upper one executes first.
3.1 Fields in the execution plan:
Id: a sequence number, not the order of execution; the execution order is determined by indentation.
Operation: the operation performed at this step.
Rows: the Cardinality; Oracle's estimate of the number of rows returned by the current operation.
Cost (%CPU): a value computed by Oracle representing the estimated cost of executing this step.
Time: Oracle's estimate of the time the current operation takes.
3.2 The predicates:
Predicate Information (identified by operation id):
   4 - access("A"."EMPNO"="B"."MGR")
       filter("A"."EMPNO"="B"."MGR")
   5 - filter("B"."MGR" IS NOT NULL)
Access: the predicate condition affects the access path of the data (table or index).
Filter: the predicate condition does not affect the access path; it only acts as a filter.
Within the predicates, pay most attention to access: consider the predicate's condition and whether the access path chosen for it is appropriate.
3.3 Statistics:
db block gets: the number of blocks read from the buffer cache in current mode
consistent gets: the number of blocks read from the buffer cache in consistent-read mode (including undo blocks used to build read-consistent versions)
physical reads: the number of blocks read from disk
redo size: the amount of redo generated by DML
sorts (memory): the number of sorts performed in memory
sorts (disk): the number of sorts performed on disk
Physical reads is usually what we care about most. A high value means many blocks are being requested from disk into the buffer cache, which usually indicates a large number of full-table-scan statements in the system; these hurt database performance, so full table scans should be avoided where possible. For such statements, consider adding suitable indexes and tuning the SQL.
The three values physical reads, db block gets and consistent gets are related by a formula:
buffer cache hit ratio = 1 - (physical reads / (db block gets + consistent gets))
You can view the underlying values with the following statement:
SQL > SELECT name, value FROM v$sysstat WHERE name IN ('db block gets', 'consistent gets', 'physical reads');
The buffer cache hit ratio should normally be above 90%; otherwise, consider increasing the size of the buffer cache.
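The formula can also be evaluated in a single query; a sketch against v$sysstat (assumes the usual SELECT privileges on the dynamic performance views):

```sql
-- Sketch: compute the buffer cache hit ratio directly, using the formula
-- 1 - physical reads / (db block gets + consistent gets).
SELECT 1 - phy.value / (cur.value + con.value) AS buffer_hit_ratio
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
```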
Recursive Calls: Number of recursive calls generated at both the user and system level.
Oracle Database maintains tables used for internal processing. When it needs to change these tables, Oracle Database generates an internal SQL statement, which in turn generates a recursive call. In short, recursive calls are basically SQL performed on behalf of your SQL. So, if you had to parse the query, for example, you might have had to run some other queries to get data dictionary information. These would be recursive calls. Space management, security checks, calling PL/SQL from SQL-all incur recursive SQL calls .
DB Block Gets: Number of times a CURRENT block was requested.
Current mode blocks are retrieved as they exist right now, not in a consistent-read fashion. Normally, blocks retrieved for a query are retrieved as they existed when the query began. During a SELECT, you might see current-mode retrievals when the data dictionary is read to find a table's extent information for a full scan (because the "right now" information is needed, not a consistent read). During a modification, you access blocks in current mode in order to write to them. (DB block gets: the number of requested blocks satisfied from the buffer cache in current mode.)
In other words, the current-mode block count is the number of blocks read exactly as they exist at this moment, not blocks produced for consistent reads. Normally a query reads blocks as they existed at the point in time when the query started; a current-mode block is the block as it exists right now, not at some point before or after.
Consistent Gets: Number of times a consistent read was requested for a block.
This is how many blocks you processed in "consistent read" mode. This will include counts of blocks read from the rollback segment in order to roll back a block. This is the mode you read blocks in with a SELECT, for example. Also, when you do a searched UPDATE/DELETE, you read the blocks in consistent-read mode and then get the block in current mode to actually do the modification. (Consistent gets: the total number of blocks read in consistent mode, including undo blocks from the rollback segments.)
The idea is: how many blocks had to be processed in the consistent-read state for your operation. These arise mainly because, while your query runs, other sessions modify the blocks you need to read; since your query began before those changes, it must fetch the before-images of those blocks from the rollback segments to guarantee consistent data. Hence the name consistent read.
Physical Reads: Total number of data blocks read from disk. This number equals the value of "physical reads direct" plus all reads into the buffer cache. (Physical reads: the number of blocks read from disk into the buffer cache since the instance started.)
This is the number of blocks read from disk; the main reasons are:
(1) the blocks do not exist in the database cache
(2) full table scans
(3) disk sorts
The relationship between them can be summarized as follows:
Logical reads are the blocks Oracle reads from memory; broadly, logical reads = 'consistent gets' + 'db block gets'. When a required block cannot be found in memory, it must be fetched from disk, which produces 'physical reads'.
Sorts (disk):
Number of sort operations that required at least one disk write. Sorts that require I/O to disk are quite resource intensive. Try increasing the size of the initialization parameter SORT_AREA_SIZE.
Bytes sent via SQL*Net to client:
Total number of bytes sent to the client from the foreground processes.
Bytes received via SQL*Net from client:
Total number of bytes received from the client over Oracle Net.
SQL*Net roundtrips to/from client:
Total number of Oracle Net messages sent to and received from the client.
For more information, see the Oracle online documentation:
Statistics Descriptions
http://download.oracle.com/docs/cd/E11882_01/server.112/e10820/stats002.htm#i375475
3.4 Dynamic sampling
If the execution plan contains the following note:
Note
-----
   - dynamic sampling used for the statement
it means the CBO used dynamic sampling for this statement, and the reader should bear that in mind when analyzing the plan. When this note appears, we can infer that the table has probably not been analyzed.
Two situations arise:
(1) If the table has never been analyzed, the CBO can obtain statistics through dynamic sampling and may still produce a correct plan.
(2) If the table has been analyzed but the statistics are stale, the CBO will not use dynamic sampling; it will use the stale statistics, which may lead to a wrong execution plan.
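A quick way to check whether (and how recently) a table was analyzed is the LAST_ANALYZED column in the dictionary; a sketch (the table name is a placeholder):

```sql
-- Sketch: a NULL last_analyzed means no statistics, which is exactly the
-- situation that triggers dynamic sampling.
SELECT table_name, last_analyzed, num_rows
  FROM user_tables
 WHERE table_name = 'DAVE';   -- placeholder table name
```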
Summary:
When reading an execution plan, look not only at the plan itself but also at the predicates and notes; judge SQL efficiency from the overall picture.
Chapter II, a related article:
First, what is an execution plan (explain plan)
Execution plan: a description of the execution process, or access path, of a query statement in Oracle.
Second, how to view the execution plan
1: In PL/SQL Developer, press F5 to view the execution plan; third-party tools such as Toad can also display it.
Many people think PL/SQL Developer's execution plan shows only basic information such as cardinality, optimizer and cost. In fact this is configurable in the tool, and much more information can be displayed, as shown below.
2: In SQL*Plus (or either the PL/SQL command window or SQL window), run:
SQL > EXPLAIN PLAN FOR
SELECT * FROM SCOTT.EMP;   -- the SQL statement to be explained
SQL > SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
3: In SQL*Plus, run the following commands (some are not valid in PL/SQL Developer):
SQL > SET TIMING ON                 -- display execution-time statistics
SQL > SET AUTOTRACE ON EXPLAIN      -- execution plan and query output, no statistics
SQL > <run the SQL statement whose plan you want to see>
SQL > SET AUTOTRACE OFF             -- no AUTOTRACE report; the default
SQL > SET AUTOTRACE ON              -- execution plan, statistics, and query output
SQL > <run the SQL statement whose plan you want to see>
SQL > SET AUTOTRACE OFF
SQL > SET AUTOTRACE TRACEONLY       -- execution plan and statistics, no query output
SQL > <run the SQL statement whose plan you want to see>
SQL > SET AUTOTRACE TRACEONLY STAT  -- statistics only
SQL > <run the SQL statement whose plan you want to see>
Syntax: SET AUTOT[RACE] {ON | OFF | TRACE[ONLY]} [EXP[LAIN]] [STAT[ISTICS]]
Reference document: SQLPlus User's Guide and Reference Release 11.1
Note: the PL/SQL Developer tool does not fully support SQL*Plus commands; for example, running SET AUTOTRACE ON in PL/SQL Developer produces an error:
SQL > SET AUTOTRACE ON
Cannot SET AUTOTRACE
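In tools where SET AUTOTRACE is unavailable, one alternative (10g and later) is DBMS_XPLAN.DISPLAY_CURSOR, which reads the plan actually used from the cursor cache rather than re-explaining the statement. A sketch:

```sql
-- Sketch (10g+): NULL sql_id and child number mean "the last statement
-- executed in this session".
SELECT * FROM SCOTT.EMP;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL'));
```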
4: SQL_TRACE can be enabled globally via a parameter, or for a specific session via a command.
4.1 To enable it globally, set SQL_TRACE = true in the parameter file (pfile/spfile). Enabling SQL_TRACE globally traces the activity of all processes, background as well as user, which usually causes serious performance problems; use it with great caution in production.
Tip: with SQL_TRACE enabled globally, we can trace all background-process activity; by watching the trace files change in real time, many abstract statements in the documentation become concrete, and the close coordination between the processes is clearly visible.
4.2 At the session level, tracing the current process reveals the database's recursive background activity for the current operation (especially useful when studying new database features), and helps uncover background errors when studying how a SQL statement executes.
SQL > ALTER SESSION SET SQL_TRACE=TRUE;
SQL > SELECT * FROM SCOTT.EMP;
SQL > ALTER SESSION SET SQL_TRACE=FALSE;
So how do you view the resulting information? Whether in SQL*Plus or PL/SQL Developer, the script above produces no visible output; the trace file can be located with the following query:
SELECT t.value || '/' || LOWER(RTRIM(i.instance, CHR(0))) || '_ora_' ||
       p.spid || '.trc' AS trace_file_name
  FROM (SELECT p.spid
          FROM v$mystat m, v$session s, v$process p
         WHERE m.statistic# = 1
           AND s.sid = m.sid
           AND p.addr = s.paddr) p,
       (SELECT t.instance
          FROM v$thread t, v$parameter v
         WHERE v.name = 'thread'
           AND (v.value = 0 OR t.thread# = TO_NUMBER(v.value))) i,
       (SELECT value FROM v$parameter WHERE name = 'user_dump_dest') t;
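On 11g and later there is a simpler route: v$diag_info exposes the current session's trace file name directly. A sketch:

```sql
-- Sketch (11g+): avoids the three-way join above.
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
```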
The TKPROF options are as follows:
Option                  Description
TRACEFILE               name of the trace input file
OUTPUTFILE              name of the formatted output file
SORT=option             sort order of the statements
PRINT=n                 print only the first n statements
EXPLAIN=user/password   run EXPLAIN PLAN as the specified user
INSERT=filename         generate INSERT statements
SYS=NO                  ignore recursive SQL statements run as user SYS
AGGREGATE=[Y|N]         with AGGREGATE=NO, TKPROF does not aggregate identical SQL text from multiple users
RECORD=filename         record the statements found in the trace file
TABLE=schema.tablename  put the execution plan in the specified table instead of the default PLAN_TABLE
Typing tkprof at the operating-system prompt lists all available options.
The sort options are:
Sort option              Description
prscnt execnt fchcnt     number of times parse/execute/fetch was called
prscpu execpu fchcpu     CPU time spent parsing/executing/fetching
prsela exeela fchela     elapsed time spent parsing/executing/fetching
prsdsk exedsk fchdsk     number of disk reads during parse/execute/fetch
prsqry exeqry fchqry     number of buffers for consistent reads during parse/execute/fetch
prscu  execu  fchcu      number of buffers for current reads during parse/execute/fetch
prsmis exemis            number of library cache misses during parse/execute
exerow fchrow            number of rows processed during execute/fetch
userid                   user ID of the user parsing the cursor
TKPROF statistics:
count: number of times the call was executed
cpu: CPU seconds used
elapsed: total elapsed time
disk: number of physical reads
query: logical reads in consistent mode
current: logical reads in current mode
rows: number of rows processed
TKPROF statistics, in more detail:
Statistic   Meaning
count       the number of times a statement was parsed or executed, and the number of fetch calls issued for it
cpu         processing time for each phase, in seconds; 0 for the parse phase if the statement was found in the shared pool
elapsed     elapsed time in seconds; usually not very useful, since other processes affect it
disk        physical data blocks read from the database files; may be low if the data is already buffered
query       logical buffers retrieved in consistent-read mode, typically for SELECT statements
current     logical buffers retrieved in current mode, typically for DML statements
rows        rows processed by the outer statement; shown at the fetch phase for SELECTs and at the execute phase for DML
The sum of query and current is the total number of logical buffers accessed.
Run the following command: tkprof D:\ORACLE\PRODUCT\10.2.0\DB_1\RDBMS\TRACE\wgods_ora_3940.trc h:\out.txt explain=etl/etl
After running it, you can view the generated text file:
TKPROF: Release 10.2.0.1.0 - Production on Wednesday May 23 16:56:41 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: d:\ORACLE\PRODUCT\10.2.0\DB_1\RDBMS\TRACE\wgods_ora_3940.trc
Sort options: default
*
Count = number of times OCI procedure was executed
Cpu = cpu time in seconds executing
Elapsed = elapsed time in seconds executing
Disk = number of physical reads of buffers from disk
Query = number of buffers gotten for consistent read
Current = number of buffers gotten in current mode (usually for update)
Rows = number of rows processed by the fetch or execute call
*
ALTER SESSION SET SQL_TRACE = TRUE
call     count    cpu  elapsed    disk   query  current    rows
------- ------ ------ -------- ------- ------- --------  ------
Parse        0   0.00     0.00       0       0        0       0
Execute      1   0.00     0.00       0       0        0       0
Fetch        0   0.00     0.00       0       0        0       0
------- ------ ------ -------- ------- ------- --------  ------
total        1   0.00     0.00       0       0        0       0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: 89 (ETL)
*
begin :id := sys.dbms_transaction.local_transaction_id; end;
call     count    cpu  elapsed    disk   query  current    rows
------- ------ ------ -------- ------- ------- --------  ------
Parse        2   0.00     0.00       0       0        0       0
Execute      2   0.00     0.00       0       0        0       2
Fetch        0   0.00     0.00       0       0        0       0
------- ------ ------ -------- ------- ------- --------  ------
total        4   0.00     0.00       0       0        0       2
Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: 89 (ETL)
*
SELECT *
FROM
SCOTT.EMP
call     count    cpu  elapsed    disk   query  current    rows
------- ------ ------ -------- ------- ------- --------  ------
Parse        2   0.00     0.00       0       0        0       0
Execute      1   0.00     0.00       0       0        0       0
Fetch        1   0.00     0.00       0       7        0      14
------- ------ ------ -------- ------- ------- --------  ------
total        4   0.00     0.00       0       7        0      14
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 89 (ETL)
Rows     Execution Plan
-------  ---------------------------------------------------
         SELECT STATEMENT   MODE: CHOOSE
          TABLE ACCESS   MODE: ANALYZED (FULL) OF 'EMP' (TABLE)
*
ALTER SESSION SET SQL_TRACE = FALSE
call     count    cpu  elapsed    disk   query  current    rows
------- ------ ------ -------- ------- ------- --------  ------
Parse        1   0.00     0.00       0       0        0       0
Execute      1   0.00     0.00       0       0        0       0
Fetch        0   0.00     0.00       0       0        0       0
------- ------ ------ -------- ------- ------- --------  ------
total        2   0.00     0.00       0       0        0       0
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 89 (ETL)
*
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count    cpu  elapsed    disk   query  current    rows
------- ------ ------ -------- ------- ------- --------  ------
Parse        5   0.00     0.00       0       0        0       0
Execute      5   0.00     0.00       0       0        0       2
Fetch        1   0.00     0.00       0       7        0      14
------- ------ ------ -------- ------- ------- --------  ------
total       11   0.00     0.00       0       7        0      16
Misses in library cache during parse: 2
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call     count    cpu  elapsed    disk   query  current    rows
------- ------ ------ -------- ------- ------- --------  ------
Parse        0   0.00     0.00       0       0        0       0
Execute      0   0.00     0.00       0       0        0       0
Fetch        0   0.00     0.00       0       0        0       0
------- ------ ------ -------- ------- ------- --------  ------
total        0   0.00     0.00       0       0        0       0
Misses in library cache during parse: 0
User SQL statements in session.
Internal SQL statements in session.
SQL statements in session.
Statement EXPLAINed in this session.
*
Trace file: d:\ORACLE\PRODUCT\10.2.0\DB_1\RDBMS\TRACE\wgods_ora_3940.trc
Trace file compatibility: 10.01.00
Sort options: default
Session in tracefile.
User SQL statements in trace file.
Internal SQL statements in trace file.
SQL statements in trace file.
Unique SQL statements in trace file.
SQL statements EXPLAINed using schema:
ETL.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
Lines in trace file.
Elapsed seconds in trace file.
4.3 Tracing other users' sessions. Often we need to trace a session other than our own; this is done with the system package Oracle provides, DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION.
For example:
SELECT SID, SERIAL#, USERNAME FROM V$SESSION WHERE USERNAME = 'ETL';
EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(61, 76, true);
EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(61, 76, false);
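On 10g and later, DBMS_MONITOR is the supported interface for tracing another session, and it can also capture wait and bind information. A sketch; the SID/serial# values are placeholders taken from V$SESSION:

```sql
-- Sketch (10g+): enable and disable tracing of another session.
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 61, serial_num => 76, waits => TRUE, binds => FALSE);
-- ... let the target session run its workload ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 61, serial_num => 76);
```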
5: Using the 10046 event
ALTER SESSION SET TRACEFILE_IDENTIFIER = '10046';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
SELECT * FROM SCOTT.EMP;
ALTER SESSION SET EVENTS '10046 trace name context off';
Then use the following script to find the location of the trace file:
SELECT t.value || '/' || LOWER(RTRIM(i.instance, CHR(0))) || '_ora_' ||
       p.spid || '.trc' AS trace_file_name
  FROM (SELECT p.spid
          FROM v$mystat m, v$session s, v$process p
         WHERE m.statistic# = 1
           AND s.sid = m.sid
           AND p.addr = s.paddr) p,
       (SELECT t.instance
          FROM v$thread t, v$parameter v
         WHERE v.name = 'thread'
           AND (v.value = 0 OR t.thread# = TO_NUMBER(v.value))) i,
       (SELECT value FROM v$parameter WHERE name = 'user_dump_dest') t;
The query returns a file name like wgods_ora_28279.trc, but no such file exists in that directory; because we set TRACEFILE_IDENTIFIER, the actual trace file is named wgods_ora_28279_10046.trc.
6: Using the 10053 event
It is similar to 10046, so we skip it here.
7: System views
The following system views expose various pieces of execution-plan information; explore them further if you are interested.
SELECT * FROM V$SQL_PLAN;
SELECT * FROM V$RSRC_PLAN_CPU_MTH;
SELECT * FROM V$SQL_PLAN_STATISTICS;
SELECT * FROM V$SQL_PLAN_STATISTICS_ALL;
SELECT * FROM V$SQLAREA_PLAN_HASH;
SELECT * FROM V$RSRC_PLAN_HISTORY;
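As one example of what these views offer, V$SQL_PLAN holds the plan of every cached cursor, keyed by sql_id. A sketch (the sql_id would first be looked up in V$SQL):

```sql
-- Sketch: list the plan steps of one cached statement.
SELECT id, operation, options, object_name, cost
  FROM v$sql_plan
 WHERE sql_id = '&sql_id'   -- placeholder; look it up in V$SQL
 ORDER BY id;
```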
Third, understanding the execution plan
1. Execution order
The rule for execution order is: top to bottom, right to left.
Top to bottom: an execution plan generally has multiple nodes; among sibling (same-level) nodes, the upper one executes first and the lower one later.
Right to left: a node may have several child nodes; execution starts with the rightmost child node.
You can also check the execution order through the corresponding feature of the PL/SQL Developer tool, as shown in the figure below.
2. Fields in the execution plan
Glossary:
recursive calls: recursive calls.
db block gets: the number of blocks read from the buffer cache in current mode. A current-mode block is the block exactly as it exists at this moment, not a version produced for consistent reads. Normally a query reads blocks as they existed at the point in time when the query started; the current block is the block as it exists now, not at some point before or after.
consistent gets: the number of blocks read from the buffer cache in consistent-read mode, including undo blocks from the rollback segments needed for read consistency. The idea is: how many blocks had to be processed in the consistent-read state for your operation. These arise mainly because, while your query runs, other sessions modify the blocks you need to read; since your query began before those changes, it must fetch the before-images of those blocks from the rollback segments to guarantee consistent data. This produces consistent reads.
physical reads: the number of blocks read from disk. The main reasons are:
1: the blocks are not present in the database cache
2: full table scans
3: disk sorts
redo size: the amount of redo generated by DML
sorts (memory): the number of sorts performed in memory
sorts (disk): the number of sorts performed on disk
bytes sent via SQL*Net to client: bytes of data sent from the server to the client (2091 in the example above)
bytes received via SQL*Net from client: bytes of data sent by the client to the server (416 in the example above)
Reference document: SQLPlus User's Guide and Reference Release 11.1
The relationship among db block gets, consistent gets and physical reads can be summarized as follows: logical reads are the blocks Oracle reads from memory; broadly, logical reads = consistent gets + db block gets. When a required block cannot be found in memory, it must be fetched from disk, producing a physical read.
3. Specific items
1 > Plan hash value
This line is the hash value of the statement's plan. The execution plan Oracle generates for each statement is stored in the shared pool; the first (hard) parse produces a hash value. On subsequent executions this hash value is compared, and if it matches, no hard parse is performed.
2 > COST
COST has no unit; it is a relative value that the CBO uses when parsing a statement to evaluate candidate plans and choose among them. It has no absolute meaning, but it is very useful for comparison.
The (approximate) formula is: COST = (single-block I/O cost + multiblock I/O cost + CPU cost) / sreadtim
3 > The columns of the execution plan shown above:
Id: a sequence number, not the order of execution. The execution order is determined by the indentation of Operation (rightmost-first, topmost-first: use indentation to see the hierarchy; at the same level, an operation without child steps executes first. In general, the most deeply indented operation executes first, and of two operations with equal indentation, the upper one executes first.)
Operation: the operation performed at this step.
Name: the object operated on.
Rows: called Cardinality before 10g; Oracle's estimate of the number of rows returned by the current operation.
Bytes: the number of bytes returned by this step.
Cost (%CPU): the cumulative estimated cost of execution up to this step.
Time: Oracle's estimate of the time the current operation takes.
4. The predicates:
Predicate Information (identified by operation id):
   2 - filter("B"."MGR" IS NOT NULL)
   4 - access("A"."EMPNO"="B"."MGR")
Access: the predicate condition affects the access path of the data (full table scan or index).
Filter: the predicate condition does not affect the access path; it only acts as a filter.
Within the predicates, pay most attention to access: consider the predicate's condition and whether the access path chosen for it is appropriate.
5. Dynamic sampling
If the execution plan contains the following note:
Note
-----
   - dynamic sampling used for the statement
it means the CBO used dynamic sampling for this statement, and the reader should bear that in mind when analyzing the plan. When this note appears, we can infer that the table has probably not been analyzed.
Two situations arise:
(1) If the table has never been analyzed, the CBO can obtain statistics through dynamic sampling and may still produce a correct plan.
(2) If the table has been analyzed but the statistics are stale, the CBO will not use dynamic sampling; it will use the stale statistics, which may lead to a wrong execution plan.
IV. Table access methods
1. Full Table Scan (FTS)
2. Index Lookup
There are 5 index lookup methods:
Index unique scan: a lookup of a single value through a unique index, which typically returns a single ROWID. If a UNIQUE or PRIMARY KEY constraint exists (guaranteeing the statement accesses only a single row), Oracle often performs a unique scan.
Method for looking up a single key value via a unique index. Always returns a single value. You must supply AT LEAST the leading column of the index to access data via the index.
Index range scan: a method for accessing a range of values of a particular column. AT LEAST the leading column of the index must be supplied to access data via the index. It can be used for range predicates (e.g. >, <, >=, <=, BETWEEN).
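To see an INDEX RANGE SCAN for yourself, a non-unique index plus a range predicate is enough. A sketch (the index name is illustrative; assumes the SCOTT demo schema):

```sql
-- Sketch: a range predicate on an indexed column typically produces
-- an INDEX RANGE SCAN step in the plan.
CREATE INDEX emp_sal_idx ON scott.emp (sal);
EXPLAIN PLAN FOR SELECT * FROM scott.emp WHERE sal >= 2000;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```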