Finding Trace Files
Trace files are produced by database engine server processes running on the database server, which means they're written to a disk accessible from the database server. In version 10.2, depending on the type of the process producing them, trace files are written to two distinct directories:
Dedicated server processes create trace files in the directory configured through the user_dump_dest initialization parameter.
Background processes create trace files in the directory configured through the background_dump_dest initialization parameter.
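In 10.2 you can check both locations with a quick query against v$parameter (a minimal sketch; the returned paths depend on your installation):
SQL> SELECT name, value FROM v$parameter
     WHERE name IN ('user_dump_dest', 'background_dump_dest');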
Note: as of 11g, user_dump_dest and background_dump_dest are no longer used; use diagnostic_dest instead.
As of version 11.1, with the introduction of the Automatic Diagnostic Repository, the user_dump_dest and background_dump_dest initialization parameters are deprecated in favor of the diagnostic_dest initialization parameter. Because the new initialization parameter sets the base directory only, you can use the v$diag_info view to get the exact location of the trace files. The following queries show the difference between the value of the initialization parameter and the location of the trace files:
SQL> SELECT value FROM v$parameter WHERE name = 'diagnostic_dest';

VALUE
--------------------------------------------------
/u00/app/oracle

SQL> SELECT value FROM v$diag_info WHERE name = 'Diag Trace';

VALUE
--------------------------------------------------
/u00/app/oracle/diag/rdbms/dbm11203/DBM11203/trace
Note that the v$diag_info view provides information for the current session only.
As of version 11.1, to find the trace file of the current session it's much easier to query either the v$diag_info or v$process view, as shown in the following examples:
SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';

SQL> SELECT p.tracefile
     FROM v$process p, v$session s
     WHERE p.addr = s.paddr AND s.sid = sys_context('userenv', 'sid');
TKPROF
Syntax
$ tkprof
Usage: tkprof tracefile outputfile [explain= ] [table= ]
              [print= ] [insert= ] [sys= ] [sort= ]
  table=schema.tablename   Use 'schema.tablename' with 'explain=' option.
  explain=user/password    Connect to ORACLE and issue EXPLAIN PLAN.
  print=integer            List only the first 'integer' SQL statements.
  aggregate=yes|no
  insert=filename          List SQL statements and data inside INSERT statements.
  sys=no                   TKPROF does not list SQL statements run as user SYS.
  record=filename          Record non-recursive statements found in the trace file.
  waits=yes|no             Record summary for any wait events found in the trace file.
  sort=option              Set of zero or more of the following sort options:
    prscnt  number of times parse was called
    prscpu  cpu time parsing
    prsela  elapsed time parsing
    prsdsk  number of disk reads during parse
    prsqry  number of buffers for consistent read during parse
    prscu   number of buffers for current read during parse
    prsmis  number of misses in library cache during parse
    execnt  number of times execute was called
    execpu  cpu time spent executing
    exeela  elapsed time executing
    exedsk  number of disk reads during execute
    exeqry  number of buffers for consistent read during execute
    execu   number of buffers for current read during execute
    exerow  number of rows processed during execute
    exemis  number of library cache misses during execute
    fchcnt  number of times fetch was called
    fchcpu  cpu time spent fetching
    fchela  elapsed time fetching
    fchdsk  number of disk reads during fetch
    fchqry  number of buffers for consistent read during fetch
    fchcu   number of buffers for current read during fetch
    fchrow  number of rows fetched
    userid  userid of user that parsed the cursor
Using these options appropriately reduces the complexity of the formatted file and makes it easier to read.
Parameter description
tracefile: the trace file you want to analyze.
outputfile: the formatted output file.
explain=user/password@connectstring
table=schema.tablename
PRINT: lists only the first N SQL statements in the output file. The default is to list all SQL statements.
AGGREGATE: if set to no, identical SQL statements are not combined; if set to yes, identical SQL statements in the trace file are merged into a single entry.
INSERT: creates a SQL script that stores the trace file statistics in the database. After TKPROF creates the script, you run it to load the results into the database.
SYS: if set to no, SQL statements run as user SYS (including recursive SQL triggered by the user's own statements) are not listed, which reduces the complexity of the analyzed file and makes it easier to read.
TABLE: the schema and name of the table TKPROF uses to temporarily hold execution plans before writing them to the output file.
EXPLAIN: determines the execution plan for each SQL statement and writes it to the output file. Without explain, the trace file shows only the actual execution path of the SQL (Row Source Operation); no EXPLAIN PLAN output is produced.
SORT: sorts the SQL statements in the trace file as needed. One of the more useful sort options is fchela, which sorts the analyzed results by elapsed time fetching (remember to set the initialization parameter timed_statistics=true); the resulting file then lists the most time-consuming SQL first.
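Putting it together, here is a minimal end-to-end sketch; the table t, bind variable :a, trace file name, and output file name match the sample used later in this article, and the exact trace file name on your system will differ:

SQL> ALTER SESSION SET timed_statistics = TRUE;
SQL> ALTER SESSION SET sql_trace = TRUE;
SQL> VARIABLE a NUMBER
SQL> EXEC :a := 99
SQL> SELECT COUNT(*) FROM t WHERE id = :a;
SQL> ALTER SESSION SET sql_trace = FALSE;

$ tkprof orcl_ora_4299.trc report1.txt sys=no sort=fchela

The sys=no and sort=fchela options make the output easier to read: recursive SYS statements are omitted and the most time-consuming statements (by elapsed fetch time) are listed first.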
Interpretation of TKPROF Output
First, the meaning of the columns in the output file:
CALL: the processing of each SQL statement is divided into three phases:
Parse: this step converts the SQL statement into an execution plan; it includes checking for proper authorization and for the existence of tables, columns, and other referenced objects.
Execute: this step is where Oracle actually executes the statement. For INSERT, UPDATE, and DELETE operations this step modifies the data; for SELECT operations it simply identifies the selected rows.
Fetch: returns the rows retrieved by a query; only SELECT statements perform fetches.
COUNT: the number of times this statement was parsed, executed, or fetched.
CPU: the total CPU time, in seconds, consumed by all parse, execute, and fetch calls for this statement.
ELAPSED: the total elapsed time spent in all parse, execute, and fetch calls for this statement.
DISK: the number of blocks physically read from the data files on disk.
QUERY: the number of buffers obtained in consistent-read mode by all parse, execute, and fetch calls. Consistent-mode buffers provide a read-consistent snapshot of the data, for example to a long-running query.
CURRENT: the number of buffers obtained in current mode. Buffers are typically obtained in current mode for INSERT, UPDATE, and DELETE operations, because these need the block as it currently exists rather than a snapshot.
ROWS: the number of rows processed by the SQL statement, excluding rows processed by subqueries. For SELECT statements the rows are counted at the fetch step; for INSERT, UPDATE, and DELETE operations at the execute step.
Performance Analysis of the trace File
1. If the parse count is (nearly) equal to the execute count, the statement is parsed almost every time it is executed. Even when these are only soft parses, the repeated parsing reduces the scalability of the system.
2. If the trace file shows that all or almost all SQL statements were executed only once each, bind variables are probably not being used correctly.
3. If the ratio of fetch count to rows fetched is close to 1 and the number of rows is greater than 1, the application is not doing array (bulk) fetches. Every language/API can fetch several rows per call; if you do not take advantage of this, the application spends far more time on round trips between client and server. Besides creating extra network traffic, these round trips are much slower than fetching many rows in a single call. How to make the application fetch in batches depends on the language/API (see the SQL*Plus sketch after this list).
4. If there is a big difference between CPU time and elapsed time, a lot of time was probably spent waiting for something. For example, if a statement uses 1 second of CPU but takes 10 seconds in total, 90% of the elapsed time went to waits, such as locks held by another session or physical I/O for a heavy query.
5. Statements with long CPU or elapsed times are usually the most resource-consuming SQL and deserve our attention.
6. The proportion of physical I/O to logical I/O can be judged from disk/(query+current). If physical I/O is too high, the buffer cache may be too small, although this also depends on the characteristics of the SQL itself.
7. (query+current)/rows gives the average number of blocks needed per returned row. If it is too large (more than about 20), the SQL statement is inefficient and the data is too scattered; consider reorganizing the objects. (A worked example using the sample output below follows this list.)
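On array fetching (item 3): in SQL*Plus, for instance, the arraysize setting controls how many rows are fetched per round trip. This is only a sketch of the idea; other languages/APIs expose the same capability differently:

SQL> SET ARRAYSIZE 100    -- fetch up to 100 rows per call instead of the default 15
SQL> SELECT * FROM t;

For items 6 and 7, a worked calculation using the totals of the sample statement shown later in this section (disk = 99, query = 106, current = 0, rows = 1):

disk / (query + current) = 99 / (106 + 0) ≈ 0.93    -- roughly 93% of logical reads required a physical read
(query + current) / rows = (106 + 0) / 1 = 106      -- far above the threshold of about 20 blocks per row

Both values suggest the statement deserves a closer look, although for an aggregate such as COUNT(*) that returns a single row the second ratio is naturally inflated.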
Sample TKPROF Output
TKPROF: Release 10.2.0.1.0 - Production on Sun Apr 22 11:00:58 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: orcl_ora_4299.trc
Sort options: fchela
*
Count = number of times OCI procedure was executed
Cpu = cpu time in seconds executing
Elapsed = elapsed time in seconds executing
Disk = number of physical reads of buffers from disk
Query = number of buffers gotten for consistent read
Current = number of buffers gotten in current mode (usually for update)
Rows = number of rows processed by the fetch or execute call
*
The following statements encountered an error during parse:
BEGIN :a = 99; END
Error encountered: ORA-06550
Alter session set sql_trace=off
Error encountered: ORA-00922
*
Error connecting to database using: system/manager
ORA-01017: invalid username/password; logon denied
EXPLAIN PLAN option disabled.
*
select count(*)
from
 t where id = :a
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.03       0.04         99        106          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.04       0.05         99        106          0           1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 55
Rows Row Source Operation
1 SORT AGGREGATE (cr=106 pr=99 pw=0 time=46182 us)
50422 INDEX FAST FULL SCAN T_IDX (cr=106 pr=99 pw=0 time=4489223 us) (object id 52998)
*
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       10      0.00       0.01          0          0          0           0
Execute     12      0.01       0.03          0          0          0           3
Fetch       10      0.07       0.08         99        316          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       32      0.09       0.13         99        316          0           8
Misses in library cache during parse: 8
Misses in library cache during execute: 8
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       47      0.03       0.05          0          0          0           0
Execute    210      0.05       0.07          0          0          0           0
Fetch      328      0.06       0.04          0        852          0         976
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      585      0.15       0.16          0        852          0         976
Misses in library cache during parse: 22
Misses in library cache during execute: 22
12 user SQL statements in session.
210 internal SQL statements in session.
222 SQL statements in session.
0 statements EXPLAINed in this session.
*
Trace file: orcl_ora_4299.trc
Trace file compatibility: 10.01.00
Sort options: fchela
1 session in tracefile.
12 user SQL statements in trace file.
210 internal SQL statements in trace file.
222 SQL statements in trace file.
30 unique SQL statements in trace file.
2131 lines in trace file.
130 elapsed seconds in trace file.
Let's now go through what this output means.
The compilation of a SQL statement consists of two phases: the parse phase and the execute phase. When the time comes to parse a SQL statement, Oracle first checks to see if the parsed representation of the statement already exists in the library cache. If not, Oracle will allocate a shared SQL area within the library cache and then parse the SQL statement.
At execution time, Oracle checks to see if a parsed representation of the SQL statement still exists in the library cache. If not, Oracle reparses and executes the statement; this is where an execute-time miss comes from, and there can be various reasons why the parsed representation was flushed out of the library cache.
Misses in library cache during parse: the number of hard parses that occurred during the parse calls. If only soft parses occurred, this value is 0.
Misses in library cache during execute: the number of hard parses that occurred during the execute calls. If no hard parse occurred at execute time, the line Misses in library cache during execute is not shown.
The execution plan output has two parts. The first, called Row Source Operation, is the execution plan written to the trace file when the cursor is closed while tracing is enabled. This means that if the application reuses cursors without closing them, no new execution plan is written to the trace file for the reused cursors. The second part, called Execution Plan, is generated by TKPROF only when the explain parameter is specified. Since it is generated later, it does not necessarily match the first part; if the two are inconsistent, the former is the correct one. Both execution plans report, through the Rows column, the number of rows returned (not processed) by each operation in the execution plan. For each row source operation, the following runtime statistics may also be provided:
cr is the number of blocks logically read in consistent mode.
pr is the number of blocks physically read from disk.
pw is the number of blocks physically written to disk.
time is the total elapsed time in microseconds. Note that these values are not always exact; sampling may be used to reduce the overhead of collecting them.
cost is the estimated cost of the operation. This value is available as of Oracle 11g.
size is the estimated amount of data (in bytes) returned by the operation. This value is available as of Oracle 11g.
card is the estimated number of rows returned by the operation. This value is available as of Oracle 11g.
Summary information about the trace file is given at the end of the output file: first the trace file name, the trace file compatibility version, and the sort options used for this analysis; then the number of sessions and the number of SQL statements.
Optimizer mode: ALL_ROWS indicates that the statement was optimized with the ALL_ROWS optimizer mode.
Parsing user id: 55 means that the statement was parsed by the user with id 55.
Tip: we can use grep total report1.txt to quickly locate the totals that deserve attention, as in the example above. If we decide we are interested in the SQL that took 0.05 seconds, we can then open report1.txt and search for 0.05 to jump straight to that statement.
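A minimal shell sketch of this workflow, assuming the TKPROF output file is named report1.txt as above:

$ grep -n total report1.txt     # one line of totals per statement, with line numbers
$ grep -n 0.05 report1.txt      # then locate the statement whose totals contain 0.05

The -n option prints line numbers, so you can jump directly to the corresponding statement in the output file.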
Examples of TKPROF Statement
This section provides two brief examples of TKPROF usage. For a complete example of TKPROF output, see "Sample TKPROF Output" above.
TKPROF Example 1
If you are processing a large trace file using a combination of SORT parameters and the PRINT parameter, then you can produce a TKPROF output file containing only the highest resource-intensive statements. For example, the following statement prints the 10 statements in the trace file that have generated the most physical I/O:
TKPROF ora53269.trc ora53269.prf SORT = (PRSDSK, EXEDSK, FCHDSK) PRINT = 10
TKPROF Example 2
This example runs TKPROF, accepts a trace file named examp12_jane_fg_sqlplus_007.trc, and writes a formatted output file named outputa.prf:
TKPROF examp12_jane_fg_sqlplus_007.trc OUTPUTA.PRF
EXPLAIN=scott/tiger TABLE=scott.temp_plan_table_a INSERT=STOREA.SQL SYS=NO
SORT=(EXECPU,FCHCPU)
This example is likely to be longer than a single line on the screen, and you might need to use continuation characters, depending on the operating system.
Note the other parameters in this example:
The EXPLAIN value causes TKPROF to connect as the user scott and use the EXPLAIN PLAN statement to generate the execution plan for each traced SQL statement. You can use this to get access paths and row source counts.
Note:If the cursor for a SQL statement is not closed, then TKPROF output does not automatically include the actual execution plan of the SQL statement. In this situation, you can use the EXPLAIN option with TKPROF to generate an execution plan.
The TABLE value causes TKPROF to use the table temp_plan_table_a in the schema scott as a temporary plan table.
The INSERT value causes TKPROF to generate a SQL script named STOREA.SQL that stores statistics for all traced SQL statements in the database.
The SYS parameter with the value of NO causes TKPROF to omit recursive SQL statements from the output file. In this way, you can ignore internal Oracle Database statements such as temporary table operations.
The SORT value causes TKPROF to sort the SQL statements in order of the sum of the CPU time spent executing and the CPU time spent fetching rows before writing them to the output file. For greatest efficiency, always use SORT parameters.