[SQL] Oracle SQL monitor
Chapter 1: An unsung weapon for SQL optimization -- Oracle SQL Monitor
DBAplus Community | 2015-11-26 07:00
Reprint statement: this article is an original article by the DBA+ community, which must be reproduced together with the full text of this Subscription account QR code, with the author's name and source: DBA+ community (dbaplus).
It is said that the Oracle Enterprise Edition database ships with a free tool that is a sharp weapon for SQL optimization: Oracle SQL Monitor. Below, Zhou Jun, an original expert of the DBA+ community, introduces this unsung artifact.
Zhou Jun
DBA+ community original expert
He has more than 14 years of Oracle database technical support experience. During seven years at IBM he served as team leader of the non-IBM-logo product technical support team in East China and as the technical leader for Oracle software support services at IBM China. He currently works for Oracle, focusing on the design and implementation of Oracle data integration solutions. He holds Oracle 8i~11g OCP, Oracle 10g OCM, DB2 administrator, PMP and other certifications.
I. Preface
To be honest, I never used to like fancy graphical tools for SQL optimization. But after recently attending Oracle RWP (Real-World Performance) training, I found that the SQL Monitor introduced in Oracle 11g is genuinely easy to use: an unsung SQL optimization tool. Best of all, Oracle SQL Monitor is free to use in the Oracle Enterprise Edition database. Here I would like to share how SQL Monitor can simplify our SQL optimization work.
II. How to open a SQL Monitor report
Method one
Step1: open the Oracle EM console home page, switch to the performance page, and click SQL Monitoring in the lower right corner.
Step2: select a time range, and you can sort SQL statements by duration or database time.
In Oracle 11g, a SQL statement is captured by SQL Monitor when any of the following conditions is met, and the monitoring data is recorded in the v$sql_monitor view:
When a SQL statement is executed in parallel, it is monitored in real time immediately.
When a SQL statement runs serially, it is monitored once it consumes more than 5 seconds of CPU or I/O time.
When a SQL statement carries the /*+ MONITOR */ hint.
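As an illustration, the hint-based trigger and the monitoring view can be exercised with a sketch like the following (the employees table is hypothetical; a session with the Tuning pack enabled is assumed):

```sql
-- Force real-time monitoring of even a quick statement via the MONITOR hint
SELECT /*+ MONITOR */ COUNT(*) FROM employees;

-- Confirm the statement was captured by SQL Monitor
SELECT sql_id, status, elapsed_time, cpu_time
FROM   v$sql_monitor
WHERE  sql_text LIKE 'SELECT /*+ MONITOR */%';
```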
Step3: select the SQL statement you want to optimize by clicking the check mark in the status bar of the first column, and Oracle will lay out the execution details of that SQL statement in front of you.
Method two
In the Top Sessions area in the lower-left corner of the performance page, click the ID of the SQL statement you want to view.
Click the icon in the status bar in the first column on the SQL monitoring page.
Method three
If Oracle EM is not configured but you know the SQL_ID of the statement to be optimized, you can still use SQL Monitor to see the actual execution plan of the statement in the database through the following script.
Enable log tracing in SecureCRT and choose a log file to save to (with an .html suffix), then execute in SQL*Plus:

set trimspool on
set arraysize 512
set trim on
set pagesize 0
set linesize 1000
set long 1000000
set longchunksize 1000000
spool sqlmon.html
select /*+ noparallel */ dbms_sqltune.report_sql_monitor(sql_id => 'input your SQL ID', report_level => 'ALL', type => 'ACTIVE') from dual;
spool off

Then view the spooled file, e.g.:
cat sqlmon.html
Turn off the log trace in SecureCRT and open the saved file to see the SQL execution plan.
III. How to optimize SQL using SQL Monitor
After opening the SQL execution plan with SQL Monitor, we usually locate the most resource-intensive steps in the plan from the CPU and wait activity shown on the far right.
We then check whether there is a large gap between the number of rows Oracle estimated and the number of rows actually returned. If the two are close, at least the current statistics on the corresponding database objects are accurate.
In this example the estimated and actual row counts are close, and the Oracle optimizer chose a plan with a Bloom filter and a hash join. The next step is usually to examine the filter conditions of the SQL to determine whether the right indexes and other optimization measures are in place, which I will not expand on here.
The following is a case I recently encountered that uses SQL monitor for rapid SQL optimization.
SQL Text: the SQL statement is rather long, so I truncated it to the representative part.
With SQL monitor, we can quickly locate the steps that need to be focused on in the execution plan.
We see that the SQL statement had been running for 5.4 hours, and the number of rows estimated by Oracle differed greatly from the actual number of rows, indicating that the statistics on the relevant tables were inaccurate.
We gathered statistics on the DMS_CONTAINERS and DMS_CONTAINER_JN tables. Immediately after the statistics were re-collected, Oracle used the correct index on the IYC_CNTRID field of the DMS_CONTAINER_JN table, but the less selective index on the TYPE field was still used on the DMS_CONTAINERS table.
Looking further at the SQL statement, we find that it accesses the DMS_CONTAINERS table through the view YMS_GUI_LOAD_CONTAINERS_VW, which is defined as follows:
Because the view's WHERE condition contains the IYC_TYPE field, Oracle gives priority to the index on IYC_TYPE when accessing the DMS_CONTAINERS table. Under the CBO, Oracle will not automatically combine separate single-column indexes on other fields for access (unless the AND_EQUAL hint is set manually), so the index on the more selective YC_LSTUPDDT field is not chosen. If you need Oracle to use the other field as well, the easiest way is to create a composite index on the IYC_TYPE and YC_LSTUPDDT fields; when analyzing index columns, Oracle automatically considers the combination of the two columns and can choose the composite index.
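A minimal sketch of that fix, using the column names from the case above (the exact table owner and index name are assumptions):

```sql
-- Composite index so the optimizer can filter on both columns in one index
CREATE INDEX idx_cntr_type_lstupddt
    ON dms_containers (iyc_type, yc_lstupddt);

-- Re-gather statistics so the CBO can cost the new index accurately
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'DMS_CONTAINERS', cascade => TRUE);
```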
IV. Summary
Through the previous introduction, I believe you are deeply impressed by the gorgeous and intuitive interface of Oracle SQL monitor. Let me summarize the steps of SQL optimization using Oracle SQL monitor:
Through SQL Monitor we can quickly find abnormally running SQL statements; if you know the SQL ID of a statement, you can also use a script to view its real execution plan in the database.
Look at the SQL execution plan and quickly find the key steps in the SQL execution plan through the activity proportion of CPU and WAIT.
By comparing the number of rows estimated by Oracle with the actual number of rows returned, we can quickly determine whether we need to re-collect statistics and help us analyze whether there is a problem with the SQL execution plan chosen by the Oracle optimizer.
For specific SQL optimization methods, please refer to the Oracle SQL optimization topic that Master Ding Jun shared in the DBA+ community (follow DBA+ community Wechat official account: dbaplus, reply "001" to see this article), I will not expand further here.
To do a good job, one must first sharpen one's tools. Friends, what are you waiting for? Try Oracle SQL Monitor, an unsung SQL optimization tool.
SQL Monitor, a feature worth mastering.
Original address: SQL Monitor, a feature worth mastering by jeanron100
Chapter 2: Master these features of SQL Monitoring, and SQL optimization becomes easy.
DBAplus Community | 2015-12-29 07:01
Contents
Terminology
Overview
Which SQL will be monitored by SQL MONITORING?
Finding the Real Time SQL Monitoring entry point
Real Time SQL Monitoring in detail
1 Terminology
Before formally introducing Real Time SQL Monitoring, let's go over some of the terms used below.
Table Queue: a message buffer used in parallel operations for communication between PX processes, or between PX processes and the QC process. It consists of pages in memory; the size of each message buffer is controlled by the parameter parallel_execution_message_size, which defaults to 16K in 11gR2. The default differs across major versions; see the official Oracle documentation for details.
Wall time: duration, i.e. physical clock time.
Left of HASH JOIN: the build side of the hash join, usually the smaller table.
Right of HASH JOIN: the probe side of the hash join, usually the larger table.
M stands for millions.
Row source: a single operation (row) in an execution plan, for example:
In the execution plan above, each row in the first column (Id 0-5) is a row source.
2 Overview
Every Oracle release has some new features that impress us, and SQL MONITORING is such a feature for me. Although not yet widely known, it has been available since 11gR1 and has been continuously enhanced in subsequent releases (11gR2 and 12cR1), showing how much importance Oracle attaches to it. It concentrates all the key performance statistics of a query on a single page, and the feature is enabled automatically, especially for parallel queries. It has been presented many times in Oracle user groups abroad, but there is still little introduction to it in our country. This article introduces the core features of Oracle Real Time SQL Monitoring, so that DBAs have a new and more advanced means of diagnosing SQL performance and can improve their optimization efficiency.
SQL optimization is a must-have skill for DBAs, but even a veteran DBA with rich SQL optimization experience can only frown at an execution plan of dozens or even hundreds of lines. How can he quickly know:
Which row source consumes the most resources in such a large execution plan? If the execution plan of a SQL statement contains 5 row sources and row source 1 consumes 3% of the DB TIME, then even eliminating all of that 3% only improves the SQL's performance by 3%; the overall DB TIME improvement is not noticeable.
Which type of resource is consumed the most during the whole SQL execution, IO or CPU? This gives us an overall picture of the SQL's performance. Looking at the indicators you might say: "Oh, this is an IO-heavy SQL; to greatly improve its performance, consider improving the IO capability of the database system."
For an executing SQL statement, how do you know where it is currently executing, or even how long this step will take to complete?
How do you know what wait events occurred during the execution of this SQL statement, and which of those waits was the most severe?
Before 11g it was not easy to answer these questions: you had to join various V$ views, and the results were far from clear at a glance. From 11g on, all this information can be found in SQL MONITORING, and SQL MONITORING provides more than the functions mentioned above. Through it you can easily obtain a statement's bind variables, and monitor the whole process of an index creation, including the remaining workload. This article focuses on the core functions of SQL MONITORING; readers are invited to dig out the rest themselves.
3 Which SQL will be monitored by SQL MONITORING?
For most OLTP systems, SQL is relatively simple and runs very fast; the response time of most SQL should be below 10 ms, and the optimization complexity is relatively low. The SQL MONITORING feature was not created to help DBAs discover and diagnose OLTP SQL performance problems, but to speed up the optimization of data-warehouse-style SQL, which leans toward OLAP systems and is characterized by low concurrency, long running times, and high SQL complexity. Any SQL that meets the following conditions will be monitored by SQL MONITORING:
If a serially executed SQL statement consumes more than 5 seconds of CPU time or IO time, it will be monitored. This behavior can be controlled by the hidden parameter _sqlmon_threshold, which defaults to 5 seconds; setting it to 0 turns the SQL MONITORING feature off. Note that the trigger is the CPU or IO time consumed by the SQL, not its elapsed execution time. The reason for limiting on CPU/IO time is to prevent a flood of monitored statements when the database hits heavy lock/latch contention at some point: many SQL statements would then satisfy a 5-second elapsed-time condition, while SQL monitoring is itself relatively resource-consuming, since runtime performance statistics must be copied into the SGA and each monitored SQL has a separate memory structure. In 11g this could lead to serious latch contention and CPU spikes; 12c has optimized this so the problem no longer exists. If you find that your SQL runs well beyond 5 seconds but is not monitored by SQL MONITORING, check carefully whether the SQL itself actually consumed more than 5 seconds of CPU or IO (it may have been held up by locks or the network).
All SQL executed in parallel is monitored, without waiting for 5 seconds of CPU or IO. This is easy to understand: parallel queries are generally reporting or heavy-duty SQL, so the SQL MONITORING feature is turned on for them automatically.
SQL carrying the /*+ MONITOR */ hint enables the SQL MONITORING feature immediately.
In addition to the above conditions, you also need to check whether some system parameters are set correctly:
statistics_level needs to be TYPICAL (the default) or ALL.
control_management_pack_access needs to be DIAGNOSTIC+TUNING (the default).
SQL MONITORING does not monitor very large execution plans: by default, plans of more than 300 lines are not monitored. This can be controlled by the hidden parameter _sqlmon_max_planlines; see MOS note 1613163.1 for details.
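These prerequisites can be checked quickly from any session, for example (a query sketch using only standard v$ views):

```sql
-- Verify the parameters SQL MONITORING depends on
SELECT name, value
FROM   v$parameter
WHERE  name IN ('statistics_level', 'control_management_pack_access');
```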
4 Finding the Real Time SQL Monitoring entry point
You can find and use the features of SQL Monitoring from these locations:
Enterprise Manager
Enterprise Manager Cloud Control (EMCC)
SQL Developer
DBMS_SQLTUNE package
Here we mainly introduce how to use Real Time SQL Monitoring through the Enterprise Manager Cloud Control (EMCC) pages; a separate section at the end of the article briefly describes how to obtain Real Time SQL Monitoring output through the DBMS_SQLTUNE package.
First, log in to Oracle EMCC, go to the "Database" page, and select the database you want to monitor. Here, take clouddb1 as an example:
After entering the clouddb1 database, select the SQL monitoring feature under "performance".
Then you reach the main SQL MONITORING screen, which lists the recently monitored SQL. Check the status column to see the current execution status of each SQL.
The status column generally contains four states: running, done, queued, and error; hover the mouse over the symbol in the status column to see which state it represents. All of these are easy to understand except the queued state, which only appears in 11gR2 and later. Oracle 11gR2 made major changes to parallel execution: automatic degree-of-parallelism, parallel statement queuing, and in-memory parallel execution. Enabling these three new features requires setting the parameter parallel_degree_policy to AUTO (the default is MANUAL); once set to AUTO, all three are turned on. Here we mainly discuss parallel statement queuing. With this feature enabled, you specify the total number of available parallel slave processes via parallel_servers_target. When a statement starts, if the number of available parallel processes is less than the number the statement requires, the statement is placed in a queue until enough parallel processes are free. Once queued, the statement's current status on the SQL MONITORING page shows as [queued].
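The queuing behavior described above can be enabled with settings along these lines (the target value is illustrative, not a recommendation):

```sql
-- Turn on auto DOP, parallel statement queuing and in-memory parallel execution
ALTER SYSTEM SET parallel_degree_policy = AUTO;

-- Cap the pool of parallel slaves; statements needing more are queued
ALTER SYSTEM SET parallel_servers_target = 64;
```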
The SQL list on the SQL monitoring page is not keyed to a specific SQL text but to a specific execution of a specific SQL statement, so if two sessions run the same statement, you will see two separate entries in the list. This differs from v$sql: if two sessions run the same SQL, v$sql holds only one record summarizing the execution statistics. This characteristic of SQL MONITOR lets you check what is wrong with the statement for a specific user (the one complaining about performance), instead of first wading through statement-level metrics aggregated over all users, as in v$sql, and then trying to isolate the user you care about.
5 Real Time SQL Monitoring in detail
After finding the entry point of Real Time SQL Monitoring, click a SQL_ID to open the Real Time SQL Monitoring display page. Isn't that cool?
Perhaps many indicators on the page still look strange to you; don't worry, I will introduce the core parts next.
5.1 General Information
The general information section shows some basic details of the SQL execution, such as the SQL text, the degree of parallelism used, the start time of the execution, the last refresh time of the statistics, the user who executed the SQL, the fetch operations, and so on. Note that [SQL text] is followed by … (red box); click on it and you will get the full text of the SQL statement. Starting with version 11.2.0.2 you will also see the values of the bind variables.
For example:
In the code above, I execute a query with bind variables at a degree of parallelism of 6, and then click the … after [SQL text] on the EMCC SQL MONITORING page. A new window appears.
Click show SQL Binds:
Now you can see the list of bound variables, isn't it very convenient?
Of course, as mentioned above, SQL MONITORING is mainly used for data-warehouse-style SQL. Whether bind variables should be used for long-running queries is a topic of its own. In data warehouses the general best practice is not to use bind variables for long-running queries, because relative to the total running time of the SQL, parsing takes a negligible amount of time, and the execution frequency in a data warehouse system is very low anyway. So we can afford to hard-parse every query: even though it costs some (negligible) response time, it helps ensure that each combination of parameter values gets its own optimal execution plan, for between bind variables and literal constants, literals are the most likely to produce an optimal plan.
Back to the point: as demonstrated above, on Oracle 11.2.0.2 or later it is easy to get the bind variable values of a monitored query. In the past you might have had to query v$sql_bind_capture for historical bind values, or even use commands such as oradebug errorstack to get the bind values of a currently running SQL. Now you can finally abandon those methods.
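For comparison, the old approach looked roughly like this (the sql_id is a placeholder to fill in):

```sql
-- Historical bind values captured for a given statement
SELECT child_number, name, position, value_string
FROM   v$sql_bind_capture
WHERE  sql_id = '&sql_id'
ORDER  BY child_number, position;
```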
The [general information] module also shows the number of fetch-call operations. Simply put, this represents the number of network round trips between the database and the application. For aggregates such as COUNT and SUM, this value is usually 1, since only one round trip is needed; but for queries like SELECT *, the larger the returned data volume, the larger the value.
For example, from the picture above we know that returning 1000 records took 11 fetch calls, about 90 records per round trip.
We will continue to talk about the fetching operation in later chapters, so let's stop here.
5.2 Time and wait statistics
This section is very small, but the amount of information is large: it shows the duration, the database time, and the breakdown of wait activity. Move the mouse over the database-time bar and you will see the time spent on each wait event and on CPU, which tells you where your SQL's time went and whether it was dominated by IO or by CPU operations, which is great. The wait-activity bar shows the detailed wait-event distribution; for example, the blue part in the figure above shows that 45% of the waits over the whole statement execution occurred on the wait event direct path read temp. Note that the wait-activity bar refers only to wait-event time in the database, not to the whole database time; that is, it does not include CPU time. This information tells the DBA which wait events were encountered during SQL execution and in what proportions; naturally, you should focus on the most prominent ones.
You may be confused about the meanings of duration and database time; no matter, we will now look at what these two times mean and how they differ:
5.3 Duration
Duration (wall time) is the time users care about most. It shows how long a statement has been active: the time span from the start of the statement to its end, or for a running statement, from the start to the present. Of course, the end user may wait even longer, because besides the database response time, time may also be spent in the application or on the network between the database and the application server. For example, if the duration of a SQL is 5 seconds and the application itself takes 1 second to process the data, the client perceives 6 seconds, not the 5 seconds seen by the database.
To emphasize again: duration measures the span from the start of SQL execution, i.e. from cursor open to cursor close or cancel. This means that if the database finishes a query within 1 minute, but the millions of resulting rows can only be returned a few at a time, then from the application's point of view the query takes a long time (millions of network transfers) to complete, even though the database side spent only a little time processing. For heavy network transfer, SQL MONITORING has an indicator that reflects it: the fetch calls already mentioned above. A large transfer volume means a large value.
For example, I use the MONITOR hint to force Oracle to monitor a SQL that returns a large result set to the client, and use EMCC to view its monitoring information:
The value of the fetch operation is very large because the database has a lot of network interaction with the application (in this case, the SQLPLUS client).
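The relationship between result-set size, fetch size and fetch calls can be seen directly in SQL*Plus (a sketch; big_table is a hypothetical table name):

```sql
-- A larger arraysize means fewer fetch calls (network round trips) per result set;
-- compare the fetch-call count in SQL Monitor before and after changing it
set arraysize 512
SELECT /*+ MONITOR */ * FROM big_table;
```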
5.4 Database time
Database time, or DB Time, shows the total time a query spends executing in the database. For DML operations, database time is basically equal to duration, because DML returns no result set and involves no network interaction time. But if you run a SELECT statement that returns many rows, the duration is generally greater than the database time, because a lot of network time is counted in the duration: the database sends the data to the application in batches, so for a SELECT you may see a database time of only 30 seconds against a duration of 50 seconds, with 30 seconds spent serving your request and the database idle for the rest, waiting for the application to fetch the next batch of data. Look again at a figure used above: for this query the duration is much longer than the database time because a large result set is returned to the client.
For serial queries, the duration is greater than or equal to the database time, but for parallel queries things change: when a parallel query runs, multiple server processes (perhaps even server processes on multiple servers) participate, and each can use CPU independently, so 1 minute of wall time can accumulate up to N minutes of database time (N being the number of active processes, bounded by the CPU logical cores). For parallel queries the final database time may therefore be much longer than the duration (wall time).
The duration of this statement is only 19 seconds, while the database time is 1.8 minutes. From the [General Information] section you can see the statement ran with a degree of parallelism of 6, meaning multiple processes in the database served the query at the same time; the database time of all those sessions plus the time of the coordinator process is summed into the database time. Database time roughly represents how much work had to be completed in the database, but because the statement ran in parallel, you did not have to wait that long.
5.5 IO Statistics
This section shows some key IO statistics during statement execution.
Buffer gets shows the total logical IO in the database layer. IO requests contains the total number of IO requests, such as db file sequential read, db file scattered read, direct path read, and so on, and IO bytes represents the total number of bytes read/written. More detailed statistics appear in the differently colored parts of the bar chart; for instance, of the 10 GB of IO bytes shown above, 42% came from read requests. Note that the IO request and IO byte counts here may exceed your expectations: for example, a full table scan of a 1 GB table followed by a sort on some field incurs not only the IO of the table itself but also the IO of reading and writing temporary segments during the sort.
5.6 Details
The details page itself contains several sub-tabs; for example, the figure above contains the [Plan Statistics], [Plan], [Parallel], [Activity], and [Metrics] sub-tabs. The details page mainly contains row-source-level statistics for the execution plan.
First look at the leftmost column of the page: if a row-source operation runs in parallel, it is marked by a multi-person icon, whereas a serial operation is marked by a single-person icon. If you look carefully, you will find that the multi-person icons sometimes differ in color (below); this reflects the different roles of producers and consumers in parallel execution. In the figure, the red processes are producers responsible for scanning the table, while the blue processes are consumers responsible for sorting the received data.
You will also see a small right arrow next to the multi-person icon, indicating that the query is currently in progress and that execution has reached this row source (the one with the arrow). In other words, through SQL MONITORING we can know exactly which row source the execution plan is currently on. Nice! When the SQL finishes, or when a row source finishes, these small arrows disappear.
Then look at [Estimated Rows] and [Actual Rows]. A big difference between them may mean your database statistics are missing or stale, or may stem from the optimizer's own algorithms. I will not go into CBO details here; readers can consult the relevant articles themselves. The [Actual Rows] column truly reflects the amount of work done by the scanning processes.
Although Oracle CBO technology has been around for a long time and there are many related books and articles, in the author's opinion the best book for understanding the CBO is still Jonathan Lewis's Cost-Based Oracle Fundamentals, though the Chinese edition seems to be out of print.
The [Timeline] column is important: through it you can know the active time period of each line of the SQL execution plan. The Timeline column in the figure above builds a visual time-progress bar from the activity of each plan row; it is based on ASH sampling, which from Oracle 11g onward records information per row source of the execution plan. The number in parentheses in the Timeline column header shows the total running time of the SQL, here 21 seconds. With that, simply observe the position and length of the bar on each row source for a visual analysis. Next, the [Temp] column represents the temporary tablespace consumed while the SQL runs. If the consumption is very large, check whether the parallelism is insufficient or the PGA settings are too small; though for operations such as index creation, large-table HASH JOINs, and sorts, use of temporary segments cannot be avoided entirely.
The [IO Requests] column shows the number of IO requests generated by each row source; with the visual bar it is easy to see which row source generates the most.
The [Activity %] column represents the waits generated on this row source, CPU included; its data comes from ASH's base view, v$active_session_history, i.e. the data is sampled (once per second). For example:
The SQL_PLAN_LINE_ID, SQL_PLAN_OPERATION, and SQL_PLAN_OPTIONS columns were added to the v$active_session_history view in 11g. They pinpoint the row source where a wait occurred, which is very useful for a DBA's performance diagnosis. During the execution of the SQL above, ASH sampled 6 rows in total: five with the wait event direct path read and one empty, i.e. on CPU. SQL_PLAN_LINE_ID gives the row source that produced the wait, here 6. Looking at the [Line ID] column in the figure below, the row source with ID 6 performs a full table scan, and its [Activity %] bar shows five direct path read waits, exactly matching what we queried from the ASH view.
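The ASH cross-check described above can be reproduced with a query along these lines (the sql_id is a placeholder):

```sql
-- Per-row-source wait breakdown for one statement, from ASH samples
SELECT sql_plan_line_id,
       NVL(event, 'ON CPU') AS event,
       COUNT(*)             AS samples
FROM   v$active_session_history
WHERE  sql_id = '&sql_id'
GROUP  BY sql_plan_line_id, NVL(event, 'ON CPU')
ORDER  BY sql_plan_line_id;
```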
The [Activity %] column is therefore very important: through it we know where the DB TIME went. In the figure above, most of the DB TIME was spent in the full-table-scan phase. All row sources except the full table scan show empty, but this does not mean the other row sources consumed no database time; rather, this data comes from ASH sampling, and row sources that finish quickly are hard for ASH to catch. Through [Activity %] we can locate the row source that consumes the most resources and then optimize it specifically, achieving twice the result with half the effort.
Now look at the [Parallel] tab of [Details], which aggregates the work done by each parallel process. Because Oracle parallelism uses a producer-consumer model, the processes are grouped first by instance and then by producer and consumer. In my test there is only one active RAC node instance; the other node is shut down.
Switching to the [Activity] tab of [Details] shows how many processes were active across the execution time span. Because we set a parallel degree of 6, the page shows six processes active most of the time in the later stages of the SQL execution.
The [Metrics] tab of [Details] shows CPU, memory, and I/O usage during the query.
That covers the core content of SQL MONITORING. The page contains a great deal of information and is highly interactive, so give your curiosity free rein: hover the mouse anywhere you can and explore the treasures inside.
5.7 Monitoring the index creation process through SQL MONITORING
The reason for devoting a section to monitoring index creation through SQL MONITORING is a customer requirement: the customer wanted to know how long rebuilding an index would take, or, failing an exact time, at least roughly how much of the work had been completed since the rebuild started. If you are on 11g, SQL MONITORING makes this requirement easy to meet.
Generally speaking, index creation has two phases: first a full table scan, and second the sort that builds the index.
Here is an index creation example. To make sure SQL MONITORING captures the statement, I run it in parallel; alternatively, you could create a table large enough that the index build runs long enough to trigger SQL MONITOR on its own.
ALTER INDEX tt REBUILD ONLINE PARALLEL 6;
After the index build starts, observe the [Details] section of the SQL MONITORING page. The multi-person icon in the leftmost column shows that this statement is executing in parallel; the red set of processes are the producers, responsible for scanning the HASH_T1 table. From the parent operation of the full table scan row source (ID=8), PX BLOCK ITERATOR, we know the PX slaves divide the work by ROWID ranges, each slave scanning a portion of the data blocks (even though this is a partitioned table, the work is not divided by partition). The scanned data is written to table queue TQ10000, from which the consumer PX slaves read; the two sets of parallel processes thus exchange data through table queue TQ10000. The consumer PX slaves sort the data they receive and build the index.
As the figure above shows, the right-pointing arrows indicate that the sort/index-build row sources (3 and 4) are active: the full table scan phase has finished, and sorting and index creation are under way. From the [Actual Rows] column we can see that 531K rows have been sorted so far, while the total to be sorted is 47M (this figure can be read from the [Actual Rows] values of row sources ID=5-8). The remaining task is simply to refresh the page and watch the actual sorted row count climb toward 47M. The [Progress %] column shows the percentage completed, which is very useful for estimating workload; unfortunately, [Progress %] is not displayed for many operations, and it often appears only dynamically while the SQL is running. For index creation, however, this hardly matters: you can estimate the remaining work from the total row count and the number of rows already sorted.
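The refresh-and-check workflow described above can also be done from SQL*Plus by querying GV$SQL_PLAN_MONITOR, which holds the same per-row-source counters the page displays. A sketch, with `&sql_id` as a placeholder for the statement being watched:

```sql
-- Per-plan-line progress of a running statement: compare the rows
-- produced so far against the optimizer's cardinality estimate.
SELECT plan_line_id,
       plan_operation,
       plan_options,
       output_rows,          -- "actual rows" produced so far
       plan_cardinality      -- optimizer's estimated rows
FROM   gv$sql_plan_monitor
WHERE  sql_id = '&sql_id'
  AND  status = 'EXECUTING'
ORDER  BY plan_line_id;
```

Re-running this query periodically shows OUTPUT_ROWS for the sort row source climbing toward the total, just like refreshing the page.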
After a refresh, 28M rows have been sorted.
Refreshing again, we find the index build has completed: there is no longer a right-pointing arrow at the far left, and the actual row count of row source 4 has reached 47M, so all the data has been sorted. Looking at the [Activity %] column, most of the time during the whole index build was spent on row source ID=4, that is, the sort and index-build operation. The [Timeline] column likewise shows that it has the longest active time span.
5.8 An example
The sections above describe the core features of SQL MONITORING. Bear in mind that it can only help us find performance problems; it cannot solve them for us. Solving them still requires a range of SQL tuning skills, such as knowledge of the CBO, the operating system, and so on.
Let's use a small example to deepen our understanding of SQL MONITORING:
Physical information for the table:
HASH_T1, size 4416 MB
Query statement
The SQL ran for 2 minutes. Because parallelism was used, the figure above shows a DB time of 12.6 minutes, much longer than the 2-minute run time.
From the [Activity %] column we can see that the HASH JOIN itself accounts for the largest share of database time, reaching 51% (per ASH sampling). Looking more closely at the execution plan, PX SEND BROADCAST is used on the left (build) side of the HASH JOIN, but the data volume to broadcast is very large: 40M rows. Because the parallel degree is 6, each PX process receives all 40M rows after the broadcast, for a total of 240M rows. The right side of the HASH JOIN also has 40M rows, the same size as the left side. The common data distribution methods are broadcast and hash; 12c adds replicate as well. When the left side of a HASH JOIN is small, broadcast distribution is generally used; when both sides are large, hash distribution is usually better. Here, because each PX consumer process must hold the entire left-side data set, temporary space consumption is large, reaching 4GB.
Let's try hash distribution and see how it performs: the SQL run time drops from 2 minutes to 1.3 minutes, and DB time drops from 12.6 minutes to 8.3 minutes.
Checking the execution plan, the data distribution method has changed to PX SEND HASH. With hash distribution, the [Actual Rows] figures now match the number of records in the table, temporary tablespace usage has dropped from 4GB to 966MB, and the [Activity %] share of the HASH JOIN operation itself has fallen from 51% to 22%.
5.9 Related views
All the beautiful charts in Grid Control are built from v$ and DBA_ views. You may not need to visit these v$ views often for day-to-day monitoring and tuning, but it is still worth knowing where the information comes from: it makes customized monitoring and advanced diagnosis of problems much easier. For example, the company where I work, Walker Technologies, is building an automated monitoring system; we are considering adding SQL MONITORING functionality to our monitoring product, and understanding the underlying views will be very helpful when we develop that feature. Here are the key views to know:
The GV$SQL_MONITOR view contains monitoring data for executing statements. When multiple sessions run the same statement, this view holds multiple corresponding records, so make sure your filters select the exact SQL execution you care about. For example, check that SID and INST_ID match the session you are looking for (PX_QCSID and PX_QCINST_ID for parallel execution), and if you are diagnosing a currently running query, check that the STATUS column shows EXECUTING.
The GV$SQL_PLAN_MONITOR view contains performance metrics for each line of the execution plan, monitored and updated in real time.
Starting with Oracle 11gR1, the GV$ACTIVE_SESSION_HISTORY view contains the SQL_PLAN_LINE_ID, SQL_PLAN_OPERATION, and SQL_PLAN_OPTIONS columns. By querying these columns together with SQL_ID, you can find the most expensive row source within a SQL execution plan, not just the most expensive statement overall.
5.10 Controlling SQL Monitoring
For SQL executed in parallel, SQL monitoring is enabled immediately, regardless of how long the statement has been running. For serially executed SQL, monitoring is not enabled immediately, because SQL monitoring is not meant for normally fast OLTP queries that execute many times per second. However, if a serial query consumes more than 5 seconds of combined CPU and I/O wait time, it is treated as a long-running query and SQL monitoring is enabled for it. This happens automatically; the query does not need to be rerun.
You can use the MONITOR and NO_MONITOR hints to control whether a statement is monitored. If you do not want a long-running SQL to be monitored, use the NO_MONITOR hint. Note that there is a similarly named hint, NO_MONITORING, which has nothing to do with SQL MONITORING: the NO_MONITORING hint disables monitoring of predicate usage on a table's columns, data that would otherwise be recorded in sys.col_usage$.
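For example, using the HASH_T1 table from earlier in this chapter, the two hints are written as ordinary optimizer hints:

```sql
-- Force monitoring of a statement that would otherwise be too quick
-- (serial, under the 5-second threshold) to be monitored:
SELECT /*+ MONITOR */ COUNT(*) FROM hash_t1;

-- Suppress monitoring of a statement that would otherwise qualify:
SELECT /*+ NO_MONITOR */ COUNT(*) FROM hash_t1;
```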
5.11 Using SQL to get the output of SQL MONITORING
After all of the introduction above, getting the output of SQL MONITORING through a PL/SQL package is very easy. I will not spend more time explaining the output here; most of the information has already been covered above.
This is done mainly through dbms_sqltune.report_sql_monitor. The SQL below queries SQL_ID 4vnz8232nugv9 and prints the most recent SQL MONITORING information in text format:
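The statement itself is not shown in this copy of the article; a call of the same shape, using the documented dbms_sqltune.report_sql_monitor parameters, would look like:

```sql
-- Print the most recent monitored execution of SQL_ID 4vnz8232nugv9
-- as a text report (SQL*Plus settings keep the CLOB readable).
SET LONG 1000000 LONGCHUNKSIZE 1000000 LINESIZE 250 PAGESIZE 0
SELECT DBMS_SQLTUNE.report_sql_monitor(
         sql_id => '4vnz8232nugv9',
         type   => 'TEXT') AS report
FROM   dual;
```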
Besides the TEXT type used in my example, dbms_sqltune.report_sql_monitor also supports the HTML, ACTIVE (11gR2), and XML types. If you want the execution statistics of a historical execution of a SQL rather than the most recent one, specify the SQL_EXEC_ID parameter. For details, see:
https://oracle-base.com/articles/11g/real-time-sql-monitoring-11gr1#active_html_reports_offline
About Me
● This article was compiled from the author's study notes, with some material gathered from the network; apologies for any infringement or inaccuracy.
● The article is published synchronously on itpub (http://blog.itpub.net/26736162), cnblogs (http://www.cnblogs.com/lhrbest), and the author's personal WeChat official account (xiaomaimiaolhr).
● Completed between 09:00 on 2017-07-01 and 22:00 on 2017-07-31.
● All rights reserved. You are welcome to share this article; please retain the source when reprinting.