This article mainly explains "the installation and use of Statspack". The content is simple, clear and easy to understand; please follow the editor's train of thought step by step and study "the installation and use of Statspack" together!
Brief introduction
Oracle Statspack is a powerful tool for diagnosing database performance. With Statspack, we can easily identify Oracle bottlenecks and record the performance status of the database over time.
Installation steps
I. Statspack principle
Run Oracle's own scripts to create a series of statistics tables.
Take snapshots to sample the system state.
Generate a report from a pair of snapshots.
II. Installation preparation
1. Check some parameter values
-- job_queue_processes: range 0 to 1000; it controls how many job processes can be created. To schedule automatic tasks that collect data, this parameter must be greater than zero.
Alter system set job_queue_processes=60;
-- timed_statistics: set to true so that timing information is collected into dynamic performance views such as V$sesstat and V$sysstat; this does consume some resources. You can set it to true before using Statspack and dynamically set it back to false after sampling.
Alter system set timed_statistics=true;
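Before changing anything, you can check the current values of both parameters. A minimal sketch using the standard v$parameter view (not part of the original steps):
Select name, value from v$parameter where name in ('job_queue_processes', 'timed_statistics');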
2. Scripts
To use this feature, you need to run Oracle's own scripts, which create a series of tables and views in the database for collecting various kinds of information. The scripts are located in %oracle_home%\rdbms\admin (on the 224 database the directory is /oracle/app/product/10.2.0/Db_1/rdbms/admin). Under Oracle 8.1.6 they are a set of files starting with stat; in later versions they are a set of files starting with sp.
III. Installation
1. In the shell, switch to the oracle user: su - oracle
2. Log in to sqlplus as sysdba.
In 9i and later you can log in with: sqlplus / as sysdba
(it is best to change to the script directory %oracle_home%\rdbms\admin first, to make running the scripts easier)
3. Create a tablespace to hold sampled data
Create tablespace perfstat datafile '/data/cams/perfstat.dbf' size 500m;
Statspack report data takes up quite a lot of space, especially when several consecutive samples are kept, so the tablespace should not be too small: at least 100m, otherwise object creation will fail.
Query the data files: select file_name from dba_data_files;
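If you want to keep an eye on how much room the sampling data is taking, here is a hedged sketch using the standard dictionary views (the tablespace name matches the one created above):
-- space used by PERFSTAT segments and space still free in the tablespace
Select round(sum(bytes)/1024/1024) used_mb from dba_segments where tablespace_name = 'PERFSTAT';
Select round(sum(bytes)/1024/1024) free_mb from dba_free_space where tablespace_name = 'PERFSTAT';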
4. Run the script to install statspack
Execute the command in sqlplus:
@$ORACLE_HOME/rdbms/admin/spcreate.sql
-- the script creates a user named perfstat; you need to supply its password.
Enter the value of perfstat_password: perfstat
-- you need to enter the tablespace used by the user perfstat: specify the newly created tablespace.
Enter the value of default_tablespace: perfstat
-- you need to specify the temporary tablespace used by the user perfstat.
Enter the value of temporary_tablespace: tmp10
-- if the installation is successful, you can see the following information:
Creating Package STATSPACK...
Package created.
No errors.
Creating Package Body STATSPACK...
Package body created.
No errors.
NOTE:
SPCPKG complete. Please check spcpkg.lis for any errors.
-- View error messages
Under NT:
Host find "ORA-" * .LIS
Host find "err-" * .LIS
Under Unix:
Grep ORA- * .lis
Grep err * .lis
If there is an error, you can run the drop script to remove what was created: @%oracle_home%\rdbms\admin\spdrop.sql
(note: spdrop.sql must also be run as sysdba to delete the related objects)
Then rerun the script installation.
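If you have to repeat the installation, the three prompts above can also be answered up front. Statspack's documentation (spdoc.txt) describes a batch mode in which the substitution variables are defined before running spcreate.sql; a sketch using the same values as above:
-- pre-define the values spcreate.sql prompts for (SQL*Plus batch mode)
define perfstat_password = perfstat
define default_tablespace = perfstat
define temporary_tablespace = tmp10
@$ORACLE_HOME/rdbms/admin/spcreate.sql
undefine perfstat_password default_tablespace temporary_tablespace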
This installation process creates a series of tables for storing sampled data.
View the new table:
Select dt.table_name from dba_tables dt where dt.owner = 'PERFSTAT';
Conn perfstat/perfstat
Select table_name from user_tables;
After the installation is complete, the system will automatically switch to the perfstat user.
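A quick way to confirm that the installation created everything it should is to count the PERFSTAT objects by type. A sketch against the standard dba_objects view, run as a DBA user (not part of the original steps):
Select object_type, count(*) from dba_objects where owner = 'PERFSTAT' group by object_type;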
IV. Test Statspack
Running statspack.snap produces a system snapshot; running it twice produces two snapshots.
SQL > execute statspack.snap
PL/SQL procedure successfully completed.
SQL > execute statspack.snap
PL/SQL procedure successfully completed.
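Snapshots can also be tagged so they are easier to tell apart later; statspack.snap accepts an optional i_ucomment parameter, as in this sketch (the comment text is only an example):
SQL > execute statspack.snap (i_ucomment => 'before batch load')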
Then execute the script @%oracle_home%\rdbms\admin\spreport.sql to generate a report between two points in time.
-- you need to enter the start snapshot ID and the end snapshot ID, as well as the report file name.
(snapshot ID must exist)
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1
Begin Snapshot Id specified: 1
Enter value for end_snap: 13
End   Snapshot Id specified: 13
Enter value for report_name: e:\s1report.txt
-- View the snapshots that have been generated
Select t.snap_id, t.snap_time from stats$snapshot t order by t.snap_id;
V. Schedule automatic tasks
After installation, we can set up scheduled tasks, regularly sample, and collect data.
Use the script spauto.sql to define automatic tasks.
@$ORACLE_HOME/rdbms/admin/spauto.sql
The key content of spauto.sql is:
begin
  select instance_number into :instno from v$instance;
  dbms_job.submit(:jobno, 'statspack.snap;', trunc(sysdate+1/24,'HH'), 'trunc(SYSDATE+1/24,''HH'')', TRUE, :instno);
  commit;
end;
/
Executing spauto.sql establishes a data collection job that runs every hour.
This job collects data (executes statspack.snap) at an interval of one hour:
A day has 24 hours (1440 minutes), so the interval expressions work out as:
1/24  HH -- once an hour
1/48  MI -- once every half hour
1/144 MI -- once every 10 minutes
1/288 MI -- once every 5 minutes
As for the sampling interval, one hour is usually recommended; a shorter interval, such as half an hour, can be set if there is a special need, but anything shorter is not recommended. Executing Statspack itself consumes resources, so sampling too frequently will have a noticeable impact on system performance (and may even make the execution of statspack itself show up in the sampled data).
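If you later decide on a different interval, the job created by spauto.sql can be altered in place instead of being recreated. A sketch, assuming the job number shown in user_jobs is 23 (the number is hypothetical):
-- find the Statspack collection job
Select job, interval, what from user_jobs where what like 'statspack.snap%';
-- switch the hypothetical job 23 to run every half hour (1/48 of a day)
Execute dbms_job.interval(23, 'trunc(SYSDATE + 1/48, ''MI'')');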
VI. Generate a report
The procedure is the same as in the test above:
Connect as the perfstat user.
Execute the script %oracle_home%\rdbms\admin\spreport.sql, enter the begin snapshot ID, the end snapshot ID and the report file name, and the Statspack report is generated.
Note: a Statspack report cannot span an instance shutdown. However, the samples collected within a continuous interval before or after a restart are still valid, and you can select snapshots from either side to generate a report.
If the instance was shut down between the two snapshots, you will receive an error like the following:
ERROR at line 1:
ORA-20200: The instance was shutdown between snapshots 1 and 31
ORA-06512: at line 48
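To choose a begin/end pair that does not span a restart, you can first look at the instance startup time recorded with each snapshot. A sketch against the stats$snapshot table (snap_id, snap_time and startup_time are its standard columns):
Select snap_id, snap_time, startup_time from stats$snapshot order by snap_id;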
VII. Follow-up processing
1. Remove scheduled tasks
Once you have finished a round of sampling and reporting, you should remove the job promptly; leaving an unattended job running is dangerous.
-- View and remove tasks
Select job, next_date, next_sec, what from user_jobs;
Execute dbms_job.remove(<job number>);  -- pass the job number returned by the query above
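If you only want to pause collection for a while rather than delete the job, the job can also be marked broken and re-enabled later. A sketch, again using 23 as a hypothetical job number:
Execute dbms_job.broken(23, TRUE);
-- re-enable it later and schedule the next run one hour from now
Execute dbms_job.broken(23, FALSE, SYSDATE + 1/24);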
2. Delete historical data
1) Delete sampled data
Deleting the corresponding rows from the stats$snapshot table is enough; the data in the other tables is deleted by cascade.
Select max(snap_id) from stats$snapshot;
Delete from stats$snapshot where snap_id <= <max snap_id>;  (use whatever cutoff matches the history you want to purge)
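Oracle also ships helper scripts for this cleanup: sppurge.sql deletes a range of snapshots (it prompts for the low and high snap ids) and sptrunc.sql truncates all collected statistics while keeping the Statspack objects. A sketch of running them from the same admin directory, connected as perfstat:
@$ORACLE_HOME/rdbms/admin/sppurge.sql
@$ORACLE_HOME/rdbms/admin/sptrunc.sql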
VIII. Adjusting the snapshot level and thresholds
1. Snapshot level
Statspack supports several collection levels (0, 5, 6, 7 and 10); the default is 5, and higher levels collect progressively more detail. To change the default level permanently:
SQL > execute statspack.snap (i_snap_level => 0, i_modify_parameter => 'true')
With this statement, all future collections will be performed at level 0.
If you only want to change the collection level for the current snapshot, omit the i_modify_parameter parameter, as follows:
SQL > execute statspack.snap (i_snap_level => 10)
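The default level currently in effect can be checked in the stats$statspack_parameter table (the same table that holds the thresholds described next); a minimal sketch:
Select snap_level from stats$statspack_parameter;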
2. Snapshot thresholds -- these apply only to the SQL statements captured in the stats$sql_summary table.
Because each snapshot collects the SQL statements that exceed these thresholds, stats$sql_summary can quickly become the largest table in the Statspack schema.
-- View current thresholds
Select executions_th,disk_reads_th,parse_calls_th,buffer_gets_th from stats$statspack_parameter
The thresholds are:
1) executions_th -- number of times the SQL statement was executed (default = 100)
2) disk_reads_th -- number of disk reads performed by the SQL statement (default = 1000)
3) parse_calls_th -- number of parse calls made for the SQL statement (default = 1000)
4) buffer_gets_th -- number of buffer gets performed by the SQL statement (default = 10000)
A SQL statement that exceeds any one of these thresholds is recorded.
We can change the default thresholds by calling the statspack.modify_statspack_parameter procedure:
SQL > execute statspack.modify_statspack_parameter (i_buffer_gets_th => 1000000, i_disk_reads_th => 100000)
IX. Statspack report analysis
The Statspack report is divided into the following sections
1. Database overall information
Includes the instance, version, RAC, CPU, physical memory, Oracle memory settings, etc.
2. Resource consumption per second and per transaction
3. The hit rate of each component of the instance
4. Shared pool overview (Shared Pool Statistics)
5. Top 5 wait events with the longest wait time (Top 5 Timed Events)
Contains the top 5 wait events, CPU usage between the two samples, memory allocation and other information. The wait events differ somewhat between Oracle versions, and their number grows with each release. Explanations of the various wait events were covered in the earlier "Learning dynamic performance tables" series of articles, which interested readers can search for.
6. All wait events in the DB (Wait Events) -- events with total wait time >= 0.001 s.
7. Background wait events (Background Wait Events) -- events with total wait time >= 0.001 s.
8. Wait event histogram (Wait Event Histogram) -- shows the distribution of response times for each wait event.
9. SQL sorted by CPU cost (SQL ordered by CPU)
10. SQL sorted by elapsed time (SQL ordered by Elapsed)
11. SQL sorted by buffer gets (SQL ordered by Gets)
12. SQL sorted by physical reads (SQL ordered by Reads)
13. SQL sorted by number of executions (SQL ordered by Executions)
14. SQL sorted by number of parse calls (SQL ordered by Parse Calls)
15. Statistics of activities recorded by the instance (Instance Activity Stats)
16. IO statistics for tablespaces (Tablespace IO Stats)
17. IO Statistics of data Files (File IO Stats)
18. Histogram statistics of data file reads (File Read Histogram Stats)
19. Buffer Pool Statistics (Buffer Pool Statistics)-Statistics including instance recovery, recommendations for setting buffer pool size, and so on.
20. PGA Statistics (PGA Aggr Target Stats)-including PGA cache hit rates, bar chart statistics, PGA setting recommendations, etc.
21. Memory footprint of processes (Process Memory Summary Stats)-including processes that take up more memory, etc.
22. Undo segment summary
23. Undo segment statistics
24. Current status of latch
25. Latch sleep waiting statistics
26. Latch misses
27. Data dictionary cache performance statistics (Dictionary Cache Stats)
28. Library cache activity (Library Cache Activity)
29. Rule sets (Rule Sets)
30. Shared pool sizing recommendations (Shared Pool Advisory)
31. SGA summary (SGA Memory Summary)
32. SGA statistics (SQL Memory Statistics)
33. System parameters (init.ora Parameters)
The report is very long and indeed very comprehensive; how to analyze it is something that still needs further study.
X. Practical application
After preliminary analysis, it was decided to take snapshots automatically at five points in time between 8:00 and 18:00, one every two hours, to store the snapshot information every day, and to generate a report from the 10:00 and 14:00 snapshots.
Five jobs are created in sqlplus; each one automatically takes a snapshot at a different time of day. The script for the 8:00 collection is:
begin
  select instance_number into :instno from v$instance;
  dbms_job.submit(:jobno, 'statspack.snap;', trunc(sysdate) + (8*60)/(24*60), 'trunc(SYSDATE+1,''HH'')', TRUE, :instno);
  commit;
end;
/
The jobs for the other points in time are similar; only the next_date argument of dbms_job.submit() (the start time) changes.
Execute the five jobs in sqlplus.
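After submitting them, it is worth confirming that all five jobs are scheduled for the expected times; a sketch against user_jobs:
Select job, to_char(next_date, 'YYYY-MM-DD HH24:MI') next_run, what from user_jobs order by next_date;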
Because the performance report is to be generated automatically, and spreport.sql in the admin directory prompts for the snapshot numbers and the report name when it runs, that file was rewritten. After the rewrite it automatically generates a report for that day's 10:00 and 16:00 snapshots, named sp_<date>_<begin snap id>_<end snap id>.txt and stored under the /oracle directory. The SQL file that generates the report has to be executed on a schedule, and Oracle's dbms_job.submit() cannot call a SQL file, so crontab is used in the shell to run it.
As the oracle user, run crontab -e and add the entry: 0 23 * * * sqlplus sys/sys as sysdba @/oracle/app/product/10.2.0/Db_1/rdbms/admin/spreport.sql
This runs the task at 23:00 every day.
Thank you for reading. That concludes "the installation and use of Statspack". After studying this article, you should have a deeper understanding of installing and using Statspack; the details still need to be verified in practice.