2025-01-19 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report --
This article mainly shows you how to view and collect Oracle statistics. The content is easy to understand and clearly organized; I hope it helps resolve your doubts. Let the editor lead you through "how to view and collect Oracle statistics".
View statistics for a table
SQL > alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
Session altered.
SQL > select t.TABLE_NAME, t.NUM_ROWS, t.BLOCKS, t.LAST_ANALYZED from user_tables t where table_name in ('T1','T2');
TABLE_NAME NUM_ROWS BLOCKS LAST_ANALYZED
T1 2000 30 2017-07-16 14:02:23
T2 2000 30 2017-07-16 14:02:23
View statistics for indexes on a table
SQL > select table_name, index_name, t.blevel, t.num_rows, t.leaf_blocks, t.last_analyzed from user_indexes t where table_name in ('T1','T2');
TABLE_NAME INDEX_NAME BLEVEL NUM_ROWS LEAF_BLOCKS LAST_ANALYZED
T1 IDX_T1_OBJ_ID 1 2000 5 2017-07-16 12:06:33
T2 IDX_T2_OBJ_ID 1 2000 5 2017-07-16 14:02:23
T2 IDX_T2_OBJ_TYPE 1 2000 5 2017-07-16 14:02:23
T2 IDX_T2_OBJ_NAME 1 2000 8 2017-07-16 14:02:23
T2 IDX_T2_DATA_OBJ_ID 1 1198 3 2017-07-16 14:02:23
T2 IDX_T2_STATUS 1 2000 5 2017-07-16 14:02:23
T2 IDX_T2_CREATED 1 2000 6 2017-07-16 14:02:23
T2 IDX_T2_LAST_DDL_TIME 1 2000 6 2017-07-16 14:02:23
8 rows selected.
Oracle collects statistics on the tables and indexes in the database at fixed times. By default, the maintenance window opens Monday through Friday at 22:00 and runs for 4 hours, and on Saturday and Sunday at 6 a.m. for 20 hours.
Oracle also tracks the record changes on each table. When the records changed in a table during a day do not exceed a specified threshold (by default 10% of the rows, the STALE_PERCENT preference), Oracle will not re-collect statistics on that table.
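These tracked change counts and the resulting staleness flag can be inspected directly. A minimal sketch, assuming the monitored table is T1 and that in-memory DML monitoring data has been flushed to the dictionary first:

```sql
-- Flush in-memory DML monitoring data to the data dictionary
exec dbms_stats.flush_database_monitoring_info;

-- How many rows have changed since statistics were last gathered
select table_name, inserts, updates, deletes, truncated
from   user_tab_modifications
where  table_name = 'T1';

-- Whether Oracle now considers the statistics stale
-- (changes exceed STALE_PERCENT, 10% by default)
select table_name, stale_stats
from   user_tab_statistics
where  table_name = 'T1';
```

If STALE_STATS shows YES, the next maintenance window (or a manual gather) will refresh the statistics for that table.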
Modify the automatic collection time of statistics
SQL > set linesize 200
SQL > col REPEAT_INTERVAL for A60
SQL > col DURATION for A30
SQL > select t1.window_name, t1.repeat_interval, t1.duration from dba_scheduler_windows t1, dba_scheduler_wingroup_members t2
2 where t1.window_name=t2.window_name and t2.window_group_name in ('MAINTENANCE_WINDOW_GROUP','BSLN_MAINTAIN_STATS_SCHED');
WINDOW_NAME REPEAT_INTERVAL DURATION
-
MONDAY_WINDOW freq=daily;byday=MON;byhour=22;byminute=0; bysecond=0 + 000 04:00:00
TUESDAY_WINDOW freq=daily;byday=TUE;byhour=22;byminute=0; bysecond=0 + 000 04:00:00
WEDNESDAY_WINDOW freq=daily;byday=WED;byhour=22;byminute=0; bysecond=0 + 000 04:00:00
THURSDAY_WINDOW freq=daily;byday=THU;byhour=22;byminute=0; bysecond=0 + 000 04:00:00
FRIDAY_WINDOW freq=daily;byday=FRI;byhour=22;byminute=0; bysecond=0 + 000 04:00:00
SATURDAY_WINDOW freq=daily;byday=SAT;byhour=6;byminute=0; bysecond=0 + 000 20:00:00
SUNDAY_WINDOW freq=daily;byday=SUN;byhour=6;byminute=0; bysecond=0 + 000 20:00:00
7 rows selected.
Turn off automatic statistics collection
BEGIN
  DBMS_SCHEDULER.DISABLE(
    name  => '"SYS"."SATURDAY_WINDOW"',
    force => TRUE);
END;
/
Modify the duration of automatic statistics collection
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => '"SYS"."SATURDAY_WINDOW"',
    attribute => 'DURATION',
    value     => numtodsinterval(240, 'minute'));
END;
/
Modify automatic statistics start time
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => '"SYS"."SATURDAY_WINDOW"',
    attribute => 'REPEAT_INTERVAL',
    value     => 'freq=daily;byday=SAT;byhour=22;byminute=0;bysecond=0');
END;
/
Turn on automatic statistics collection
BEGIN
  DBMS_SCHEDULER.ENABLE(
    name => '"SYS"."SATURDAY_WINDOW"');
END;
/
SQL > set linesize 200
SQL > col REPEAT_INTERVAL for A60
SQL > col DURATION for A30
SQL > select t1.window_name, t1.repeat_interval, t1.duration from dba_scheduler_windows t1, dba_scheduler_wingroup_members t2
2 where t1.window_name=t2.window_name and t2.window_group_name in ('MAINTENANCE_WINDOW_GROUP','BSLN_MAINTAIN_STATS_SCHED');
WINDOW_NAME REPEAT_INTERVAL DURATION
-
MONDAY_WINDOW freq=daily;byday=MON;byhour=22;byminute=0; bysecond=0 + 000 04:00:00
TUESDAY_WINDOW freq=daily;byday=TUE;byhour=22;byminute=0; bysecond=0 + 000 04:00:00
WEDNESDAY_WINDOW freq=daily;byday=WED;byhour=22;byminute=0; bysecond=0 + 000 04:00:00
THURSDAY_WINDOW freq=daily;byday=THU;byhour=22;byminute=0; bysecond=0 + 000 04:00:00
FRIDAY_WINDOW freq=daily;byday=FRI;byhour=22;byminute=0; bysecond=0 + 000 04:00:00
SATURDAY_WINDOW freq=daily;byday=SAT;byhour=22;byminute=0; bysecond=0 + 000 04:00:00
SUNDAY_WINDOW freq=daily;byday=SUN;byhour=6;byminute=0; bysecond=0 + 000 20:00:00
7 rows selected.
Collect statistics manually
Collect table statistics
exec dbms_stats.gather_table_stats(ownname => 'USER', tabname => 'TEST', estimate_percent => 10, method_opt => 'for all indexed columns');
exec dbms_stats.gather_table_stats(ownname => 'USER', tabname => 'TAB_NAME', cascade => TRUE);
Collect partition statistics for a partition table
exec dbms_stats.gather_table_stats(ownname => 'USER', tabname => 'RANGE_PART_TAB', partname => 'P_201312', estimate_percent => 10, method_opt => 'for all indexed columns', cascade => TRUE);
Collect index statistics
exec dbms_stats.gather_index_stats(ownname => 'USER', indname => 'IDX_OBJECT_ID', estimate_percent => 10);
Collect table and index statistics
exec dbms_stats.gather_table_stats(ownname => 'USER', tabname => 'TEST', estimate_percent => 10, method_opt => 'for all indexed columns', cascade => TRUE);
Collect statistics for a user
exec dbms_stats.gather_schema_stats(ownname => 'CS', estimate_percent => 10, method_opt => 'for all indexed columns', degree => 8, cascade => true, granularity => 'ALL');
Collect statistics for the entire database
exec dbms_stats.gather_database_stats(estimate_percent => 10, method_opt => 'for all indexed columns', degree => 8, cascade => true, granularity => 'ALL');
ownname: user name (schema owner)
tabname: table name
partname: a partition name of the partitioned table
estimate_percent: sampling percentage, valid range [0.000001, 100]
block_sample: use random block sampling instead of random row sampling
method_opt: controls which columns get statistics and histograms (e.g. 'for all indexed columns')
cascade: whether to also collect statistics for the table's indexes
degree: degree of parallelism (number of CPUs) used for collection
granularity: granularity of the statistics to collect; 'ALL' collects all (subpartition, partition and global) statistics
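As a sketch of how these parameters combine (the schema SCOTT and table EMP are placeholders; in 11g and later, estimate_percent => dbms_stats.auto_sample_size is generally preferred over a fixed percentage):

```sql
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'SCOTT',                      -- placeholder schema
    tabname          => 'EMP',                        -- placeholder table
    estimate_percent => dbms_stats.auto_sample_size,  -- let Oracle choose the sample size
    method_opt       => 'for all columns size auto',  -- histograms where column usage warrants them
    cascade          => TRUE,                         -- also gather index statistics
    degree           => 4);                           -- parallel degree for the gather
END;
/
```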
Dynamic collection of statistical information
For a newly created table, Oracle dynamically samples the table when it is first accessed, but the statistics are only recorded in the data dictionary when the maintenance job runs at 22:00 in the evening (or when you gather them manually).
SQL > set autotrace off
SQL > set linesize 1000
SQL > drop table t_sample purge;
drop table t_sample purge
ERROR at line 1:
ORA-00942: table or view does not exist
SQL > create table t_sample as select * from dba_objects;
Table created.
SQL > create index idx_t_sample_objid on t_sample (object_id);
Index created.
No statistics exist yet for the newly created table:
SQL > select num_rows, blocks, last_analyzed from user_tables where table_name = 'T_SAMPLE';
NUM_ROWS BLOCKS LAST_ANAL
View the execution plan:
SQL > set autotrace traceonly
SQL > set linesize 1000
SQL > select * from t_sample where object_id=20;
Execution Plan
Plan hash value: 1453182238
--------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |     1 |   207 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T_SAMPLE           |     1 |   207 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_T_SAMPLE_OBJID |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
   2 - access("OBJECT_ID"=20)
Note
-----
   - dynamic sampling used for this statement (level=2)
Statistics
24 recursive calls
0 db block gets
93 consistent gets
1 physical reads
0 redo size
1608 bytes sent via SQL*Net to client
523 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
The note "dynamic sampling used for this statement (level=2)" indicates that dynamic sampling was used. Dynamic sampling does not write anything to the data dictionary unless you collect statistics on the table manually.
SQL > select num_rows, blocks, last_analyzed from user_tables where table_name = 'T_SAMPLE';
NUM_ROWS BLOCKS LAST_ANAL
SQL >
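To persist statistics for T_SAMPLE into the data dictionary right away instead of waiting for the maintenance window, they can be gathered manually; a minimal sketch, run as the table owner:

```sql
-- Gather table and index statistics for T_SAMPLE now
exec dbms_stats.gather_table_stats(user, 'T_SAMPLE', cascade => TRUE);

-- NUM_ROWS, BLOCKS and LAST_ANALYZED are now populated
select num_rows, blocks, last_analyzed
from   user_tables
where  table_name = 'T_SAMPLE';
```

After this gather, the dynamic sampling note disappears from the execution plan, because the optimizer can use the dictionary statistics directly.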
That is all the content of "how to view and collect Oracle statistics". Thank you for reading! I hope the content shared here helps you; if you want to learn more, welcome to follow the industry information channel!