Solution 2 for deploying statspack tools (2)

2025-04-02 Update. From: SLTechnology News&Howtos > Database


Shulou (Shulou.com) 06/01 Report

Solution 2: create an index on the empno column of emp2, then re-run the share_pool_sql_1.sh script and review the statspack report.

8.1 Create an index on the empno column of emp2

Sys@TESTDB12> create index ind_empno on scott.emp2(empno);

8.2 Re-execute the share_pool_sql_1.sh script and re-enable the statspack automatic snapshots

{oracle@Redhat55.cuug.net:/home/oracle/script/bin}$ sh share_pool_sql_1.sh

SQL> @?/rdbms/admin/spauto

8.3 Generate the statspack report

Perfstat@TESTDB12> select snap_id, snap_time, snap_level from stats$snapshot order by snap_time;

SNAP_ID SNAP_TIME SNAP_LEVEL
------- --------- ----------
      1 28-JUL-14          7
     11 28-JUL-14          7
     21 28-JUL-14          7
     31 28-JUL-14          7
     41 29-JUL-14          7
     51 29-JUL-14          7
     61 29-JUL-14          7
     71 29-JUL-14          7
     81 29-JUL-14          7
     91 29-JUL-14          7
    101 29-JUL-14          7
    111 29-JUL-14          7
    121 29-JUL-14          7
    131 29-JUL-14          7
    141 29-JUL-14          7
    151 29-JUL-14          7
    161 29-JUL-14          7
    171 29-JUL-14          7
    181 29-JUL-14          7
    191 29-JUL-14          7

Run spreport four times to generate reports for the four consecutive snapshot ranges:

Specify the Begin and End Snapshot Ids
Enter value for begin_snap: 131
Enter value for end_snap: 141
Enter value for report_name:

Specify the Begin and End Snapshot Ids
Enter value for begin_snap: 141
Enter value for end_snap: 151
Enter value for report_name:

Specify the Begin and End Snapshot Ids
Enter value for begin_snap: 151
Enter value for end_snap: 161
Enter value for report_name:

Specify the Begin and End Snapshot Ids
Enter value for begin_snap: 161
Enter value for end_snap: 171
Enter value for report_name:
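The four begin/end snapshot pairs above are just consecutive entries of the snapshot list paired up. A minimal sketch (snapshot ids taken from the listing above):

```python
# Build consecutive (begin_snap, end_snap) pairs from a list of snapshot ids,
# mirroring the four report runs above.
snap_ids = [131, 141, 151, 161, 171]

# Pair each snapshot with its successor: (131, 141), (141, 151), ...
pairs = list(zip(snap_ids, snap_ids[1:]))
print(pairs)  # [(131, 141), (141, 151), (151, 161), (161, 171)]
```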

8.4 Compare the data buffer hit rate and the library buffer hit rate for each period across the 4 newly generated statspack reports:

Time                  Buffer Hit (%)  Library Hit (%)
05:19:01 ~ 05:34:01            99.99            89.78
05:34:01 ~ 05:49:00            99.99            89.72
05:49:00 ~ 06:04:05            99.98            89.45
06:04:05 ~ 06:13:00            99.95            88.79

After creating the index on the empno column of emp2, the data buffer hit rate is significantly improved, staying above 99%, while the library buffer hit rate also improves slightly.
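For reference, the buffer cache hit rate reported here is conventionally derived from three counters (physical reads, db block gets, consistent gets). A minimal sketch of the formula, fed with the counts from the autotrace statistics in section 8.6 (0 db block gets, 78 consistent gets, 4 physical reads):

```python
def buffer_hit_pct(physical_reads: int, db_block_gets: int, consistent_gets: int) -> float:
    """Classic Oracle buffer cache hit ratio:
    1 - physical reads / logical reads, expressed as a percentage."""
    logical_reads = db_block_gets + consistent_gets
    return 100.0 * (1.0 - physical_reads / logical_reads)

# Counts from the single indexed lookup traced later in this article.
print(round(buffer_hit_pct(4, 0, 78), 2))  # 94.87
```

The statspack figure above is computed over all activity in the interval, which is why it sits much higher than a single cold lookup.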

8.5 Look at Top 5 Timed Events in each of the four reports to find the wait events related to disk I/O.

05:19:01 ~ 05:34:01
Event                         Waits  Time (s)
log file parallel write      45,110        54
log file sync                 6,240        46
os thread startup                34         5
control file parallel write     332         3

05:34:01 ~ 05:49:00
Event                         Waits  Time (s)
log file parallel write      48,413        36
log file sync                 3,563        28
os thread startup                33         5
db file sequential read       2,018         2

05:49:00 ~ 06:04:05
Event                         Waits  Time (s)
log file parallel write      49,564        23
log file sync                   455        15
db file sequential read       3,955         9
os thread startup                39         6

06:04:05 ~ 06:13:00
Event                         Waits  Time (s)
log file parallel write      28,273         8
db file sequential read       2,928         5
log file sync                   231         4
os thread startup                21         3

Comparing the four reports, direct path read no longer appears in Top 5 Timed Events, which shows that the I/O waits caused by the full table scan have been resolved. The disk I/O from log file parallel write and log file sync is still relatively high, but there are no other major resource consumers, indicating that system performance has improved.
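The Top 5 ordering above is simply the wait events ranked by total wait time. A sketch using the first period's numbers:

```python
# Rank wait events by total wait time, as in a Top 5 Timed Events section.
# (waits, seconds) pairs are taken from the 05:19:01 ~ 05:34:01 table above.
events = {
    "log file parallel write": (45110, 54),
    "log file sync": (6240, 46),
    "os thread startup": (34, 5),
    "control file parallel write": (332, 3),
}

# Sort descending by wait time in seconds.
top = sorted(events.items(), key=lambda kv: kv[1][1], reverse=True)
for name, (waits, secs) in top:
    print(f"{name:<28} {waits:>7,} {secs:>4}")
```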

8.6 The SQL statements causing the most physical reads were not identified in the report, so query them directly:

select sql_text from v$sql where disk_reads = (select max(disk_reads) from v$sql);

Time                  Executions  Rows per Exec
05:19:01 ~ 05:34:01       10,840           16.1
05:34:01 ~ 05:49:00       12,565           16.1
05:49:00 ~ 06:04:05       15,112           16.0
06:04:05 ~ 06:13:00       20,814           16.4

SQL statement (the same in all four periods):

select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$
where obj#=:1 and intcol#=:2 and row#=:3 order by bucket

Comparing the most resource-consuming SQL in each period shows the same statement with the same execution plan recurring; using bind variables keeps such statements shared in the library cache and improves execution efficiency.
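The benefit of bind variables can be illustrated with a toy shared-pool model: each distinct SQL text costs one hard parse, so literal values produce many hard parses while a bound statement parses once and is reused. A sketch (the dict-backed cache is a stand-in for the library cache, not Oracle's actual mechanism):

```python
# Toy model of the library cache: each distinct SQL text costs one hard parse.
def run_workload(statements):
    cache = set()
    hard_parses = 0
    for sql in statements:
        if sql not in cache:   # cache miss -> hard parse
            cache.add(sql)
            hard_parses += 1
    return hard_parses

# Literals: every empno value yields a distinct SQL text.
literals = [f"select * from scott.emp2 where empno={n}" for n in range(1000)]
# Binds: one shared text, executed 1000 times.
binds = ["select * from scott.emp2 where empno=:1"] * 1000

print(run_workload(literals))  # 1000 hard parses
print(run_workload(binds))     # 1 hard parse
```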

The execution plan of the generated statement (set autotrace traceonly):

Idle> select * from scott.emp2 where empno=1484;

Execution Plan
----------------------------------------------------------
Plan hash value: 2918945472

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |     1 |    48 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP2      |     1 |    48 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IND_EMPNO |     1 |       |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("EMPNO"=1484)

Statistics
----------------------------------------------------------
         55  recursive calls
          0  db block gets
         78  consistent gets
          4  physical reads
          0  redo size
       1033  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          7  sorts (memory)
          0  sorts (disk)
          1  rows processed
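The drop to 4 physical reads reflects the index range scan: a B-tree probe touches a handful of blocks instead of every row. A toy comparison, using a sorted list and bisect as a stand-in for the IND_EMPNO B-tree (the row counts are illustrative, not from emp2):

```python
import bisect

# Sorted empno values stand in for the IND_EMPNO B-tree index.
empnos = list(range(1000, 2000))

def full_scan_rows(values, target):
    """A full table scan examines every row."""
    hits = [v for v in values if v == target]
    return len(values), hits

def index_lookup(values, target):
    """An index range scan descends the sorted structure directly
    (bisect is O(log n), like a B-tree probe)."""
    i = bisect.bisect_left(values, target)
    found = i < len(values) and values[i] == target
    return i, found

rows_examined, _ = full_scan_rows(empnos, 1484)
pos, found = index_lookup(empnos, 1484)
print(rows_examined)  # 1000: the full scan touches every row
print(found)          # True: the index probe finds empno=1484 directly
```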

8.7 View the Buffer Pool Advisory and consider setting the buffer cache to the recommended size

Buffer Pool Advisory, 05:19:01 ~ 05:34:01

P  Size for  Size   Buffers      Phys Read  Est Phys Reads  Est Phys   Est %dbtime
   Est (M)   Factr  (thousands)  Factr      (thousands)     Read Time  for Rds
D        4     .1            0        8.0             261         345          5.2
D        8     .2            1        1.1              35          37           .6
D       12     .3            1        1.1              34          36           .5
D       16     .3            2        1.0              33          35           .5
D       20     .4            2        1.0              33          34           .5
D       24     .5            3        1.0              33          34           .5
D       28     .6            3        1.0              33          34           .5
D       32     .7            4        1.0              33          33           .5
D       36     .8            4        1.0              33          33           .5
D       40     .8            5        1.0              32          33           .5
D       44     .9            5        1.0              32          33           .5
D       48    1.0            6        1.0              32          33           .5
D       52    1.1            6        1.0              32          33           .5
D       56    1.2            7        1.0              32          33           .5
D       60    1.3            7        1.0              32          33           .5
D       64    1.3            8        1.0              32          33           .5
D       68    1.4            8        1.0              32          33           .5
D       72    1.5            9        1.0              32          33           .5
D       76    1.6            9        1.0              32          33           .5
D       80    1.7           10        1.0              32          33           .5

Buffer Pool Advisory, 05:34:01 ~ 05:49:00

P  Size for  Size   Buffers      Phys Read  Est Phys Reads  Est Phys   Est %dbtime
   Est (M)   Factr  (thousands)  Factr      (thousands)     Read Time  for Rds
D        4     .1            0        7.8             273         357          5.1
D        8     .2            1        1.1              37          39           .6
D       12     .3            1        1.0              37          38           .5
D       16     .3            2        1.0              36          37           .5
D       20     .4            2        1.0              35          37           .5
D       24     .5            3        1.0              35          36           .5
D       28     .6            3        1.0              35          36           .5
D       32     .7            4        1.0              35          36           .5
D       36     .8            4        1.0              35          36           .5
D       40     .8            5        1.0              35          36           .5
D       44     .9            5        1.0              35          36           .5
D       48    1.0            6        1.0              35          36           .5
D       52    1.1            6        1.0              35          36           .5
D       56    1.2            7        1.0              35          36           .5
D       60    1.3            7        1.0              35          36           .5
D       64    1.3            8        1.0              35          36           .5
D       68    1.4            8        1.0              35          36           .5
D       72    1.5            9        1.0              35          36           .5
D       76    1.6            9        1.0              35          36           .5
D       80    1.7           10        1.0              35          36           .5

Buffer Pool Advisory, 05:49:00 ~ 06:04:05

P  Size for  Size   Buffers      Phys Read  Est Phys Reads  Est Phys   Est %dbtime
   Est (M)   Factr  (thousands)  Factr      (thousands)     Read Time  for Rds
D        4     .1            0        7.6             302         438          6.0
D        8     .2            1        1.1              42          49           .7
D       12     .3            1        1.0              41          48           .7
D       16     .3            2        1.0              40          47           .6
D       20     .4            2        1.0              40          46           .6
D       24     .5            3        1.0              40          46           .6
D       28     .6            3        1.0              40          46           .6
D       32     .7            4        1.0              40          46           .6
D       36     .8            4        1.0              40          46           .6
D       40     .8            5        1.0              40          46           .6
D       44     .9            5        1.0              40          46           .6
D       48    1.0            6        1.0              40          46           .6
D       52    1.1            6        1.0              40          46           .6
D       56    1.2            7        1.0              40          46           .6
D       60    1.3            7        1.0              40          46           .6
D       64    1.3            8        1.0              40          46           .6
D       68    1.4            8        1.0              40          46           .6
D       72    1.5            9        1.0              40          46           .6
D       76    1.6            9        1.0              40          46           .6
D       80    1.7           10        1.0              40          46           .6

Buffer Pool Advisory, 06:04:05 ~ 06:13:00

P  Size for  Size   Buffers      Phys Read  Est Phys Reads  Est Phys   Est %dbtime
   Est (M)   Factr  (thousands)  Factr      (thousands)     Read Time  for Rds
D        4     .1            0        7.6             338         497          6.6
D        8     .2            1        1.0              47          56           .7
D       12     .3            1        1.0              46          55           .7
D       16     .3            2        1.0              45          54           .7
D       20     .4            2        1.0              45          54           .7
D       24     .5            3        1.0              45          54           .7
D       28     .6            3        1.0              45          53           .7
D       32     .7            4        1.0              45          53           .7
D       36     .8            4        1.0              45          53           .7
D       40     .8            5        1.0              45          53           .7
D       44     .9            5        1.0              45          53           .7
D       48    1.0            6        1.0              45          53           .7
D       52    1.1            6        1.0              45          53           .7
D       56    1.2            7        1.0              45          53           .7
D       60    1.3            7        1.0              45          53           .7
D       64    1.3            8        1.0              45          53           .7
D       68    1.4            8        1.0              45          53           .7
D       72    1.5            9        1.0              45          53           .7
D       76    1.6            9        1.0              45          53           .7
D       80    1.7           10        1.0              45          53           .7

As the Buffer Pool Advisory for all four periods shows, growing the buffer cache beyond its current size (size factor 1.0) would have little effect on performance.
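Reading the advisory programmatically: find the smallest cache size at which the estimated physical-read factor has flattened to 1.0, since further growth past that point no longer reduces physical reads. A sketch using rows from the first period's advisory:

```python
# (size_mb, phys_read_factor) pairs from the 05:19:01 ~ 05:34:01 advisory above.
advisory = [(4, 8.0), (8, 1.1), (12, 1.1), (16, 1.0), (20, 1.0), (24, 1.0),
            (28, 1.0), (32, 1.0), (36, 1.0), (40, 1.0), (44, 1.0), (48, 1.0)]

# Smallest size where the read factor has reached 1.0.
recommended = next(size for size, factor in advisory if factor <= 1.0)
print(recommended)  # 16 (MB): beyond this the read factor stays at 1.0
```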

8.8 View the Time Model System Stats

Time Model System Stats, 05:19:01 ~ 05:34:01

Statistic                             Time (s)  % DB time
DB CPU                                   440.5      119.9
parse time elapsed                       158.5       43.1
sql execute elapsed time                 145.1       39.5
hard parse elapsed time                  135.0       36.8
connection management call elapsed       108.8       29.6
PL/SQL execution elapsed time              5.7        1.6
hard parse (sharing criteria) elaps        1.3         .3
hard parse (bind mismatch) elapsed         1.2         .3
PL/SQL compilation elapsed time            0.8         .2
repeated bind elapsed time                 0.4         .1
sequence load elapsed time                 0.1         .0
DB time                                  367.4
background elapsed time                   75.1
background cpu time                       20.1

Time Model System Stats, 05:34:01 ~ 05:49:00

Statistic                             Time (s)  % DB time
DB CPU                                   455.9      124.3
parse time elapsed                       155.5       42.4
sql execute elapsed time                 149.9       40.9
hard parse elapsed time                  128.2       35.0
connection management call elapsed       104.6       28.5
PL/SQL execution elapsed time              6.8        1.9
hard parse (sharing criteria) elaps        2.5         .7
hard parse (bind mismatch) elapsed         2.4         .7
PL/SQL compilation elapsed time            0.8         .2
repeated bind elapsed time                 0.5         .1
sequence load elapsed time                 0.3         .1
DB time                                  366.8
background elapsed time                   54.4
background cpu time                       20.1

Time Model System Stats, 05:49:00 ~ 06:04:05

Statistic                             Time (s)  % DB time
DB CPU                                   463.3      122.2
parse time elapsed                       160.9       42.4
sql execute elapsed time                 158.6       41.9
hard parse elapsed time                  133.8       35.3
connection management call elapsed       103.6       27.3
PL/SQL execution elapsed time              7.3        1.9
hard parse (sharing criteria) elaps        2.1         .6
hard parse (bind mismatch) elapsed         1.9         .5
PL/SQL compilation elapsed time            1.1         .3
repeated bind elapsed time                 0.5         .1
sequence load elapsed time                 0.2         .0
DB time                                  379.0
background elapsed time                   52.7
background cpu time                       23.0

Time Model System Stats, 06:04:05 ~ 06:13:00

Statistic                             Time (s)  % DB time
DB CPU                                   269.2      119.5
parse time elapsed                       105.7       46.9
sql execute elapsed time                 102.9       45.6
hard parse elapsed time                   89.9       39.9
connection management call elapsed        58.2       25.8
PL/SQL execution elapsed time              4.0        1.8
hard parse (sharing criteria) elaps        2.0         .9
hard parse (bind mismatch) elapsed         1.6         .7
PL/SQL compilation elapsed time            1.1         .5
repeated bind elapsed time                 0.6         .3
sequence load elapsed time                 0.1         .1
DB time                                  225.4
background elapsed time                   19.6
background cpu time                       12.2

Comparing the Time Model System Stats across the four periods shows that hard parsing still accounts for a large share of DB time, and its share rises in the last period.
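That share can be computed directly from the tables above (hard parse elapsed time over DB time; the result may differ from the report's own % DB time column by a rounding step):

```python
# (hard_parse_s, db_time_s) per period, from the Time Model tables above.
periods = {
    "05:19:01 ~ 05:34:01": (135.0, 367.4),
    "05:34:01 ~ 05:49:00": (128.2, 366.8),
    "05:49:00 ~ 06:04:05": (133.8, 379.0),
    "06:04:05 ~ 06:13:00": (89.9, 225.4),
}

# Hard parse time as a percentage of DB time.
shares = {p: round(100 * hp / db, 1) for p, (hp, db) in periods.items()}
for period, share in shares.items():
    print(period, share)
```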

8.9 View the Latch Sleep Breakdown

Latch Sleep Breakdown, 05:19:01 ~ 05:34:01

Latch Name           Get Requests  Misses  Sleeps  Spin Gets
shared pool             3,787,761       4       4          0

Latch Sleep Breakdown, 05:34:01 ~ 05:49:00

Latch Name           Get Requests  Misses  Sleeps  Spin Gets
shared pool             4,107,841       5       5          0
JS Sh mem access                3       1       1          0
enqueue hash chains       320,877       1       1          0

Latch Sleep Breakdown, 05:49:00 ~ 06:04:05

Latch Name           Get Requests  Misses  Sleeps  Spin Gets
shared pool             4,257,852       8       8          0
row cache objects       3,956,966       3       3          0
call allocation           110,566       1       1          0
redo allocation            99,927       1       1          0

Latch Sleep Breakdown, 06:04:05 ~ 06:13:00

Latch Name           Get Requests  Misses  Sleeps  Spin Gets
shared pool             2,595,386       6       6          0
row cache objects       2,500,734       1       1          0

Comparing the Latch Sleep Breakdown sections of the four statspack reports shows that the cache buffers lru chain latch is gone, but shared pool latch misses have increased.
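The increase is clearer as a miss rate than as raw counts, since request volume varies between periods. A sketch computing shared pool latch misses per million get requests from the tables above:

```python
# Shared pool latch (get_requests, misses) per period, from the tables above.
shared_pool = {
    "05:19:01 ~ 05:34:01": (3787761, 4),
    "05:34:01 ~ 05:49:00": (4107841, 5),
    "05:49:00 ~ 06:04:05": (4257852, 8),
    "06:04:05 ~ 06:13:00": (2595386, 6),
}

# Misses per million get requests.
rates = {p: round(1e6 * misses / reqs, 2) for p, (reqs, misses) in shared_pool.items()}
for period, rate in rates.items():
    print(period, rate)
```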
