2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
In this post the editor shares what to do when the lag of an OGG Replicat process keeps growing. Since many readers may not be familiar with the topic, this walkthrough is offered for reference; I hope you find it useful.
-bash-3.2$ ogg
Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.6 16211226 OGGCORE_11.2.1.0.6_PLATFORMS_130418.1829_FBO
Solaris, sparc, 64bit (optimized), Oracle 11g on Apr 22 2013 15:23:39
Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.
GGSCI (ODSDB) 1 > info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
JAGENT STOPPED
REPLICAT RUNNING RP10 00:00:00 00:00:03
REPLICAT RUNNING RP7 00:00:06 00:00:03
REPLICAT RUNNING RPS1 42:41:14 00:00:01
REPLICAT RUNNING RPS2 00:00:00 00:00:10
REPLICAT RUNNING RPS2A 00:00:00 00:00:00
REPLICAT RUNNING RPS2B 00:00:00 00:00:00
REPLICAT RUNNING RPS3 00:00:00 00:00:02
REPLICAT RUNNING RPS3A 00:00:00 00:00:03
REPLICAT RUNNING RPS3B 00:00:00 00:00:02
REPLICAT RUNNING RPS4 00:00:00 00:00:07
REPLICAT RUNNING RPS4A 00:00:00 00:00:07
REPLICAT RUNNING RPS4B 00:00:00 00:00:07
REPLICAT RUNNING RPS4C 00:00:00 00:00:07
REPLICAT RUNNING RPS4D 00:00:00 00:00:08
REPLICAT RUNNING RPS4E 00:00:00 00:00:07
REPLICAT RUNNING RPS4F 00:00:00 00:00:07
REPLICAT RUNNING RPS4G 00:00:00 00:00:07
REPLICAT RUNNING RPS4H 00:00:00 00:00:07
REPLICAT RUNNING RPS4I 00:00:00 00:00:08
REPLICAT RUNNING RPS4J 00:00:00 00:00:07
I went to the terminal with the colleague and confirmed there was indeed an RPS1 process with a large lag. Its status shows RUNNING, but after six years of maintaining OGG I know that RUNNING can be an illusion. How do we tell whether the process is really making progress?
1. Check ggserr.log for errors related to the process.
2. Check for a .dsc (discard) file for the process under dirrpt.
3. Run info on the process several times and see whether its RBA keeps changing.
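Concretely, the three checks can be run like this (an illustrative session; the file names and the RPS1 process name are taken from this environment, and exact paths may differ in yours):

```
# 1. Scan the OGG error log for recent errors or warnings
$ tail -200 ggserr.log | grep -i -E 'ERROR|WARN'

# 2. Look for a discard file for the process under dirrpt
$ ls -l dirrpt/RPS1*.dsc

# 3. From GGSCI, query the process repeatedly and compare the RBA values
GGSCI> info replicat rps1
GGSCI> info replicat rps1
```

If the RBA keeps advancing between runs, the Replicat is still applying data, and the growing lag points to slow apply throughput rather than a hung process.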
After these checks the RPS1 process itself looks healthy, so where does the high latency come from? view params rps1 shows that this process replicates only a single table. Let's find out where the process is stuck.
SQL> !ps -ef | grep rps1 | grep -v grep
orao 12169 11012 0 Jun 07 ? 614:58 /odsogg/baseogg/replicat PARAMFILE /odsogg/baseogg/dirprm/rps1.prm REPORTFILE /
SQL> !ps -ef | grep 12169 | grep -v grep
orao 12170 12169 1 Jun 07 ? 2315 oracleodb (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)
orao 12169 11012 0 Jun 07 ? 614:58 /odsogg/baseogg/replicat PARAMFILE /odsogg/baseogg/dirprm/rps1.prm REPORTFILE /
SQL> select s.sid, s.serial#, s.sql_id from v$process p, v$session s where p.addr = s.paddr and p.spid = 12170;
SID SERIAL# SQL_ID
5146 113 5099dwmx3s4mf
SQL > /
SID SERIAL# SQL_ID
5146 113 dzubcf8jm69yx
SQL > /
SID SERIAL# SQL_ID
5146 113 dzubcf8jm69yx
SQL > /
SID SERIAL# SQL_ID
5146 113 dzubcf8jm69yx
A colleague who used to maintain this system was asked whether the table's statistics might be stale, so we followed that lead.
SQL> select owner, table_name, last_analyzed from dba_tables where table_name = 'TF_F_USER_REXXX';
OWNER TABLE_NAME LAST_ANALYZED
TTT TF_F_USER_REXXX 2017-06-26 00:20:56
SQL> select table_name, num_rows from dba_tables where owner = 'TTT' and table_name = 'TF_F_USER_REXXX';
TABLE_NAME NUM_ROWS
TF_F_USER_REXXX 409269832
SQL> select count(*) from TTT.TF_F_USER_REXXX;
COUNT (*)
-
409955510
Next, let's check whether the SQL is running on a bad execution plan.
Of the two execution plans, plan 137632316 has clearly been executed about 5,400 times, yet its average execution time is far longer than that of the other plan.
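The per-plan figures can be obtained by aggregating v$sql by plan hash value; a minimal sketch of such a query (the column arithmetic is an assumption about how the averages were computed, in the same spirit as the coe script's AVG_ET_SECS):

```
SQL> select plan_hash_value,
            sum(executions) executions,
            round(sum(elapsed_time) / 1e6 / nullif(sum(executions), 0), 3) avg_et_secs
       from v$sql
      where sql_id = 'dzubcf8jm69yx'
      group by plan_hash_value;
```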
Bind the good execution plan to the statement with a SQL profile:
SQL > @ coe_xfr_sql_profile.sql
Parameter 1:
SQL_ID (required)
Enter value for 1: dzubcf8jm69yx
PLAN_HASH_VALUE AVG_ET_SECS
--------------- -----------
     2320424968        .004
      137632316      21.119
Parameter 2:
PLAN_HASH_VALUE (required)
Enter value for 2: 2320424968
Values passed to coe_xfr_sql_profile:
~ ~
SQL_ID: "dzubcf8jm69yx"
PLAN_HASH_VALUE: "2320424968"
Check whether the profile was created successfully:
SQL> select name from dba_sql_profiles;
NAME
------------------------------
coe_dzubcf8jm69yx_2320424968
SYS_SQLPROF_015d075ccdd40000
coe_a5hgtqfq09tcu_229988255
Then find the statement's cursors still in memory, so they can be purged from the shared pool:
SQL> select address, hash_value, executions, parse_calls from v$sql where sql_text like 'UPDATE "TTT"."TF_F_USER_RExxxx" SET "PARTITION_ID" = :A27,"USER_ID" = :A28%';
ADDRESS HASH_VALUE EXECUTIONS PARSE_CALLS
--
00000017673EA5E8 4198240878 100 0
00000017CF3E0000 897328139 673 0
00000017982D40D0 1588244011 8 0
000000049D04B1A8 590555101 10 0
00000017AA525230 4271852353 2 0
0000001777FC6A20 3666847391 735 0
00000017B9154A58 1180290247 1 0
7 rows selected.
SQL> alter session set events '5614566 trace name context forever';
Session altered.
SQL> exec dbms_shared_pool.purge('00000017673EA5E8, 4198240878', 'C');
PL/SQL procedure successfully completed.
SQL> exec dbms_shared_pool.purge('00000017CF3E0000, 897328139', 'C');
PL/SQL procedure successfully completed.
SQL> exec dbms_shared_pool.purge('00000017982D40D0, 1588244011', 'C');
PL/SQL procedure successfully completed.
SQL> exec dbms_shared_pool.purge('000000049D04B1A8, 590555101', 'C');
PL/SQL procedure successfully completed.
SQL> exec dbms_shared_pool.purge('00000017AA525230, 4271852353', 'C');
PL/SQL procedure successfully completed.
SQL> exec dbms_shared_pool.purge('0000001777FC6A20, 3666847391', 'C');
PL/SQL procedure successfully completed.
SQL> exec dbms_shared_pool.purge('00000017B9154A58, 1180290247', 'C');
PL/SQL procedure successfully completed.
After the purge, the lag of the OGG replication process starts to come down. The hidden problem is resolved.
GGSCI (ODSDB) 8 > info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
JAGENT STOPPED
REPLICAT RUNNING RP10 00:00:00 00:00:01
REPLICAT RUNNING RP7 00:00:00 00:00:07
REPLICAT RUNNING RPS1 31:10:29 00:00:50
REPLICAT RUNNING RPS2 00:00:00 00:00:00
REPLICAT RUNNING RPS2A 00:00:00 00:00:00
REPLICAT RUNNING RPS2B 00:00:00 00:00:00
REPLICAT RUNNING RPS3 00:00:00 00:00:01
REPLICAT RUNNING RPS3A 00:00:00 00:00:02
REPLICAT RUNNING RPS3B 00:00:00 00:00:01
REPLICAT RUNNING RPS4 00:00:00 00:00:05
REPLICAT RUNNING RPS4A 00:00:00 00:00:04
REPLICAT RUNNING RPS4B 00:00:00 00:00:04
REPLICAT RUNNING RPS4C 00:00:00 00:00:04
REPLICAT RUNNING RPS4D 00:00:00 00:00:04
REPLICAT RUNNING RPS4E 00:00:00 00:00:04
REPLICAT RUNNING RPS4F 00:00:00 00:00:04
REPLICAT RUNNING RPS4G 00:00:00 00:00:05
REPLICAT RUNNING RPS4H 00:00:00 00:00:04
REPLICAT RUNNING RPS4I 00:00:00 00:00:04
REPLICAT RUNNING RPS4J 00:00:00 00:00:04
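While the remaining backlog drains, the recovery can be tracked directly from GGSCI (illustrative commands for this environment's RPS1 process):

```
GGSCI> lag replicat rps1
GGSCI> info replicat rps1
```

lag reports the delay as seen by the Replicat itself, and repeated info runs should now show the checkpoint RBA advancing quickly.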
That is the whole story of what to do when the lag of an OGG replication process keeps growing. Thanks for reading, and I hope this walkthrough helps you with similar problems.