From: SLTechnology News & Howtos (shulou.com), Database section. Updated 2025-01-30.
Author | Zheng Songlin
Republished from | Data and Cloud
WeChat account | OraNews
How a SQL Statement Came to Run in Parallel
1. Disasters caused by parallelism
Early one morning, a colleague from a provincial site told me that SQL statements in database sessions were running in parallel, driving up both the load and the session count. He had investigated for a long time without finding the specific cause and did not know how to resolve it.
I answered his question directly: isn't that obvious? There are only two common causes:
First: parallelism is enabled on the objects (indexes or tables).
Second: a PARALLEL hint is used in the SQL statement.
The on-site colleague said he had checked both and found neither. Hearing this, I was at first skeptical of how he had checked. Without a degree of parallelism set on any object and without hints, how could the SQL run in parallel? With this doubt, I asked him to send me screenshots of the query results one by one, as follows (all the cases in this article were reconstructed afterwards):
After seeing the results, I was briefly confused. What was going on? When I run into problems, I always remind myself to stay calm and not rush.
2. Working through the problem layer by layer
Was some parameter controlling it?
SQL> show parameter parallel
NAME                                 TYPE        VALUE
------------------------------------ ----------- --------
fast_start_parallel_rollback         string      LOW
parallel_adaptive_multi_user         boolean     TRUE
parallel_automatic_tuning            boolean     FALSE
parallel_degree_limit                string      CPU
parallel_degree_policy               string      MANUAL
parallel_execution_message_size      integer     16384
parallel_force_local                 boolean     FALSE
parallel_instance_group              string
parallel_io_cap_enabled              boolean     FALSE
parallel_max_servers                 integer     320
parallel_min_percent                 integer     0
parallel_min_servers                 integer     0
parallel_min_time_threshold          string      AUTO
parallel_server                      boolean     FALSE
parallel_server_instances            integer     1
parallel_servers_target              integer     128
parallel_threads_per_cpu             integer     2
recovery_parallelism                 integer     0
No suspicious parameters were found.
At this point, surface-level checks could not explain the problem, so I asked the on-site team to pick one of the parallel SQL statements, execute it manually, and collect a 10053 event trace to see whether anything new would turn up.
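The original script was provided on site as a screenshot. A typical way to collect a 10053 optimizer trace for a single statement looks like the sketch below; the tracefile identifier and the test statement are placeholders, not the author's originals:

```sql
-- tag the trace file so it is easy to find on the server
alter session set tracefile_identifier = 'dop_check';

-- enable the 10053 optimizer trace event for this session
alter session set events '10053 trace name context forever, level 1';

-- run the statement under investigation; a hard parse is required,
-- so change a comment or literal to force a fresh optimization
select /* dop_check */ count(*) from some_table;

-- turn the event off again
alter session set events '10053 trace name context off';
```

The resulting trace file (in the diagnostic trace directory) contains the full parameter list the optimizer used for that parse.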
Soon the trace file was provided, and I looked at its parameter list first.
There I found a suspicious parameter: parallel_query_default_dop = 16
Searching MOS for information on this parameter showed that it holds the default degree of parallelism. Its value is computed as:
DEFAULT DOP = cpu_count * parallel_threads_per_cpu * cluster_database_instances
For example, with cpu_count = 8, parallel_threads_per_cpu = 2, and a single instance, DEFAULT DOP = 8 * 2 * 1 = 16, matching the value seen in the trace.
I immediately asked the on-site colleagues whether the running SQL showed 16 parallel processes in the active sessions. They replied that this was basically what they had observed. At this point the problem became clear: the SQL was being executed with the default degree of parallelism, controlled by parallel_query_default_dop. Since the default parallelism was taking effect, it must also have been set on the objects (otherwise the degree would be 1). So I modified the query used earlier to check whether object-level parallelism was enabled, as follows (all the cases in this article were reconstructed afterwards):
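The on-site check itself was sent as a screenshot; a query of this shape lists objects whose degree of parallelism is not 1 (the owner name is a placeholder). Note that DEGREE is stored as a padded string in the dictionary, hence the TRIM:

```sql
-- tables whose degree of parallelism is not 1 (owner is a placeholder)
select owner, table_name, degree
from   dba_tables
where  owner = 'SGPM'
and    trim(degree) not in ('1');

-- indexes whose degree of parallelism is not 1
select owner, index_name, degree
from   dba_indexes
where  owner = 'SGPM'
and    trim(degree) not in ('1');
```

A value of DEFAULT in the DEGREE column means the object uses the computed default DOP rather than a fixed number.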
I was pleased when the screenshot of the query results came through: there was clearly a table and an index whose parallelism was set to DEFAULT. We then confirmed that they were the objects referenced by the running SQL.
3. Problem solving
Since the default parallelism was set on the objects, the fix was simply to remove it by executing the following SQL:
-- for tables
alter table table_name noparallel;
-- for indexes
alter index index_name noparallel;
I asked the site to reset the object parallelism to 1 this way and execute the SQL again; the parallel execution disappeared and the database returned to normal.
Although the problem was solved, one question remained: under what circumstances does the parallelism get set to DEFAULT? Normally both indexes and tables are created with a degree of 1.
4. How parallelism gets set to DEFAULT
Through experimentation, I found that the parallelism can be set to DEFAULT in the following two ways:
1. Specify it when creating the table.
2. Modify the table after it is created.
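The original examples were screenshots; the two ways can be sketched as follows (table and column names are placeholders). Using the PARALLEL keyword without an explicit degree is what records DEGREE as DEFAULT in the dictionary:

```sql
-- 1. at creation time: PARALLEL with no degree => DEGREE = DEFAULT
create table t_demo (id number) parallel;

-- 2. after creation: same effect
alter table t_demo parallel;

-- by contrast, an explicit degree records that number instead
alter table t_demo parallel 4;
```

This is an easy trap: someone runs `alter table ... parallel` for a one-off batch job and forgets to reset it, and every later full scan of the object becomes a parallel one.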
Summary: solving this kind of problem takes, first, careful thinking, and second, solid fundamentals.
After a DB Upgrade, a DBLINK Causes an Abnormal Execution Plan
The background: the collection intermediate database of one provincial network was upgraded from 10.2.0.4 to 11.2.0.4 (note: not an in-place upgrade on the old machine, but a fresh installation and migration to a new machine).
The day after the upgrade, the on-site colleague came to me and said that the interface jobs that synchronize archive data (the data interface uses a DBLINK for synchronization) were running very slowly, and sometimes could not run at all, affecting archive-data synchronization. It looked serious.
Keyword: DB upgrade from 10G to 11G
I had come across similar cases before and suspected the upgrade had changed the execution plans. I told the site to try setting the optimizer back to the old behavior, that is, change optimizer_features_enable to 10.2.0.4, which can be done online and takes effect immediately:
alter system set optimizer_features_enable='10.2.0.4' scope=both;
After the change, the archive-data synchronization task was re-executed to see whether it behaved normally.
After some testing on site, the problem was not solved; the old experience did not apply here.
OK, so next we ran the following simulation test.
The text of the SQL is as follows:
INSERT INTO EPCT.C_CUST_ADDR@EPEXDB
  (CUST_ID,
   CUST_ADDR,
   PROVINCE_CODE,
   CITY_CODE,
   COUNTY_CODE,
   STREET_CODE,
   VILLAGE_CODE,
   ROAD_CODE,
   COMMUNITY_CODE,
   PLATE_NO,
   TYPE_CODE,
   POSTALCODE,
   CA_ID,
   APP_NO)
SELECT A2.CUST_ID,
       A2.CUST_ADDR,
       A2.PROVINCE_CODE,
       A2.CITY_CODE,
       A2.COUNTY_CODE,
       A2.STREET_CODE,
       A2.VILLAGE_CODE,
       A2.ROAD_CODE,
       A2.COMMUNITY_CODE,
       A2.PLATE_NO,
       A2.TYPE_CODE,
       A2.POSTALCODE,
       A2.CA_ID,
       ''
FROM SGPM.C_CUST_ADDR A2
WHERE A2.CUST_ID = ANY
      (SELECT A3.CUST_ID
       FROM SGPM.C_CONS A5,
            SGPM.R_CP_CONS_RELA A4,
            SGPM.C_CUST A3
       WHERE A4.CONS_ID = A5.CONS_ID
       AND   A4.CP_NO = :B1
       AND   A5.CUST_ID = A3.CUST_ID)
You can see that this is an insert over a DBLINK from database A into database B. The SQL is initiated in database A (where the program is deployed), while the actual execution site of the SQL is database B. Although the statement inserts into B, a derived query SQL is issued back to A to fetch the data.
1. For INSERT INTO remote_table@dblink SELECT * FROM local_table, the executing site of the SQL is the remote database, not the local one, and it cannot be changed with the DRIVING_SITE hint.
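For comparison, the DRIVING_SITE hint does work for plain distributed queries; a sketch, with placeholder table names:

```sql
-- force the remote site (alias r) to drive a distributed query
select /*+ driving_site(r) */ l.id, r.val
from   local_tab l,
       remote_tab@dblink r
where  l.id = r.id;

-- for remote DML such as INSERT INTO remote_tab@dblink SELECT ...,
-- the executing site is fixed at the remote database and the hint
-- has no effect
```

This is why, in this case, the plan had to be diagnosed on both sides of the link rather than simply hinted from the local database.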
2. Next, confirm whether a derived SQL appears in database A and find its SQL_ID.
3. Once the site provided the SQL_ID, we could obtain the execution details of the statement:
select * from table(dbms_xplan.display_cursor('1ar4us01aj0hu', null, 'ADVANCED'));
One item in the plan output caught my attention, and a sharp-eyed DBA would quickly spot what was fishy about it.
Yes, an internal function is being called here. It is described as follows:
The internal Oracle function SYS_OP_C2C performs conversion from one character set to another (C2C: CharacterSet-to-CharacterSet).
So a character-set conversion was taking place. Seeing this, I asked the site: what are the character sets of the old and new B databases, and what is the character set of database A?
The answers from the site were as follows:
The old B database character set is UTF8.
The new B database character set is ZHS16GBK.
The A database character set is UTF8.
This showed that the cause of the sharp performance drop after migrating to the new collection intermediate database had been found: the character-set mismatch forced an implicit SYS_OP_C2C conversion on the compared columns, which prevented the existing indexes from being used.
There were then two ways to solve the problem:
First, change the character set so that the source and target character sets are consistent.
Second, create a function-based index.
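A sketch of the second option, assuming (as the plan suggested) that SYS_OP_C2C was being wrapped around the join column; the table and column names are placeholders. A function-based index on the converted value lets the optimizer use an index again despite the implicit conversion:

```sql
-- function-based index matching the implicit conversion seen in the plan
-- (some_table / some_column are placeholders for the actual objects)
create index idx_c2c on some_table (sys_op_c2c(some_column));
```

The first option (aligning the character sets) is cleaner but requires downtime or re-migration, which is why the function-based index is often chosen as the short-term fix.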
A Domain Index Causes a COMMIT Error
Recently I handled a problem from a provincial network; the error reported by on-site feedback was as follows:
COMMIT
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20000: Oracle Text error:
DRG-50610: internal error: drexdsync
DRG-50857: oracle error in drekrtd (reselect rowid row locator)
ORA-00942: table or view does not exist
ORA-06512: at "CTXSYS.SYNCRN", line 1
ORA-06512: at line 1
From this error we can read the following:
1. It is an error related to a domain index.
2. It is raised from recursive SQL.
3. A table or view does not exist (most likely a permissions issue, or the object may genuinely be missing).
Seeing this, I first had the site verify permissions, including those of the operating user.
The verification found nothing abnormal.
Further analysis:
1. Query the domain index information:
select * from ctxsys.ctx_indexes;
2. Creating a domain index automatically creates four secondary tables and one index, whose storage is controlled by the BASIC_STORAGE preference.
BASIC_STORAGE has the following attributes:
I_TABLE_CLAUSE: parameter clause for creating the DR$index_name$I table (the token index data table).
K_TABLE_CLAUSE: parameter clause for creating the DR$index_name$K table (the keymap table).
R_TABLE_CLAUSE: parameter clause for creating the DR$index_name$R table (the rowid table).
N_TABLE_CLAUSE: parameter clause for creating the DR$index_name$N table (the negative list table).
I_INDEX_CLAUSE: parameter clause for creating the DR$index_name$X index.
You can run the following query in your own environment:
select owner, object_name, object_type, secondary, status
from   dba_objects
where  owner = 'SGPM'
and    object_name like 'DR$INDEX_NAME$%'; -- change INDEX_NAME to your actual index name
The on-site query returned no rows, indicating that the domain index's secondary objects no longer existed. That is what caused the commit error: the recursive SQL that maintains the domain index failed.
Once the problem is located, the solution is easy: simply rebuild the domain index.
The example I give here specifies the actual storage tablespace for the domain index, so that its location is controlled; if it is not specified, the objects are created in the owning user's default tablespace.
begin
  -- create the lexer (lexical analysis); already created here, so commented out
  -- ctx_ddl.create_preference('chinese_lexer', 'CHINESE_LEXER');
  -- storage parameters
  ctx_ddl.create_preference('t1_storage', 'BASIC_STORAGE');
  ctx_ddl.set_attribute('t1_storage', 'I_TABLE_CLAUSE', 'tablespace TEST');
  ctx_ddl.set_attribute('t1_storage', 'K_TABLE_CLAUSE', 'tablespace TEST');
  ctx_ddl.set_attribute('t1_storage', 'R_TABLE_CLAUSE', 'tablespace TEST');
  ctx_ddl.set_attribute('t1_storage', 'N_TABLE_CLAUSE', 'tablespace TEST');
  ctx_ddl.set_attribute('t1_storage', 'I_INDEX_CLAUSE', 'tablespace TEST');
end;
/
-- create the domain index, specifying the storage preference and the lexer
create index idx1_t1 on t1 (object_name) indextype is ctxsys.context parameters ('lexer chinese_lexer storage t1_storage');
-- synchronize the domain index data (note: this operation carries business risk and must not be underestimated)
First query to confirm whether the domain index needs to be synchronized:
select u.username, i.idx_name
from   ctxsys.dr$index i, dba_users u
where  u.user_id = i.idx_owner#
and    i.idx_id in (select pnd_cid from ctxsys.dr$pending);
exec ctx_ddl.sync_index('IDX1_T1');
-- optimize the domain index data (note: this operation also carries business risk)
exec ctx_ddl.optimize_index('IDX1_T1', 'FULL');
About the Author
Zheng Songlin is a database technology expert at Langxin Technology Co., Ltd. with more than 10 years of experience, mainly serving mobile-operator, power, securities, and manufacturing customers. He is proficient in Oracle performance optimization and fault diagnosis and handling, as well as MySQL database optimization and fault handling. He is mainly responsible for performance optimization and fault handling for Langxin's State Grid deployments in 12 provinces and for China Southern Power Grid; he led hot/cold data isolation and space-reclamation work for a securities company (total data volume 100 TB), presided over an XTTS migration for a power grid company, and reclaimed a total of 15 TB of space through historical-data cleanup and high-water-mark processing for a power grid company's core marketing system.