How to convert a normal table to a partitioned table in the database

2025-01-18 Update | From: SLTechnology News&Howtos

This article explains how to convert a normal (non-partitioned) table into a partitioned table in an Oracle database. The material is practical, so it is shared here for reference.

1.1 BLOG document structure map

1.2 Introduction

1.2.1 Introduction and precautions

After reading this article, you will have mastered the following skills, and may pick up some other knowledge along the way:

① Common methods for converting a regular table to a partitioned table (key points)

② Use of online redefinition

③ Optimization of CTAS and INSERT

④ How DML statements enable parallel operations, and how to check whether parallelism is enabled for DML.

Tips:

① If the code formatting in this article appears garbled, it is recommended to view it in the QQ, Sogou, or 360 browser; you can also download the PDF version of this document: http://yunpan.cn/cdEQedhCs2kFz (extraction code: ed9b)

② In the command output shown in this BLOG, areas that need special attention are marked with a gray background and pink font. For example, in the listing below, the highest archived log sequence for thread 1 is 33 and for thread 2 is 43, which deserves special attention. Commands themselves are generally marked with a yellow background and red font, while comments on code or output are generally shown in blue font.

List of Archived Logs in backup set 11

Thrd Seq     Low SCN    Low Time            Next SCN   Next Time
---- ------- ---------- ------------------- ---------- -------------------
1    32      1621589    2015-05-29 11:09:52 1625242    2015-05-29 11:15:48
1    33      1625242    2015-05-29 11:15:48 1625293    2015-05-29 11:15:58
2    42      1613951    2015-05-29 10:41:18 1625245    2015-05-29 11:15:49
2    43      1625245    2015-05-29 11:15:49 1625253    2015-05-29 11:15:53

[ZHLHRDB1:root]:/> lsvg -o
T_XDESK_APP1_vg
rootvg
[ZHLHRDB1:root]:/>

00:27:22 SQL> alter tablespace idxtbs read write;

If there are any mistakes or omissions in this article, please point them out via ITPUB or QQ. Your criticism is the greatest motivation for my writing.

1.2.2 links to related reference articles

The reference document is How to Partition a Non-partitioned / Regular / Normal Table on MOS (document ID 1070693.6), which has been uploaded to the cloud disk and can be downloaded there.

1.2.3 introduction to this article

This article introduces four methods of converting a non-partitioned table into a partitioned table, as described in the MOS document How to Partition a Non-partitioned / Regular / Normal Table (document ID 1070693.6):

1. Export/import method

2. Insert with a subquery method

3. Partition exchange method

4. DBMS_REDEFINITION

Chapter II: Four ways to convert a non-partitioned table to a partitioned table

2.1 Export/import method

Using logical export/import is very simple: first export the data, then drop the original table and recreate it as a partitioned table, and finally import the data into the new partitioned table.

1) Export the table: exp usr/pswd tables=numbers file=exp.dmp

2) Drop the table: drop table numbers;

3) Recreate the table as a partitioned table:

create table numbers (qty number(3), name varchar2(15))
partition by range (qty)
(partition p1 values less than (501),
 partition p2 values less than (maxvalue));

4) Import into the partitioned table with ignore=y: imp usr/pswd file=exp.dmp ignore=y
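After step 4, it is worth confirming that the imported rows landed in the intended partitions. A minimal check (a sketch, assuming the numbers table and the p1/p2 partitions created above):

```sql
-- Count rows per partition after the import
SELECT COUNT(*) FROM numbers PARTITION (p1);
SELECT COUNT(*) FROM numbers PARTITION (p2);

-- Or list the partition definitions from the data dictionary
SELECT partition_name, high_value
  FROM user_tab_partitions
 WHERE table_name = 'NUMBERS';
```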

2.1.1 Example

Create a normal table and insert test data

LHR@dlhr> CREATE TABLE T (ID NUMBER PRIMARY KEY, TIME DATE);

Table created.

LHR@dlhr> INSERT INTO T SELECT ROWNUM, CREATED FROM DBA_OBJECTS;

87069 rows created.

LHR@dlhr> commit;

Commit complete.

LHR@dlhr> select to_char(t.time, 'YYYYMM'), COUNT(1)
  2  from t
  3  group by to_char(t.time, 'YYYYMM');

TO_CHA  COUNT(1)
------ ---------
201310     85984
201605      1107

Export the table using expdp

[ZFXDESKDB2:oracle]:/tmp> expdp \'/ as sysdba\' directory=DATA_PUMP_DIR dumpfile=lhr_t.dmp INCLUDE=TABLE:\"IN\(\'T\'\)\" SCHEMAS=LHR LOGFILE=expdp_T.log

Export: Release 11.2.0.4.0-Production on Fri May 27 11:07:46 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0-64bit Production

With the Partitioning, Real Application Clusters, OLAP, Data Mining

And Real Application Testing options

Starting "SYS". "SYS_EXPORT_SCHEMA_01": "/ * AS SYSDBA" directory=DATA_PUMP_DIR dumpfile=lhr_t.dmp INCLUDE=TABLE: "IN ('T')" SCHEMAS=LHR LOGFILE=expdp_T.log

Estimate in progress using BLOCKS method...

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 2 MB

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

. . Exported "LHR". "T" 1.406 MB 87091 rows

Master table "SYS". "SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

*

Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:

/ oracle/app/oracle/admin/dlhr/dpdump/lhr_t.dmp

Job "SYS". "SYS_EXPORT_SCHEMA_01" successfully completed at Fri May 27 11:07:57 2016 elapsed 0 00:00:11

Delete the original table and create a partitioned table structure:

LHR@dlhr> drop table t;

Table dropped.

LHR@dlhr> CREATE TABLE T (ID NUMBER PRIMARY KEY, TIME DATE)
  2  PARTITION BY RANGE (TIME)
  3  (PARTITION T1 VALUES LESS THAN (TO_DATE('201311', 'YYYYMM')),
  4   PARTITION T2 VALUES LESS THAN (TO_DATE('201606', 'YYYYMM')),
  5   PARTITION T3 VALUES LESS THAN (MAXVALUE));

Table created.

LHR@dlhr>

Import to partition table

[ZFXDESKDB2:oracle]:/tmp> impdp \'/ as sysdba\' directory=DATA_PUMP_DIR dumpfile=lhr_t.dmp SCHEMAS=LHR table_exists_action=APPEND LOGFILE=impdp_T.log

Import: Release 11.2.0.4.0-Production on Fri May 27 11:12:40 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0-64bit Production

With the Partitioning, Real Application Clusters, OLAP, Data Mining

And Real Application Testing options

Master table "SYS". "SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded

Starting "SYS". "SYS_IMPORT_SCHEMA_01": "/ * AS SYSDBA" directory=DATA_PUMP_DIR dumpfile=lhr_t.dmp SCHEMAS=LHR table_exists_action=APPEND LOGFILE=impdp_T.log

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Table "LHR". "T" exists Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

. . Imported "LHR". "T" 1.406 MB 87091 rows

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Job "SYS". "SYS_IMPORT_SCHEMA_01" successfully completed at Fri May 27 11:12:46 2016 elapsed 0 00:00:05

[ZFXDESKDB2:oracle]: / tmp >

Query the situation after import:

SYS@dlhr> select to_char(t.time, 'YYYYMM'), COUNT(1)
  2  from t
  3  group by to_char(t.time, 'YYYYMM');

TO_CHA  COUNT(1)
------ ---------
201310     85984
201605      1083

SYS@dlhr> SELECT D.TABLE_OWNER, D.TABLE_NAME, D.PARTITION_NAME FROM DBA_TAB_PARTITIONS D WHERE D.TABLE_NAME = 'T';

TABLE_OWNER TABLE_NAME PARTITION_NAME
----------- ---------- --------------
LHR         T          T1
LHR         T          T2
LHR         T          T3

SYS@dlhr>

2.2 Rebuild the partitioned table from the original table (CTAS / INSERT)

The characteristics of this method are:

Advantages: the method is simple and easy to use. Because it relies on DDL statements, no UNDO and only a small amount of REDO are generated, so efficiency is relatively high, and once the table is created the data is already distributed across the partitions.

Deficiency: data consistency needs extra consideration. Since there is almost no way to guarantee consistency other than locking table T manually, changes made while the CREATE TABLE and RENAME T_NEW TO T statements are executing may be lost. If consistency must be guaranteed, the data has to be checked after the statements complete, which is relatively costly. In addition, access to T between the two RENAME statements will fail.

It is suitable for tables that are modified infrequently and operate at leisure, and the amount of data in the table should not be too large.

There are two main approaches, CTAS and INSERT, described below:

2.2.1 case 1: CTAS+RENAME

With CTAS syntax you can load the data while creating the partitioned table, or create the table structure first and then insert. Because CTAS is a DDL statement, it generates no UNDO and only a small amount of REDO, and once the table is created the data is already distributed across the partitions.

Create a normal table and insert test data

LHR@dlhr> CREATE TABLE T (ID NUMBER PRIMARY KEY, TIME DATE);

Table created.

LHR@dlhr> INSERT INTO T SELECT ROWNUM, CREATED FROM DBA_OBJECTS;

87069 rows created.

LHR@dlhr> commit;

Commit complete.

LHR@dlhr> select to_char(t.time, 'YYYYMM'), COUNT(1)
  2  from t
  3  group by to_char(t.time, 'YYYYMM');

TO_CHA  COUNT(1)
------ ---------
201310     85984
201605      1085

Create the partitioned table with CTAS; notice that the column list of the partitioned table carries no data types:

LHR@dlhr> CREATE TABLE T_NEW (ID, TIME) PARTITION BY RANGE (TIME)
  2  (PARTITION T1 VALUES LESS THAN (TO_DATE('201311', 'YYYYMM')),
  3   PARTITION T2 VALUES LESS THAN (TO_DATE('201606', 'YYYYMM')),
  4   PARTITION T3 VALUES LESS THAN (MAXVALUE))
  5  AS SELECT ID, TIME FROM T;

Table created.

LHR@dlhr>

Rename the tables:

LHR@dlhr> rename t to t_old;

Table renamed.

LHR@dlhr> rename t_new to t;

Table renamed.

Validate new table data

LHR@dlhr> select to_char(t.time, 'YYYYMM'), COUNT(1)
  2  from t
  3  group by to_char(t.time, 'YYYYMM');

TO_CHA  COUNT(1)
------ ---------
201310     85984
201605      1085

LHR@dlhr>

2.2.1.1 CTAS performance improvement

The performance of a CTAS statement can be improved in the following ways: ① add NOLOGGING; ② parallel DDL; ③ parallel query. Note that the table should be altered back to LOGGING mode as needed after creation completes.

CREATE TABLE T_NEW (ID, TIME)
PARTITION BY RANGE (TIME)
(PARTITION T1 VALUES LESS THAN (TO_DATE('201311', 'YYYYMM')),
 PARTITION T2 VALUES LESS THAN (TO_DATE('201606', 'YYYYMM')),
 PARTITION T3 VALUES LESS THAN (MAXVALUE))
NOLOGGING PARALLEL 4
AS SELECT /*+ PARALLEL */ ID, TIME FROM T;

Execution plan:

SYS@dlhr> explain plan for CREATE TABLE T_NEW (ID, TIME)
  2  PARTITION BY RANGE (TIME)
  3  (PARTITION T1 VALUES LESS THAN (TO_DATE('201311', 'YYYYMM')),
  4   PARTITION T2 VALUES LESS THAN (TO_DATE('201606', 'YYYYMM')),
  5   PARTITION T3 VALUES LESS THAN (MAXVALUE))
  6  nologging parallel 4
  7  AS SELECT /*+ PARALLEL */ ID, TIME FROM T;

Explained.

SYS@dlhr> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 4064487821

---------------------------------------------------------------------------------------------------------------
| Id | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------
|  0 | CREATE TABLE STATEMENT |          | 82787 | 1778K |    14  (0)| 00:00:01 |       |      |            |
|  1 |  PX COORDINATOR        |          |       |       |           |          |       |      |            |
|  2 |   PX SEND QC (RANDOM)  | :TQ10000 | 82787 | 1778K |     4  (0)| 00:00:01 | Q1,00 | P->S | QC (RAND)  |
|  3 |    LOAD AS SELECT      | T_NEW    |       |       |           |          | Q1,00 | PCWP |            |
|  4 |     PX BLOCK ITERATOR  |          | 82787 | 1778K |     4  (0)| 00:00:01 | Q1,00 | PCWC |            |
|  5 |      TABLE ACCESS FULL | T        | 82787 | 1778K |     4  (0)| 00:00:01 | Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)
   - automatic DOP: skipped because of IO calibrate statistics are missing

17 rows selected.

SYS@dlhr>

You can see that both the query on table T and the CREATE TABLE itself are executed in parallel, which performs well when the amount of data in the source table is very large.
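As noted earlier, after a NOLOGGING parallel CTAS completes, the table should normally be switched back to LOGGING (and, if desired, to NOPARALLEL) so that subsequent DML is fully recoverable. A sketch:

```sql
-- Restore normal logging and serial execution after the bulk load
ALTER TABLE T_NEW LOGGING;
ALTER TABLE T_NEW NOPARALLEL;
```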

2.2.2 case 2: Insert with a subquery method

This approach first creates the table structure and then populates it with INSERT.

Look at the example:

Create a regular table T_LHR_20160527

LHR@dlhr> CREATE TABLE T_LHR_20160527 (ID NUMBER PRIMARY KEY, TIME DATE);

Table created.

LHR@dlhr> INSERT INTO T_LHR_20160527 SELECT ROWNUM, CREATED FROM DBA_OBJECTS;

87098 rows created.

LHR@dlhr> commit;

Commit complete.

LHR@dlhr> select to_char(t.time, 'YYYYMM'), COUNT(1)
  2  from T_LHR_20160527 t
  3  group by to_char(t.time, 'YYYYMM');

TO_CHA  COUNT(1)
------ ---------
201310     85984
201605      1114

Create a partition table T_LHR_20160527_NEW:

LHR@dlhr> CREATE TABLE T_LHR_20160527_NEW (ID NUMBER, TIME DATE)
  2  PARTITION BY RANGE (TIME)
  3  (PARTITION T1 VALUES LESS THAN (TO_DATE('201311', 'YYYYMM')),
  4   PARTITION T2 VALUES LESS THAN (TO_DATE('201606', 'YYYYMM')),
  5   PARTITION T3 VALUES LESS THAN (MAXVALUE));

Table created.

Insert the query from the source table into the new table:

LHR@dlhr> alter table T_LHR_20160527_NEW nologging;

Table altered.

LHR@dlhr> alter session enable parallel dml;

Session altered.

LHR@dlhr> insert /*+ APPEND PARALLEL */ into T_LHR_20160527_NEW (ID, TIME) select * from T_LHR_20160527;

87098 rows created.

LHR@dlhr> commit;

Commit complete.

Delete the source table and rename the new table

LHR@dlhr> drop table T_LHR_20160527;

Table dropped.

LHR@dlhr> rename T_LHR_20160527_NEW to T_LHR_20160527;

Table renamed.

Verify the new table data:

LHR@dlhr> select to_char(t.time, 'YYYYMM'), COUNT(1)
  2  from T_LHR_20160527 t
  3  group by to_char(t.time, 'YYYYMM');

TO_CHA  COUNT(1)
------ ---------
201310     85984
201605      1114

LHR@dlhr>

2.2.2.1 INSERT performance improvement

INSERT performance can be improved as follows: ① alter the table to NOLOGGING; ② disable the indexes on the table and rebuild them after the insert completes; ③ enable parallel DML with ALTER SESSION ENABLE PARALLEL DML; ④ insert in APPEND (direct-path) mode.

commit;
alter session enable parallel dml;
alter table T_LHR_20160527_NEW nologging;
insert /*+ APPEND PARALLEL */ into T_LHR_20160527_NEW (ID, TIME)
  select /*+ PARALLEL(t3,4) */ * from T_LHR_20160527 t3;

For parallel DML, you must execute ALTER SESSION ENABLE PARALLEL DML; first. Compare the execution plans:

LHR@dlhr> explain plan for insert /*+ APPEND PARALLEL */ into T_LHR_20160527 (ID, TIME) select /*+ PARALLEL(t3,4) */ * from T3;

Explained.

LHR@dlhr> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 584641640

---------------------------------------------------------------------------------------------------------------------
| Id | Operation               | Name           | Rows  | Bytes | Cost (%CPU)| Time     |    TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------
|  0 | INSERT STATEMENT        |                | 6897K |  144M |   272  (4)| 00:00:04 |       |      |            |
|  1 |  LOAD AS SELECT         | T_LHR_20160527 |       |       |           |          |       |      |            |
|  2 |   PX COORDINATOR        |                |       |       |           |          |       |      |            |
|  3 |    PX SEND QC (RANDOM)  | :TQ10000       | 6897K |  144M |   272  (4)| 00:00:04 | Q1,00 | P->S | QC (RAND)  |
|  4 |     PX BLOCK ITERATOR   |                | 6897K |  144M |   272  (4)| 00:00:04 | Q1,00 | PCWC |            |
|  5 |      TABLE ACCESS FULL  | T3             | 6897K |  144M |   272  (4)| 00:00:04 | Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)
   - automatic DOP: skipped because of IO calibrate statistics are missing

17 rows selected.

LHR@dlhr> commit;

Commit complete.

LHR@dlhr> alter session enable parallel dml;

Session altered.

LHR@dlhr> explain plan for insert /*+ APPEND PARALLEL */ into T_LHR_20160527 (ID, TIME) select /*+ PARALLEL(t3,4) */ * from T3;

Explained.

LHR@dlhr> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 576433284

---------------------------------------------------------------------------------------------------------------------
| Id | Operation               | Name           | Rows  | Bytes | Cost (%CPU)| Time     |    TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------
|  0 | INSERT STATEMENT        |                | 6897K |  144M |   272  (4)| 00:00:04 |       |      |            |
|  1 |  PX COORDINATOR         |                |       |       |           |          |       |      |            |
|  2 |   PX SEND QC (RANDOM)   | :TQ10000       | 6897K |  144M |   272  (4)| 00:00:04 | Q1,00 | P->S | QC (RAND)  |
|  3 |    LOAD AS SELECT       | T_LHR_20160527 |       |       |           |          | Q1,00 | PCWP |            |
|  4 |     PX BLOCK ITERATOR   |                | 6897K |  144M |   272  (4)| 00:00:04 | Q1,00 | PCWC |            |
|  5 |      TABLE ACCESS FULL  | T3             | 6897K |  144M |   272  (4)| 00:00:04 | Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)
   - automatic DOP: skipped because of IO calibrate statistics are missing

17 rows selected.

LHR@dlhr>
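The introduction promised a way to check whether DML parallelism is actually enabled. Besides inspecting the plan (LOAD AS SELECT below the PX SEND step means a parallel load; above the PX COORDINATOR means a serial one), the session status can be queried directly; a sketch using the PDML_STATUS, PDDL_STATUS, and PQ_STATUS columns of V$SESSION:

```sql
-- PDML_STATUS shows ENABLED after ALTER SESSION ENABLE PARALLEL DML
SELECT sid, pdml_status, pddl_status, pq_status
  FROM v$session
 WHERE sid = SYS_CONTEXT('USERENV', 'SID');
```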

2.3 Partition exchange method (swapping partitions)

The characteristics of this method

Advantages: only the definitions of the partition and the table in the data dictionary are modified; no data is modified or copied, so this is the most efficient method. If there is no further requirement on how data is distributed across partitions, the implementation is relatively simple. After performing the RENAME operation, you can check whether any data remains in T_OLD and, if so, insert it directly into T, ensuring that inserts into T are not lost.

Deficiency: there is still a consistency problem. After swapping partitions but before RENAME T_NEW TO T, queries, updates, and deletes may raise errors or fail to see data. If the data must be distributed across multiple partitions, partition SPLIT operations are needed, which increases the complexity of the operation and reduces efficiency.

It is suitable for moving a table containing a large amount of data into one partition of a partitioned table. The operation should be performed during off-peak hours whenever possible.

2.3.1 Single-partition example

An example to illustrate:

Create a normal table and insert test data

LHR@dlhr> CREATE TABLE T (ID NUMBER PRIMARY KEY, TIME DATE);

Table created.

LHR@dlhr> INSERT INTO T SELECT ROWNUM, CREATED FROM DBA_OBJECTS where CREATED < TO_DATE('2013-11-01', 'YYYY-MM-DD');

LHR@dlhr> COMMIT;

Commit complete.

LHR@dlhr> select to_char(t.time, 'YYYYMM'), COUNT(1)
  2  from t
  3  group by to_char(t.time, 'YYYYMM');

TO_CHA  COUNT(1)
------ ---------
201310     85984

Create a partition table

LHR@dlhr> CREATE TABLE T_NEW (ID NUMBER PRIMARY KEY, TIME DATE) PARTITION BY RANGE (TIME)
  2  (PARTITION T1 VALUES LESS THAN (TO_DATE('2013-11-01', 'YYYY-MM-DD')),
  3   PARTITION T2 VALUES LESS THAN (MAXVALUE));

Table created.

Exchange data

LHR@dlhr> ALTER TABLE T_NEW EXCHANGE PARTITION T1 WITH TABLE T;

Table altered.

Change the name of the table

LHR@dlhr> rename t to t_old;

Table renamed.

LHR@dlhr> rename t_new to t;

Table renamed.

Query data

LHR@dlhr> select to_char(t.time, 'YYYYMM'), COUNT(1)
  2  from t
  3  group by to_char(t.time, 'YYYYMM');

TO_CHA  COUNT(1)
------ ---------
201310     85984

2.3.2 Multiple-partition example

The steps for swapping multiple partitions are as follows:

1. Create a partitioned table, assuming it has 2 partitions, P1 and P2.

2. Create table A to hold the data matching the P1 rule.

3. Create table B to hold the data matching the P2 rule.

4. Exchange table A with P1, so the data from table A goes into the P1 partition.

5. Exchange table B with P2, so the data from table B goes into the P2 partition.

2.3.2.1 Example from MOS

This example creates the exchange table with the same structure as the partitions of the partitioned table p_emp.

SQL> CREATE TABLE p_emp
  2  (sal NUMBER(7,2))
  3  PARTITION BY RANGE (sal)
  4  (partition emp_p1 VALUES LESS THAN (2000),
  5   partition emp_p2 VALUES LESS THAN (4000));

Table created.

SQL> SELECT * FROM emp;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO

--

7369 SMITH CLERK 7902 17-DEC-80 800 20

7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30

7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30

7566 JONES MANAGER 7839 02-APR-81 2975 20

7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30

7698 BLAKE MANAGER 7839 01-MAY-81 2850 30

7782 CLARK MANAGER 7839 09-JUN-81 2450 10

7788 SCOTT ANALYST 7566 19-APR-87 3000 20

7839 KING PRESIDENT 17-NOV-81 5000 10

7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30

7876 ADAMS CLERK 7788 23-MAY-87 1100 20

7900 JAMES CLERK 7698 03-DEC-81 950 30

7902 FORD ANALYST 7566 03-DEC-81 3000 20

7934 MILLER CLERK 7782 23-JAN-82 1300 10

14 rows selected.

SQL> CREATE TABLE exchtab1 as SELECT sal FROM emp WHERE sal < 2000;

Table created.

SQL> CREATE TABLE exchtab2 as SELECT sal FROM emp WHERE sal BETWEEN 2000 AND 3999;

Table created.

SQL> alter table p_emp exchange partition emp_p1 with table exchtab1;

Table altered.

SQL> alter table p_emp exchange partition emp_p2 with table exchtab2;

Table altered.
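After the two exchanges, a quick sanity check with partition-extended table names confirms where the rows landed (a sketch against the p_emp table above):

```sql
-- Rows with sal < 2000 should now be in emp_p1,
-- rows with 2000 <= sal < 4000 in emp_p2
SELECT COUNT(*) FROM p_emp PARTITION (emp_p1);
SELECT COUNT(*) FROM p_emp PARTITION (emp_p2);
```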

2.4 Using the online redefinition feature (DBMS_REDEFINITION)

The characteristics of this method:

Advantages: data consistency is guaranteed, and table T can handle DML normally most of the time; the table is locked only at the moment of switching, giving high availability. The method is very flexible and can satisfy a variety of requirements. Moreover, the corresponding grants and constraints can be established before the switch, so no additional administrative operations are needed after it completes.

Deficiency: the implementation is slightly more complex than the previous methods.

It is suitable for all kinds of situations.

The general operation flow of online redefinition is as follows:

(1) Create base table A; if it already exists, no action is needed.

(2) Create the interim partitioned table B structure.

(3) Start the redefinition and copy the data from base table A into interim partitioned table B.

(4) When the redefinition finishes, the two tables have been swapped in the data dictionary. That is, base table A has become the partitioned table, and the interim partitioned table B we created has become a regular table, which can now be dropped.


2.4.1 Knowledge of online redefinition

2.4.1.1 Online redefinition features

This feature is available only in 9.2.0.4 and later versions. Online redefinition of a table supports the following:

(1) modify the storage parameters of the table

(2) transfer the table to another tablespace

(3) add parallel query options.

(4) add or delete partitions

(5) rebuild the table to reduce fragmentation

(6) change a heap table into an index-organized table, or vice versa.

(7) add or delete columns.

2.4.1.2 Redefining a table online

The principle behind online redefinition: materialized views.

To redefine a table online:

1. Select a redefinition method:

There are two redefinition methods, one based on the primary key and the other based on ROWID. The ROWID method cannot be used for index-organized tables, and after redefinition there will be a hidden column named M_ROW$$. The primary key method is used by default.

2. Call the DBMS_REDEFINITION.CAN_REDEF_TABLE() procedure; if the table does not meet the redefinition conditions, an error is raised and the reason is given.

3. Create an empty interim table in the same schema, with the structure you expect after the redefinition, for example a partitioned table, added columns, and so on.

4. Call the DBMS_REDEFINITION.START_REDEF_TABLE () procedure and provide the following parameters: the name of the redefined table, the name of the intermediate table, the mapping rule for the column, and the redefinition method.

If the mapping method is not provided, all columns included in the intermediate table are considered to be used for redefinition of the table. If a mapping method is given, only the columns given in the mapping method are considered. If no redefinition method is given, the primary key method is considered to be used.

5. Establish triggers, indexes, and constraints on the intermediate table and authorize them accordingly. Any integrity constraint that contains intermediate tables should set the state to disabled.

When the redefinition is complete, the triggers, indexes, constraints, and grants created on the interim table replace those on the redefined table. Constraints that were disabled on the interim table become enabled on the redefined table.

6. (optional) if a large number of DML operations are performed directly on the redefined table during the execution of the DBMS_REDEFINITION.START_REDEF_TABLE () procedure and the DBMS_REDEFINITION.FINISH_REDEF_TABLE () procedure, you can choose to execute the SYNC_INTERIM_TABLE () procedure one or more times to reduce the locking time when the FINISH_REDEF_TABLE () procedure is executed in the last step.

7. The DBMS_REDEFINITION.FINISH_REDEF_TABLE () procedure completes the redefinition of the table. During this process, the original table is locked in exclusive mode for a short period of time, depending on the amount of data in the table.

After the FINISH_REDEF_TABLE () procedure, the original table is redefined with the properties, indexes, constraints, authorizations, and triggers of the intermediate table. The constraint of disabled on the intermediate table is in the enabled state on the original table.

8. (optional) Indexes, triggers, and constraints can be renamed. Tables redefined in ROWID mode contain a hidden column named M_ROW$$. It is recommended to mark this hidden column UNUSED or drop it with the following statements:

ALTER TABLE TABLE_NAME SET UNUSED (M_ROW$$);

ALTER TABLE TABLE_NAME DROP UNUSED COLUMNS;
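The steps above can be sketched as a single sequence of the documented DBMS_REDEFINITION calls (the table names T_SRC and T_SRC_INT are placeholders, not from the original article):

```sql
-- Assumed: T_SRC exists with a primary key; T_SRC_INT is the empty
-- interim table already created with the desired partitioned layout.
BEGIN
  -- step 2: verify the table can be redefined (primary-key method)
  DBMS_REDEFINITION.CAN_REDEF_TABLE(USER, 'T_SRC',
      DBMS_REDEFINITION.CONS_USE_PK);
  -- step 4: start the redefinition (NULL mapping = identical columns)
  DBMS_REDEFINITION.START_REDEF_TABLE(USER, 'T_SRC', 'T_SRC_INT',
      NULL, DBMS_REDEFINITION.CONS_USE_PK);
  -- step 6 (optional): resync changes made meanwhile to shorten the
  -- final lock window
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(USER, 'T_SRC', 'T_SRC_INT');
  -- step 7: swap the tables in the dictionary (brief exclusive lock)
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(USER, 'T_SRC', 'T_SRC_INT');
END;
/
```

Step 5 (creating triggers, indexes, and constraints on the interim table) would be performed between the START and FINISH calls.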

2.4.1.3 Restrictions on using online redefinition

Some of the restrictions on online redefinition:

(1) There must be enough space to hold two copies of the table.

(2) Primary key columns cannot be modified.

(3) Tables must have primary keys.

(4) Redefinition must be done within the same schema.

(5) New columns added cannot be made NOT NULL until after the redefinition operation.

(6) Tables cannot contain LONGs, BFILEs or User Defined Types.

(7) Clustered tables cannot be redefined.

(8) Tables in the SYS or SYSTEM schema cannot be redefined.

(9) Tables with materialized view logs or materialized views defined on them cannot be redefined.

(10) Horizontal sub setting of data cannot be performed during the redefinition.

Under Oracle 10.2.0.4 and 11.1.0.7, online redefinition may encounter the following bug:

Bug 7007594 - ORA-600 [12261]

http://blog.csdn.net/tianlesoftware/archive/2011/03/02/6218681.aspx

- If you use the primary-key-based approach, the table after redefinition must have the same primary key as the original table

- If you use the ROWID-based approach, the table cannot be index-organized

- If there is a materialized view or materialized view log on the original table, it cannot be redefined online

- Materialized view container tables and advanced queue tables cannot be redefined online

- The overflow table of an index-organized table cannot be redefined online

- Tables with BFILE or LONG columns cannot be redefined online

- Tables in a cluster cannot be redefined online

- Tables owned by SYS or SYSTEM cannot be redefined online

- Temporary tables cannot be redefined online

- Horizontal subsetting of data is not supported

- Column mappings may only use expressions with deterministic results; subqueries, for example, are not allowed

- If the interim table has new columns, they cannot have NOT NULL constraints

- There can be no referential integrity between the original table and the interim table

- Online redefinition cannot use NOLOGGING

2.4.2 My example

Create the regular table T_LHR_20160527_UNPART and its index:

LHR@dlhr> CREATE TABLE T_LHR_20160527_UNPART (ID NUMBER PRIMARY KEY, TIME DATE);

Table created.

LHR@dlhr> INSERT INTO T_LHR_20160527_UNPART SELECT ROWNUM, CREATED FROM DBA_OBJECTS;

87112 rows created.

LHR@dlhr> commit;

Commit complete.

LHR@dlhr> CREATE INDEX create_date_indx ON T_LHR_20160527_UNPART (TIME);

Index created.

LHR@dlhr> exec dbms_stats.gather_table_stats(user, 'T_LHR_20160527_UNPART', cascade => true);

PL/SQL procedure successfully completed.

LHR@dlhr> select to_char(t.time, 'YYYYMM'), COUNT(1)
  2  from T_LHR_20160527_UNPART t
  3  group by to_char(t.time, 'YYYYMM');

TO_CHA  COUNT(1)
------ ---------
201310     85984
201605      1128

Create the interim partitioned table T_LHR_20160527_PART. Notice that the TIME column has been renamed to CREATED_DATE here.

LHR@dlhr> CREATE TABLE T_LHR_20160527_PART (ID NUMBER PRIMARY KEY, CREATED_DATE DATE)
  2  PARTITION BY RANGE (created_date)
  3  (PARTITION T1 VALUES LESS THAN (TO_DATE('201311', 'YYYYMM')),
  4   PARTITION T2 VALUES LESS THAN (TO_DATE('201606', 'YYYYMM')),
  5   PARTITION T3 VALUES LESS THAN (MAXVALUE));

Table created.

Then execute DBMS_REDEFINITION.CAN_REDEF_TABLE(USER, 'T_LHR_20160527_UNPART', DBMS_REDEFINITION.CONS_USE_PK); to check whether online redefinition is possible. If an error is returned, the table cannot be redefined:

LHR@dlhr> EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE(USER, 'T', DBMS_REDEFINITION.CONS_USE_PK);
BEGIN DBMS_REDEFINITION.CAN_REDEF_TABLE(USER, 'T', DBMS_REDEFINITION.CONS_USE_PK); END;

*
ERROR at line 1:
ORA-12089: cannot online redefine table "LHR"."T" with no primary key
ORA-06512: at "SYS.DBMS_REDEFINITION", line 143
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1635
ORA-06512: at line 1

LHR@dlhr> EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE(USER, 'T_LHR_20160527_UNPART', DBMS_REDEFINITION.CONS_USE_PK);

PL/SQL procedure successfully completed.

No error is reported, which means the table we want to transform can be redefined online. Start the redefinition; this may take a while, depending on the size of the table:

LHR@dlhr> EXEC DBMS_REDEFINITION.START_REDEF_TABLE(USER, 'T_LHR_20160527_UNPART', 'T_LHR_20160527_PART', NULL, DBMS_REDEFINITION.CONS_USE_PK);
BEGIN DBMS_REDEFINITION.START_REDEF_TABLE(USER, 'T_LHR_20160527_UNPART', 'T_LHR_20160527_PART', NULL, DBMS_REDEFINITION.CONS_USE_PK); END;

*
ERROR at line 1:
ORA-42016: shape of interim table does not match specified column mapping
ORA-06512: at "SYS.DBMS_REDEFINITION", line 56
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1498
ORA-06512: at line 1

LHR@dlhr> EXEC DBMS_REDEFINITION.START_REDEF_TABLE(USER, 'T_LHR_20160527_UNPART', 'T_LHR_20160527_PART', 'ID ID, TIME created_date', DBMS_REDEFINITION.CONS_USE_PK);

PL/SQL procedure successfully completed.

LHR@dlhr >

LHR@dlhr> select count(1) from T_LHR_20160527_UNPART;

  COUNT(1)
----------
     87112

LHR@dlhr> select count(1) from T_LHR_20160527_PART;

  COUNT(1)
----------
     87112

LHR@dlhr> EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE(USER, 'T_LHR_20160527_UNPART', 'T_LHR_20160527_PART');

After this step, the data has been synchronized to the interim partitioned table. Note that if the column names of the partitioned table and the original table are the same, the column mapping can be omitted; if they differ, the mapping must be specified. In addition, EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE(USER, 'T_LHR_20160527_UNPART', 'T_LHR_20160527_PART'); is an optional step that synchronizes the interim table. If a large number of DML operations are performed on the redefined table between the DBMS_REDEFINITION.START_REDEF_TABLE() and DBMS_REDEFINITION.FINISH_REDEF_TABLE() calls, you can execute SYNC_INTERIM_TABLE() one or more times to reduce the lock time of the final FINISH_REDEF_TABLE() call.

Next we create an index on the new table; online redefinition only moves the data, and indexes need to be created separately.

LHR@dlhr> CREATE INDEX create_date_indx2 ON T_LHR_20160527_PART (created_date);

Index created.

LHR@dlhr> exec dbms_stats.gather_table_stats(user, 'T_LHR_20160527_PART', cascade => true);

PL/SQL procedure successfully completed.

LHR@dlhr >

The next step is to end the redefinition:

LHR@dlhr> EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE(user, 'T_LHR_20160527_UNPART', 'T_LHR_20160527_PART');

PL/SQL procedure successfully completed.

LHR@dlhr>

LHR@dlhr> select D.TABLE_NAME, partitioned from user_tables D where table_name like '%T_LHR_20160527%';

TABLE_NAME            PAR
--------------------- ---
T_LHR_20160527_PART   NO
T_LHR_20160527_UNPART YES

LHR@dlhr> SELECT D.TABLE_NAME, partition_name
  2  FROM user_tab_partitions D
  3  WHERE table_name = 'T_LHR_20160527_UNPART';

TABLE_NAME            PARTITION_NAME
--------------------- --------------
T_LHR_20160527_UNPART T1
T_LHR_20160527_UNPART T2
T_LHR_20160527_UNPART T3

LHR@dlhr>

What finishing the redefinition with DBMS_REDEFINITION.FINISH_REDEF_TABLE means:

The base table T_LHR_20160527_UNPART and the interim partitioned table T_LHR_20160527_PART are exchanged. At this point, the interim table T_LHR_20160527_PART has become a regular table, and our base table T_LHR_20160527_UNPART has become a partitioned table.

During the redefinition, the base table T_LHR_20160527_UNPART can handle DML operations normally; the table is only locked briefly while the two tables are switched.

As a final step, delete the temporary table and rename the index to verify the data:

LHR@dlhr> drop table T_LHR_20160527_PART;

Table dropped.

LHR@dlhr> alter index create_date_indx2 rename to create_date_indx;

Index altered.

LHR@dlhr> select to_char(t.created_date, 'YYYYMM'), COUNT(1)
  2  from T_LHR_20160527_UNPART t
  3  group by to_char(t.created_date, 'YYYYMM');

TO_CHA  COUNT(1)
------ ---------
201310     85984
201605      1128


Thank you for reading! This concludes the article on how to convert an ordinary table into a partitioned table in the database. I hope the content has been helpful; if you found the article useful, please share it with others.
