The move operation for partitioned tables is a topic worth delving into, and it becomes trickier when the partitioned table contains LOB columns.
For ordinary tables, the move operation is taken for granted: the command Oracle provides is direct and quick. It is not a high-availability operation, of course, but it is still a very convenient and fast way to reorganize data.
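For reference, a move of an ordinary table is a single statement; the table, index and tablespace names below are only placeholders:
alter table charge_hist move tablespace large_data;
-- the move leaves the table's indexes in an UNUSABLE state, so rebuild them afterwards
alter index charge_hist_idx rebuild;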
In general, if you try to move a partitioned table directly, you will get the following error:
SQL> alter table charge move tablespace large_data;
alter table charge move tablespace large_data
*
ERROR at line 1:
ORA-14511: cannot perform operation on a partitioned object
At this point, you can use the following SQL statement to generate dynamic SQL for partition-level move operations.
For example, for the three large tables below, this SQL generates hundreds of statements that complete the partition-level moves.
select 'alter table '||table_name||' move partition '||partition_name||' tablespace large_data;' from user_tab_partitions where table_name in ('CHARGE','MEMO','CHARGE_REL');
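One convenient way to run the generated statements, assuming SQL*Plus, is to spool them into a script and execute it; the spool file name here is arbitrary:
set pagesize 0 linesize 200 feedback off
spool move_partitions.sql
select 'alter table '||table_name||' move partition '||partition_name||' tablespace large_data;' from user_tab_partitions where table_name in ('CHARGE','MEMO','CHARGE_REL');
spool off
@move_partitions.sql
-- the moves leave the corresponding local index partitions UNUSABLE, so rebuild them afterwards with alter index ... rebuild partition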
If the table contains a LOB column, you may encounter the following problem:
alter table memo move partition xxxx tablespace large_data
*
ERROR at line 1:
ORA-01658: unable to create INITIAL extent for segment in tablespace DATAS01
When you encounter this problem there are generally a couple of causes. One is the quota: check whether the user has enough quota on the target tablespace.
The other is the tablespace itself: check whether it has enough free space.
If space is tight and some partitions hold no data, you can also enable deferred segment creation (deferred_segment_creation), as sketched below.
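A quick sketch of both checks, plus enabling deferred segment creation (available from 11gR2); LARGE_DATA is just the tablespace name used in this example:
-- quota of the current user on the target tablespace
select tablespace_name, bytes/1024/1024 used_mb, max_bytes/1024/1024 quota_mb from user_ts_quotas where tablespace_name = 'LARGE_DATA';
-- free space remaining in the target tablespace
select tablespace_name, round(sum(bytes)/1024/1024) free_mb from dba_free_space where tablespace_name = 'LARGE_DATA' group by tablespace_name;
-- defer segment allocation for newly created empty segments
alter system set deferred_segment_creation = true;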
Expdp/impdp
Data Pump is a good choice here: you can conveniently dump the table structure wherever needed and then remap the tablespace directly with remap_tablespace at import time, something that exp/imp cannot do.
The remap feature of impdp is recommended; a minimal sketch follows.
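A minimal sketch of that flow; the directory object, credentials, dump file names and tablespace names are all assumptions:
expdp app_owner/app_pwd directory=DUMP_DIR dumpfile=charge.dmp logfile=charge_exp.log tables=CHARGE,MEMO,CHARGE_REL
impdp app_owner/app_pwd directory=DUMP_DIR dumpfile=charge.dmp logfile=charge_imp.log remap_tablespace=DATAS01:LARGE_DATA table_exists_action=replace
Adding content=metadata_only to the export is an option when only the structure needs to be carried over.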
Perl/shell
Of course, you can also use exp + imp together with shell/perl scripts that parse the dump files.
The details of parsing dump files are discussed in the following blog post:
http://blog.itpub.net/23718752/viewspace-1160404/
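As a rough illustration of the idea only (the layout of exp dump files varies by version, so this is just a starting point, not the exact approach from that post):
# pull the readable DDL out of an exp dump and repoint the tablespace
strings memo.dmp | grep "CREATE TABLE" > memo_ddl.sql
sed -i 's/"DATAS01"/"LARGE_DATA"/g' memo_ddl.sql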
Dbms_metadata.get_ddl
If the source schema can be accessed at any time, the corresponding creation statements can be generated directly with dbms_metadata and the tablespace remapped on that basis.
The process looks simple, but there are quite a few details to consider: with so many partitions the generated DDL easily exceeds the maximum character length, so the output has to be handled by yourself, as sketched below.
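A sketch of that approach in SQL*Plus; the schema and table names are placeholders. set long is needed because get_ddl returns a CLOB, and set_transform_param can strip the storage attributes before the tablespace clause is edited:
set long 1000000 pagesize 0 linesize 200
exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'STORAGE', false);
select dbms_metadata.get_ddl('TABLE', 'MEMO', 'APP_OWNER') from dual;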
Exchange partition
This approach carries some risk, but in some cases remapping a tablespace by way of exchange partition is also an option:
use an ordinary table in the target tablespace to exchange with the partition, as in the sketch below.
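A sketch of one round trip for a single partition; the table and partition names are placeholders, and indexes, constraints and LOB storage all need the same care in practice:
-- empty ordinary table with the same column layout
create table memo_stage as select * from memo where 1 = 0;
-- swap the partition's data into the ordinary table
alter table memo exchange partition p_201406 with table memo_stage;
-- relocate the ordinary table, which now holds the data
alter table memo_stage move tablespace large_data;
-- swap it back, so the partition segment now lives in large_data
alter table memo exchange partition p_201406 with table memo_stage;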
None of the methods above are high-availability operations, so the downtime has to be planned and the operation window kept as short as possible; after all, in environments where partitioned tables are used the data volumes are large, and that has to be factored in as well. Changing the tablespace is the goal, but reorganizing the data involves exporting and importing it, and although the basic idea is simple, there are many details that deserve extra attention.
From: http://blog.itpub.net/23718752/viewspace-1266689/