SLTechnology News&Howtos > Database (shulou.com), updated 2025-04-05
Archiving historical production data is a routine part of database operation and maintenance. High-volume transactional tables (logs, user login history, software download records, user attribute change history, and so on) are usually designed as range partition tables, or interval partition tables in 11g. Once data falls outside the retention window the business requires, it is essentially static: the application no longer accesses it, but it may still need to be queried manually for special requests. Such data is archived from the production database into a history database and kept compressed; once it falls outside even the special-request window, it is exported, compressed, and backed up to tape.
In Oracle there are many ways to archive table data, such as exp/imp, expdp/impdp, and transportable tablespaces, all of which may come up in daily maintenance. Their usage will not be covered here. Today's topic is archiving production history table data with a stored procedure. Its advantages, briefly:
1. Simple and hard to get wrong.
2. Against the source database (here, the production database) it runs only a SELECT query, with little performance impact.
3. Data does not have to land on disk between the source and target databases, similar to migrating data with dblink + impdp; this saves both the export space (tables of hundreds of GB) and the export time.
4. Archiving progress and the number of archived records can be monitored.
5. If the transfer crosses data centers, monitor the network bandwidth it consumes.
Operation procedure:
1. Under the production database maintenance user's schema, create a view over the partition of the table to be archived:
create view log_table_p201209
as
select * from user01.log_table partition (P201209);
Note: why a view? Because a partition-extended query such as select * from table partition (partition_name) cannot be issued across a dblink, while a plain query against the view can.
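To make the restriction concrete, here is a sketch of the contrast (assuming the dblink XXDB.LOCALDOMAIN created in step 3 already exists; names are from the examples above):

```sql
-- NOT allowed: a partition-extended table name cannot be combined with a
-- database link, so this raises an error when run from the history database
select count(*)
  from user01.log_table@XXDB.LOCALDOMAIN partition (P201209);

-- works: the view hides the partition clause on the production side
select count(*)
  from zhanghui.log_table_p201209@XXDB.LOCALDOMAIN;
```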
2. Under the user that holds the archived data in the history database (all history-database operations run under this user), create a data archive status table and a sequence:
-- Create table
create table data_archive_status
(
  id         NUMBER,
  threadno   NUMBER,
  table_name VARCHAR2(60),
  syncnum    NUMBER,
  state      NUMBER,
  starttime  DATE,
  synctime   DATE,
  remark     VARCHAR2(2000)
);
-- Add comments to the columns
comment on column data_archive_status.state
  is '0: start, 1: cursor opened and parsed, 2: fetching data, 3: one table synchronized, 4: all tables completed, negative numbers: Oracle error code';
-- Create sequence
create sequence seq_id
  minvalue 1
  maxvalue 9999999999999
  start with 1
  increment by 1
  cache 20;
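With the status table in place, the monitoring promised in advantage 4 is a simple query from any other session; one possible form:

```sql
-- progress per archived table; elapsed_min is derived from the two DATE columns
select id, table_name, state, syncnum,
       round((synctime - starttime) * 24 * 60, 1) as elapsed_min,
       remark
  from data_archive_status
 order by id;
```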
3. In the history database, create a dblink that connects to the production database with a read-only account. For example:
-- Create database link
create database link XXDB.LOCALDOMAIN
  connect to readonly
  identified by ""
  using '(DESCRIPTION=
           (LOAD_BALANCE=no)
           (ADDRESS_LIST=
             (ADDRESS=
               (PROTOCOL=TCP)
               (HOST=172.16.XX.XX)
               (PORT=1521)
             )
             (ADDRESS=
               (PROTOCOL=TCP)
               (HOST=172.16.XX.XX)
               (PORT=1521)
             )
           )
           (CONNECT_DATA=
             (FAILOVER_MODE=
               (TYPE=select)
               (METHOD=basic)
               (RETRIES=180)
               (DELAY=5)
             )
             (SERVER=dedicated)
             (SERVICE_NAME=XX_service)
           )
         )';
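Before building on the link, a quick sanity check is worthwhile; a minimal probe (using the names from the steps above) might be:

```sql
-- should return one row fetched from the production database
select * from dual@XXDB.LOCALDOMAIN;

-- confirm the view from step 1 is visible through the link
select 1 from zhanghui.log_table_p201209@XXDB.LOCALDOMAIN where rownum = 1;
```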
4. In the history database, create a table with the same structure as the production table; it is recommended to give it a name that identifies it as archived data:
create table log_table_p201209 (...);
5. Create a stored procedure for data archiving:
create procedure p_log_table_p201209 as
  -- index-by table matching the target table's rowtype
  type u_type is table of log_table_p201209%rowtype index by pls_integer;
  v_list u_type;
  -- array holding the names of the views to be synchronized
  type varchar_arrary is table of varchar2(60) index by pls_integer;
  v_remoteview_list varchar_arrary;
  -- ref cursor for the remote query
  type cur_ref is ref cursor;
  cur_data cur_ref;
  -- local variables; v_counter accumulates sql%rowcount
  v_counter         number := 0;
  v_rowid           rowid;
  v_sqlcode         varchar2(300) := null;
  v_querystr        varchar2(1000) := null;
  v_parse_elapsed_s date := null;
  v_parse_elapsed_e date := null;
  v_fetch_elapsed_s date := null;
  v_fetch_elapsed_e date := null;
begin
  -- initialize the array (the views created in step 1)
  v_remoteview_list(1) := 'zhanghui.log_table_p201209';
  -- synchronize each partition view in turn
  for k in 1 .. v_remoteview_list.count loop
    -- add a synchronization task record
    insert into data_archive_status
    values
      (seq_id.nextval,
       k,
       v_remoteview_list(k),
       0,
       0,
       sysdate,
       sysdate,
       null)
    returning rowid into v_rowid;
    commit;
    v_querystr := 'select /*+ rowid(t) */ * from ' || v_remoteview_list(k) ||
                  '@XXDB.LOCALDOMAIN t';
    update data_archive_status t
       set t.synctime = sysdate, t.state = 1
     where rowid = v_rowid;
    commit;
    -- record how long it takes to open and parse the cursor
    v_parse_elapsed_s := sysdate;
    open cur_data for v_querystr;
    v_parse_elapsed_e := sysdate;
    update data_archive_status
       set synctime = sysdate,
           state    = 2,
           remark   = remark || '[' || v_remoteview_list(k) ||
                      ': parse_elapsed=' ||
                      (v_parse_elapsed_e - v_parse_elapsed_s) || ' sec,'
     where rowid = v_rowid;
    commit;
    v_counter := 0;
    v_fetch_elapsed_s := sysdate;
    -- fetch from the open cursor in a loop
    loop
      -- use bulk binding to process 10000 records at a time
      fetch cur_data bulk collect
        into v_list limit 10000;
      forall i in 1 .. v_list.count
        insert into log_table_p201209
        values v_list (i);
      -- record the number of records synchronized so far
      v_counter := v_counter + sql%rowcount;
      update data_archive_status t
         set t.syncnum = v_counter, t.synctime = sysdate
       where rowid = v_rowid;
      commit;
      exit when cur_data%notfound;
    end loop;
    v_fetch_elapsed_e := sysdate;
    -- update the status table, recording the completion time of the current partition in remark
    update data_archive_status
       set state    = 3,
           synctime = sysdate,
           remark   = remark || 'fetch_elapsed=' ||
                      round((v_fetch_elapsed_e - v_fetch_elapsed_s) * 24 * 60,
                            4) || ' min,syncnum=' || v_counter ||
                      ',endtime=' || to_char(sysdate, 'yyyymmddhh24miss') || ']'
     where rowid = v_rowid;
    commit;
    close cur_data;
    -- mark this table as done
    update data_archive_status t set t.state = 4 where rowid = v_rowid;
    commit;
  end loop;
exception
  when others then
    v_sqlcode := sqlcode;
    update data_archive_status
       set synctime = sysdate, state = v_sqlcode
     where rowid = v_rowid;
    commit;
    raise;
end;
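A run then reduces to one call, with progress visible from a second session via the data_archive_status table:

```sql
-- session 1: run the archive (each partition view is processed in turn)
begin
  p_log_table_p201209;
end;
/

-- session 2: watch the state column move through 1 -> 2 -> 3 -> 4
select table_name, state, syncnum, synctime
  from data_archive_status
 order by id desc;
```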
6. Create a stored procedure for compressing objects. Since a move operation needs nearly double the storage space while it runs, evaluate the space requirement before compressing.
create procedure p_compress_object(vObject_name    varchar2,              -- object name
                                   vPartition_name varchar2 default null, -- partition name
                                   vParallel       int default 0,         -- degree of parallelism
                                   vPctfree        int default 0,         -- pctfree; no further DML is expected, so 0
                                   vTablespace     varchar2 default null, -- tablespace
                                   vOwner          varchar2 default user, -- object owner
                                   vType           number                 -- type: 0 table, 1 index, 2 table partition, 3 index partition
                                   ) authid current_user is
  vSql        varchar2(4000);
  vSqlerrm    varchar2(256);
  v_sqlstring varchar2(4000);
begin
  v_sqlstring := 'alter session set db_file_multiblock_read_count=128';
  execute immediate v_sqlstring;
  if vType = 0 then
    begin
      vSql := 'alter table ' || vOwner || '.' || vObject_name || ' move' ||
              case when vTablespace is null then null else ' tablespace ' || vTablespace end ||
              ' pctfree ' || vPctfree || ' compress nologging' ||
              case when vParallel in (0, 1) then null else ' parallel ' || vParallel end;
      execute immediate vSql;
    end;
  elsif vType = 1 then
    begin
      vSql := 'alter index ' || vOwner || '.' || vObject_name ||
              ' rebuild' ||
              case when vTablespace is null then null else ' tablespace ' || vTablespace end ||
              ' pctfree ' || vPctfree || ' compress nologging' ||
              case when vParallel in (0, 1) then null else ' parallel ' || vParallel end;
      execute immediate vSql;
    end;
  elsif vType = 2 then
    begin
      vSql := 'alter table ' || vOwner || '.' || vObject_name ||
              ' move partition ' || vPartition_name ||
              case when vTablespace is null then null else ' tablespace ' || vTablespace end ||
              ' pctfree ' || vPctfree || ' compress nologging' ||
              case when vParallel in (0, 1) then null else ' parallel ' || vParallel end;
      execute immediate vSql;
    end;
  elsif vType = 3 then
    begin
      vSql := 'alter index ' || vOwner || '.' || vObject_name ||
              ' rebuild partition ' || vPartition_name ||
              case when vTablespace is null then null else ' tablespace ' || vTablespace end ||
              ' pctfree ' || vPctfree || ' compress nologging' ||
              case when vParallel in (0, 1) then null else ' parallel ' || vParallel end;
      execute immediate vSql;
    end;
  end if;
exception
  when others then
    vSqlerrm := sqlerrm;
    dbms_output.put_line(vSqlerrm || '|' || vSql);
end;
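A sample call, under hypothetical names (archive table LOG_TABLE_P201209, target tablespace ARCH_DATA, parallel degree 4):

```sql
begin
  -- vType => 0: move and compress a whole table
  p_compress_object(vObject_name => 'LOG_TABLE_P201209',
                    vParallel    => 4,
                    vTablespace  => 'ARCH_DATA',
                    vType        => 0);
end;
/
```

Note that moving a table leaves its indexes in an UNUSABLE state; rebuild them afterwards, for example by calling the same procedure with vType => 1.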
7. After confirming the history database tablespace has enough space, call the data archiving procedure p_log_table_p201209; once it completes, compress the data by calling p_compress_object (....).
8. Confirm that the data is correct, then drop the corresponding view and the business table's partition in the production database to free the object space (note: check whether the partitioned table's indexes are local; otherwise the drop will affect global indexes, which must be updated or rebuilt).
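A sketch of that cleanup in the production database, under the same example names:

```sql
-- remove the helper view from step 1
drop view log_table_p201209;

-- drop the archived partition; UPDATE GLOBAL INDEXES keeps any global
-- indexes usable (omit it if all indexes on the table are local)
alter table user01.log_table drop partition P201209 update global indexes;
```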
That's all.