Testing Hybrid Columnar Compression (HCC) in OLAP and OLTP scenarios

2025-01-18 Update From: SLTechnology News&Howtos


Author: Li Min, delivery engineer at Yunhe Enmo.

Oracle first introduced EHCC (Exadata Hybrid Columnar Compression) in 11g R2. The leading "E" marked an early restriction: the feature could be enabled only on Exadata machines. Among the many excellent Exadata features, EHCC is less transparent than smart scan or cell offloading: although it can deliver dramatic space savings, it still requires the DBA to do some extra work, and even to evaluate several scenarios before deciding whether to adopt it.

EHCC (later renamed HCC once it was supported on platforms beyond Exadata: Oracle's ZFS Storage Appliance, Pillar Axiom, SuperCluster, and ODA) essentially addresses an I/O problem. Put another way, it balances CPU against I/O, trading compute for space. In most scenarios today that trade is well worth it: compression ratios of several times, ten times, or even dozens of times are common. If the data involved is cold data, the feature looks perfect.

But that is not always the case. This article picks one or two of the many aspects of HCC to give a DBA a first, intuitive impression of the feature.

DB version of the test environment

First, prepare the environment.

Create a tablespace; here several SMALLFILE data files are used.

CREATE SMALLFILE TABLESPACE EHCCTBS
DATAFILE
  '/ehccfs/ORA19C/ora19pdb1/EHCCTBS_001.DBF' SIZE 10485760 AUTOEXTEND ON NEXT 1048576 MAXSIZE 10737418240,
  '/ehccfs/ORA19C/ora19pdb1/EHCCTBS_002.DBF' SIZE 10485760 AUTOEXTEND ON NEXT 1048576 MAXSIZE 10737418240,
  '/ehccfs/ORA19C/ora19pdb1/EHCCTBS_003.DBF' SIZE 10485760 AUTOEXTEND ON NEXT 1048576 MAXSIZE 10737418240,
  '/ehccfs/ORA19C/ora19pdb1/EHCCTBS_004.DBF' SIZE 10485760 AUTOEXTEND ON NEXT 1048576 MAXSIZE 10737418240
BLOCKSIZE 8192
FORCE LOGGING
DEFAULT COLUMN STORE NO COMPRESS NO INMEMORY
ONLINE
SEGMENT SPACE MANAGEMENT AUTO
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

The tablespace is created in NO COMPRESS mode: rather than making compression a tablespace attribute, the compression attributes are specified per table with CREATE TABLE.

[ora19c@dm01db06 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Mar 24 10:07:02 2019
Version 19.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.2.0.0.0

SQL> create user hr identified by welcome1 default tablespace ehcctbs;

User created.

SQL> grant dba to hr;

Grant succeeded.

SQL> create table hr.big_table_no_ehcc as select * from dba_objects;

Table created.

To show the differences in compression ratio, I created a 360 MB uncompressed table to compare against the eight compression modes.

[ora19c@dm01db06 ~]$ sqlplus hr/welcome1@ora19pdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Mar 24 10:07:28 2019
Version 19.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Last Successful login time: Sun Mar 24 2019 09:36:33 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.2.0.0.0

SQL>

SQL> insert into BIG_TABLE_NO_EHCC select * from BIG_TABLE_NO_EHCC;

72360 rows created.

SQL> /

144720 rows created.

SQL> /

289440 rows created.

SQL> /

578880 rows created.

SQL> insert into BIG_TABLE_NO_EHCC select * from BIG_TABLE_NO_EHCC;

1157760 rows created.

SQL> commit;

SQL> select count(*) from BIG_TABLE_NO_EHCC;

  COUNT(*)
----------
   2315520

SQL> col OWNER for A15
SQL> col SEGMENT_NAME for A40
SQL> select OWNER, SEGMENT_NAME, BYTES/1048576 SIZE_MB from dba_segments where SEGMENT_NAME like ('%EHCC%');

OWNER           SEGMENT_NAME                             SIZE_MB
--------------- ---------------------------------------- -------
HR              BIG_TABLE_NO_EHCC                            360

Eight tables with different HCC compression modes are then created from this base table. I had timing turned on, but treat the elapsed times as reference only: the redo logs are 200 MB, so these CTAS statements trigger log archiving, which affects I/O and therefore the timings.

SQL > create table EHCC_QUERY_HIGH compress for query high tablespace ehcctbs as select * from big_table_no_ehcc

Table created.

Elapsed: 00:00:10.61

SQL > create table EHCC_QUERY_LOW compress for query low tablespace ehcctbs as select * from big_table_no_ehcc

Table created.

Elapsed: 00:00:21.33

SQL > create table EHCC_ARCHIVE_HIGH compress for archive high tablespace ehcctbs as select * from big_table_no_ehcc

Table created.

Elapsed: 00:00:38.75

SQL > create table EHCC_ARCHIVE_LOW compress for archive low tablespace ehcctbs as select * from big_table_no_ehcc

Table created.

Elapsed: 00:00:11.07

SQL > create table EHCC_QUERY_HIGH_LOCKING compress for query high row level locking tablespace ehcctbs as select * from big_table_no_ehcc

Table created.

Elapsed: 00:00:09.46

SQL > create table EHCC_QUERY_LOW_LOCKING compress for query low row level locking tablespace ehcctbs as select * from big_table_no_ehcc

Table created.

Elapsed: 00:00:12.35

SQL > create table EHCC_ARCHIVE_HIGH_LOCKING compress for archive high row level locking tablespace ehcctbs as select * from big_table_no_ehcc

Table created.

Elapsed: 00:00:33.90

SQL > create table EHCC_ARCHIVE_LOW_LOCKING compress for archive low row level locking tablespace ehcctbs as select * from big_table_no_ehcc

Table created.

Elapsed: 00:00:17.50

Then look at the object sizes under these different compression modes. Note that LOCKING in the table names refers to ROW LEVEL LOCKING.
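The sizes were compared with the same dba_segments query used earlier for the base table; a minimal sketch (the figures discussed below come from the original run):

SELECT owner, segment_name, bytes/1048576 AS size_mb
FROM   dba_segments
WHERE  segment_name LIKE '%EHCC%'
ORDER  BY size_mb;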

Except for the base table, each pair of adjacent objects differs only in whether ROW LEVEL LOCKING is used.

HR.BIG_TABLE_NO_EHCC is a 28-column table built from the PDB's dba_objects. To be honest, this table is not well suited to an HCC test, yet it still achieves an impressive compression ratio in ARCHIVE HIGH mode: from 360 MB down to about 15 MB, roughly 24x.

So, which is faster: a forced full scan of the uncompressed base table, or a forced full scan of the most heavily compressed ARCHIVE HIGH table?

Many runs were made, and the results were unexpected: roughly 16 seconds versus 1 second, and that is without Exadata. Add Exadata cell offloading on top and you can imagine how HCC performs for OLAP.
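The comparison can be reproduced with forced full scans along these lines (a sketch; the FULL hint and table names follow the objects created above, and timings will vary by environment):

SET TIMING ON
SELECT /*+ FULL(t) */ COUNT(*) FROM hr.BIG_TABLE_NO_EHCC t;   -- uncompressed base table
SELECT /*+ FULL(t) */ COUNT(*) FROM hr.EHCC_ARCHIVE_HIGH t;   -- ARCHIVE HIGH copy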

Oracle's integration and optimization across its own products is impressive.
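To see how HCC lays data out on disk, the relevant blocks can be dumped to the session trace file. A minimal sketch of the commands, assuming file 24 and the block numbers that appear in the dump headers below:

ALTER SYSTEM DUMP DATAFILE 24 BLOCK 14336;   -- first-level bitmap block
ALTER SYSTEM DUMP DATAFILE 24 BLOCK 14337;   -- second-level bitmap block
ALTER SYSTEM DUMP DATAFILE 24 BLOCK 14339;   -- first data block (CU head)
ALTER SYSTEM DUMP DATAFILE 24 BLOCK 14340;   -- second block of the CU
-- The dump is written to the session trace file in the diagnostic trace directory.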

Block 14336 is the first-level bitmap block:

* 2019-03-24T17:14:51.182266+08:00 (ORA19PDB1 (3))

Start dump data blocks tsn: 6 file#:24 minblk 14336 maxblk 14336

..

..

..


At the end of this dump there is a pointer to the second-level bitmap block:

Second Level Bitmap block DBAs
DBA 1: 0x06003801

0x06003801 is the second-level bitmap block, which happens to be the block immediately after 14336, i.e. block 14337. The following is the information from that second-level bitmap block:

Dump of Second Level Bitmap Block

Number: 9 nfree: 1 ffree: 8 pdba: 0x06003802

Inc #: 0 Objd: 72974 Flag: 3

Opcode:0

Xid:

L1 Ranges:

0x06003800 Free: 1 Inst: 1

0x06003840 Free: 1 Inst: 1

0x05803400 Free: 1 Inst: 1

0x06003880 Free: 1 Inst: 1

0x05803480 Free: 1 Inst: 1

0x06003900 Free: 1 Inst: 1

0x05803500 Free: 1 Inst: 1

0x06003980 Free: 1 Inst: 1

0x05803580 Free: 7 Inst: 1

End dump data blocks tsn: 6 file#: 24 minblk 14337 maxblk 14337

There is not much to see here; the table appears to be too small to make use of these L1 ranges.

Block 14338 does not contain much of interest either, so let's look at block 14339.

Block_row_dump:

Tab 0, row 0, @ 0x30

Tl: 8016 fb:-H-F--N lb: 0x0 cc: 1

Nrid: 0x06003804.0

Col 0: [8004]

Compression level: 04 (Archive High)

Length of CU row: 8004

Kdzhrh:-PC- CBLK: 0 Start Slot: 00

NUMP: 22

PNUM: 00 POFF: 7774 PRID: 0x06003804.0

PNUM: 01 POFF: 15790 PRID: 0x06003805.0

PNUM: 02 POFF: 23806 PRID: 0x06003806.0

PNUM: 03 POFF: 31822 PRID: 0x06003807.0

PNUM: 04 POFF: 39838 PRID: 0x06003808.0

PNUM: 05 POFF: 47854 PRID: 0x06003809.0

PNUM: 06 POFF: 55870 PRID: 0x0600380a.0

PNUM: 07 POFF: 63886 PRID: 0x0600380b.0

PNUM: 08 POFF: 71902 PRID: 0x0600380c.0

PNUM: 09 POFF: 79918 PRID: 0x0600380d.0

PNUM: 10 POFF: 87934 PRID: 0x0600380e.0

PNUM: 11 POFF: 95950 PRID: 0x0600380f.0

PNUM: 12 POFF: 103966 PRID: 0x06003810.0

PNUM: 13 POFF: 111982 PRID: 0x06003811.0

PNUM: 14 POFF: 119998 PRID: 0x06003812.0

PNUM: 15 POFF: 128014 PRID: 0x06003813.0

PNUM: 16 POFF: 136030 PRID: 0x06003814.0

PNUM: 17 POFF: 144046 PRID: 0x06003815.0

PNUM: 18 POFF: 152062 PRID: 0x06003816.0

PNUM: 19 POFF: 160078 PRID: 0x06003817.0

PNUM: 20 POFF: 168094 PRID: 0x06003818.0

PNUM: 21 POFF: 176110 PRID: 0x06003819.0

*-

CU header:

CU version: 0 CU magic number: 0x4b445a30

CU checksum: 0xbdbe82d3

CU total length: 180160

CU flags: NC-U-CRD-OP

Ncols: 26

Nrows: 32759

Algo: 0

CU decomp length: 175939 len/value length: 4332401

Row pieces per row: 1

Num deleted rows: 0

START_CU:

There is a lot of information in this part; I will go through it according to my own understanding.

Tl: 8016 fb: --H-F--N: the H here means CU head. fb is the flag byte: F means first, P means previous, and N means next. I did not dump the last row piece; logically, the fb on the last block, 0x06003819, should show L, which stands for last. (In fact I dumped it afterwards, and it shows L and P.)

Nrid: 0x06003804.0: nrid means next row piece id, and its value here is 0x06003804.0. Decoded, that is rdba 0x6003804 (100677636), i.e. file 24, block 14340.
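The file and block numbers can be derived from the rdba with DBMS_UTILITY, for example (a sketch using the decimal value 100677636 quoted above):

SELECT dbms_utility.data_block_address_file(100677636)  AS file_no,
       dbms_utility.data_block_address_block(100677636) AS block_no
FROM   dual;
-- expected: FILE_NO = 24, BLOCK_NO = 14340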

By that logic, block 14340 should show flags similar to P and N, but not the H flag.

Compression level: 04 (Archive High) is the HCC compression level of the CU.

NUMP: 22 indicates how many row pieces there are in this CU, here 22. Judging from the addresses, each row piece sits in its own block, so I read this as 22 blocks in this CU.

CU checksum: 0xbdbe82d3 is the checksum of this CU.

Nrows: 32759 means there are 32,759 rows in this CU, a very large number.

Next, dump the second block of the CU, block 14340.

Start dump data blocks tsn: 6 file#:24 minblk 14340 maxblk 14340

..

..

..

Block_row_dump:

Tab 0, row 0, @ 0x1f

Tl: 8033 fb:-PN lb: 0x0 cc: 1

Nrid: 0x06003805.0

Col 0: [8021]

Compression level: 04 (Archive High)

Length of CU row: 8021

Kdzhrh:-START_CU:

As expected from the analysis above, this block shows the P and N flags but not H.

Next, the three DML operations insert, update, and delete are used to examine HCC behavior: possible conversion out of compression, ROWID changes, and lock granularity.

For the DML scenarios, two kinds of tables are compared: an uncompressed table and compressed tables. All rows of each compressed table fit within a single block of a single CU.

Three tables are created: a regular table, an ARCHIVE HIGH table, and an ARCHIVE HIGH ROW LEVEL LOCKING table. They all report the same size, not because they occupy the same amount of space within their extents, but because each table's initial allocation is one extent of 8 blocks of 8192 bytes each; this is simply how ASSM allocates space.

SQL> create table dml_test_no_ehcc as select * from dba_objects where rownum < 100;

Table created.

SQL> update dml_test_no_ehcc set OBJECT_ID=rownum;

99 rows updated.

SQL> commit;

Commit complete.

SQL > create table DML_TEST_ARCHIVE_HIGH compress for archive high tablespace ehcctbs as select * from dml_test_no_ehcc

Table created.

SQL > create table DML_TEST_ARCHIVE_HIGH_LOCKING compress for archive high row level locking tablespace ehcctbs as select * from dml_test_no_ehcc

Table created.

SQL> col OWNER for A15
SQL> col SEGMENT_NAME for A40
SQL> select s.OWNER, s.SEGMENT_NAME, s.BYTES/1024 SIZE_MB, t.COMPRESS_FOR
       from dba_segments s, dba_tables t
      where s.SEGMENT_NAME like ('DML_TEST_%')
        and s.owner = t.owner
        and s.segment_name = t.table_name
      order by 2;

OWNER           SEGMENT_NAME                             SIZE_MB COMPRESS_FOR
--------------- ---------------------------------------- ------- ------------------------------
SYS             DML_TEST_ARCHIVE_HIGH                         64 ARCHIVE HIGH
SYS             DML_TEST_ARCHIVE_HIGH_LOCKING                 64 ARCHIVE HIGH ROW LEVEL LOCKING
SYS             DML_TEST_NO_EHCC                              64

Next, we need to verify that all rows of the two HCC tables are in a single CU.

Excluding the first- and second-level bitmap blocks, dump the fourth block of each table: block 19203 of DML_TEST_ARCHIVE_HIGH in file 24 and block 19211 of DML_TEST_ARCHIVE_HIGH_LOCKING in file 24, then check from the dump information whether all rows are in one CU.
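A sketch of how the blocks can be located and dumped (file 24 and blocks 19203/19211 are the values quoted above):

SELECT segment_name, file_id, block_id, blocks
FROM   dba_extents
WHERE  segment_name IN ('DML_TEST_ARCHIVE_HIGH', 'DML_TEST_ARCHIVE_HIGH_LOCKING');

ALTER SYSTEM DUMP DATAFILE 24 BLOCK 19203;
ALTER SYSTEM DUMP DATAFILE 24 BLOCK 19211;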

The information for block 19203 is as follows. You can see that fb is marked as H (head) together with F and L, meaning this CU is both the first and the last, and nrows in this CU is 99. All of this is consistent with the environment we constructed.

Data_block_dump,data header at 0x9b95a07c

=

Tsiz: 0x1f80

Hsiz: 0x1c

Pbl: 0x9b95a07c

76543210

Flag=-0-

Ntab=1

Nrow=1

Frre=-1

Fsbo=0x1c

Fseo=0x1830

Avsp=0x1814

Tosp=0x1814

R0_9ir2=0x0

Mec_kdbh9ir2=0x0

76543210

Shcf_kdbh9ir2=-

76543210

Flag_9ir2=--R- Archive compression: Y

Fcls_9ir2 [0] = {}

0x16:pti [0] nrow=1 offs=0

0x1a:pri [0] offs=0x1830

Block_row_dump:

Tab 0, row 0, @ 0x1830

Tl: 1872 fb: --H-FL-- lb: 0x0 cc: 1

Col 0: [1866]

Compression level: 04 (Archive High)

Length of CU row: 1866

Kdzhrh:-Start Slot: 00

*-

CU header:

CU version: 0 CU magic number: 0x4b445a30

CU checksum: 0x24a713c2

CU total length: 1854

CU flags: NC-U-CRD-OP

Ncols: 26

Nrows: 99

Algo: 0

CU decomp length: 1715 len/value length: 10614

Row pieces per row: 1

Num deleted rows: 0

START_CU:

Similarly, block 19211 of the other table shows the same structure, plus the lock-bit information that comes with ROW LEVEL LOCKING.

Data_block_dump,data header at 0x7fda7a65e07c

=

Tsiz: 0x1f80

Hsiz: 0x1c

Pbl: 0x7fda7a65e07c

76543210

Flag=-0-

Ntab=1

Nrow=1

Frre=-1

Fsbo=0x1c

Fseo=0x17c9

Avsp=0x17ad

Tosp=0x17ad

R0_9ir2=0x0

Mec_kdbh9ir2=0x0

76543210

Shcf_kdbh9ir2=-

76543210

Flag_9ir2=--R- Archive compression: Y

Fcls_9ir2 [0] = {}

0x16:pti [0] nrow=1 offs=0

0x1a:pri [0] offs=0x17c9

Block_row_dump:

Tab 0, row 0, @ 0x17c9

Tl: 1975 fb: --H-FL-- lb: 0x0 cc: 1

Col 0: [1969]

Compression level: 04 (Archive High)

Length of CU row: 1969

Kdzhrh:-L Start Slot: 00

Num lock bits: 8

Locked rows:

*-

CU header:

CU version: 0 CU magic number: 0x4b445a30

CU checksum: 0x24a713c2

CU total length: 1854

CU flags: NC-U-CRD-OP

Ncols: 26

Nrows: 99

Algo: 0

CU decomp length: 1715 len/value length: 10614

Row pieces per row: 1

Num deleted rows: 0

START_CU:

For the first OLTP scenario, we tentatively choose INSERT. Here only the HCC tables are tested, comparing append-mode (direct-path) inserts and conventional inserts on the table without ROW LEVEL LOCKING and on the table with it.

According to the documentation, when inserting into a table that is already HCC-compressed, newly inserted rows are not compressed if the insert is conventional; they stay compressed only when loaded via a direct path, such as an APPEND insert. Besides verifying this, we also need to check whether concurrent inserts from other sessions are affected, and if they block, whether the ROW LEVEL LOCKING variant of the HCC table behaves differently.
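The two insert styles look like this (a sketch; per the documentation, only the direct-path load keeps the new rows HCC-compressed):

INSERT INTO DML_TEST_ARCHIVE_HIGH SELECT * FROM DML_TEST_NO_EHCC;                 -- conventional insert
INSERT /*+ APPEND */ INTO DML_TEST_ARCHIVE_HIGH SELECT * FROM DML_TEST_NO_EHCC;   -- direct-path insert
COMMIT;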

SQL> select distinct(sid) from v$mystat;

       SID
----------
       147

SQL> insert into DML_TEST_ARCHIVE_HIGH select * from DML_TEST_NO_EHCC;

99 rows created.

SQL>

SQL> select distinct(sid) from v$mystat;

       SID
----------
       269

SQL> insert into DML_TEST_ARCHIVE_HIGH select * from DML_TEST_NO_EHCC;

99 rows created.

SQL>

In this test no CU lock was observed, which shows that concurrent conventional inserts into an HCC table without the ROW LEVEL LOCKING attribute do not block each other.

However, this test surfaced a subtlety: for inserts that do not use APPEND, rows that fit within the current compressed CU are stored compressed, while rows that cannot be placed in the current CU end up uncompressed in the next allocation, beyond the current CU.

Once again, this is unexpected.
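The distribution below can be obtained with a query along these lines (a sketch; the type codes map to the DBMS_COMPRESSION constants listed later in this article, and the owner is whatever schema owns the table):

SELECT comp_type, COUNT(*)
FROM  (SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'DML_TEST_ARCHIVE_HIGH', rowid) AS comp_type
       FROM   DML_TEST_ARCHIVE_HIGH)
GROUP BY comp_type;
-- 1 = COMP_NOCOMPRESS, 16 = COMP_FOR_ARCHIVE_HIGH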

  COUNT(*) COMPRESSION_TYPE
---------- ---------------------
        10 COMP_NOCOMPRESS
    323126 COMP_FOR_ARCHIVE_HIGH

SQL>

For the second OLTP scenario, we test DELETE. There is not much to design here, so for now we simply run deletes from two sessions against the HCC tables.

I tested this twice. On the uncompressed table, deleting the object_id=1 row in one session and the object_id=2 row in another, without committing, the two sessions do not block each other.

However, on the HCC-compressed table without row level locking, if session 1 deletes the object_id=1 row and session 2 deletes the object_id=2 row, session 2's delete is blocked by session 1.
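A sketch of the two-session experiment (neither session commits):

-- Session 1:
DELETE FROM DML_TEST_ARCHIVE_HIGH WHERE object_id = 1;

-- Session 2: hangs on the plain HCC table, blocked by session 1
DELETE FROM DML_TEST_ARCHIVE_HIGH WHERE object_id = 2;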

This indirectly confirms that in an ordinary HCC table the minimum unit of locking is the CU, rather than the individual row as in a normal table. On reflection, that makes sense.

This is why the ROW LEVEL LOCKING variants were prepared throughout this article. The feature was introduced for HCC in 12c: Oracle presumably found that locking an entire CU has too broad an impact, so to be more OLTP-friendly it added row level locking to HCC, even though this costs a little compression, as seen earlier.

Next, delete the object_id=1 and object_id=2 records from different sessions on the ROW LEVEL LOCKING table created earlier.

You can see that deletes within the same CU of an HCC table with the ROW LEVEL LOCKING attribute do not interfere with each other.

For the third OLTP scenario, we test UPDATE. From the DELETE test it is already clear that on an HCC table without row level locking, an update will be blocked by another session's update in the same CU, while with row level locking, operations on different rows within the same CU are unaffected. What about operations on the same row within the same CU? :)

For UPDATE, the key thing to examine is whether the rowid changes.

Regenerate the environment:

SQL > drop table DML_TEST_ARCHIVE_HIGH purge

Table dropped.

SQL > drop table DML_TEST_ARCHIVE_HIGH_LOCKING purge

Table dropped.

SQL > drop table DML_TEST_NO_EHCC purge

Table dropped.

This time the table is created smaller.

SQL> create table dml_test_no_ehcc as select * from dba_objects where rownum < 10;

Table created.

SQL> create table DML_TEST_ARCHIVE_HIGH compress for archive high tablespace ehcctbs as select * from dml_test_no_ehcc;

Table created.

SQL > create table DML_TEST_ARCHIVE_HIGH_LOCKING compress for archive high row level locking tablespace ehcctbs as select * from dml_test_no_ehcc

Table created.

SQL >

SQL> col OWNER for A15
SQL> col SEGMENT_NAME for A40
SQL> select s.OWNER, s.SEGMENT_NAME, s.BYTES/1024 SIZE_MB, t.COMPRESS_FOR
       from dba_segments s, dba_tables t
      where s.SEGMENT_NAME like ('DML_TEST_%')
        and s.owner = t.owner
        and s.segment_name = t.table_name
      order by 2;

OWNER           SEGMENT_NAME                             SIZE_MB COMPRESS_FOR
--------------- ---------------------------------------- ------- ------------------------------
HR              DML_TEST_ARCHIVE_HIGH                         64 ARCHIVE HIGH
HR              DML_TEST_ARCHIVE_HIGH_LOCKING                 64 ARCHIVE HIGH ROW LEVEL LOCKING
HR              DML_TEST_NO_EHCC                              64

SQL >

Check the rowid and block number distribution of the HCC table.

SQL> select rowid, object_name, dbms_rowid.rowid_block_number(rowid) from DML_TEST_ARCHIVE_HIGH;

ROWID              OBJECT_NAME     DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------ --------------- ------------------------------------
AAAR0vAAWAAAEWLAAA I_FILE#_BLOCK#                                 17803
AAAR0vAAWAAAEWLAAB I_OBJ3                                         17803
AAAR0vAAWAAAEWLAAC I_TS1                                          17803
AAAR0vAAWAAAEWLAAD I_CON1                                         17803
AAAR0vAAWAAAEWLAAE IND$                                           17803
AAAR0vAAWAAAEWLAAF CDEF$                                          17803
AAAR0vAAWAAAEWLAAG C_TS#                                          17803
AAAR0vAAWAAAEWLAAH I_CCOL2                                        17803
AAAR0vAAWAAAEWLAAI I_PROXY_DATA$                                  17803

9 rows selected.

Here, you can use DBMS_COMPRESSION.GET_COMPRESSION_TYPE to confirm how a row of data is compressed:

SQL> select DBMS_COMPRESSION.GET_COMPRESSION_TYPE('HR','DML_TEST_ARCHIVE_HIGH','AAAR0vAAWAAAEWLAAA') from dual;

DBMS_COMPRESSION.GET_COMPRESSION_TYPE('HR','DML_TEST_ARCHIVE_HIGH','AAAR0VAAWAAAEWLAAA')
-----------------------------------------------------------------------------------------
                                                                                       16

For reference, the DBMS_COMPRESSION constants are:

COMP_NOCOMPRESS       CONSTANT NUMBER := 1;
COMP_FOR_OLTP         CONSTANT NUMBER := 2;
COMP_FOR_QUERY_HIGH   CONSTANT NUMBER := 4;
COMP_FOR_QUERY_LOW    CONSTANT NUMBER := 8;
COMP_FOR_ARCHIVE_HIGH CONSTANT NUMBER := 16;
COMP_FOR_ARCHIVE_LOW  CONSTANT NUMBER := 32;
COMP_RATIO_MINROWS    CONSTANT NUMBER := 1000000;
COMP_RATIO_ALLROWS    CONSTANT NUMBER := -1;

As you can see, 16 corresponds to COMP_FOR_ARCHIVE_HIGH, the compression level specified when the table was created.

After that, update the table.

SQL> update DML_TEST_ARCHIVE_HIGH set OBJECT_NAME=OBJECT_NAME||'MINOR';

9 rows updated.

Check the rowid and block number of this table again:

SQL > select rowid,object_name,dbms_rowid.rowid_block_number (rowid) from DML_TEST_ARCHIVE_HIGH

ROWID OBJECT_NAME DBMS_ROWID.ROWID_BLOCK_NUMBER (ROWID)

-

AAAR0vAAWAAAEWOAAA I_FILE#_BLOCK#MINOR 17806

AAAR0vAAWAAAEWOAAB I_OBJ3MINOR 17806

AAAR0vAAWAAAEWOAAC I_TS1MINOR 17806

AAAR0vAAWAAAEWOAAD I_CON1MINOR 17806

AAAR0vAAWAAAEWOAAE IND$MINOR 17806

AAAR0vAAWAAAEWOAAF CDEF$MINOR 17806

AAAR0vAAWAAAEWOAAG C_TS#MINOR 17806

AAAR0vAAWAAAEWOAAH I_CCOL2MINOR 17806

AAAR0vAAWAAAEWOAAI I_PROXY_DATA$MINOR 17806

9 rows selected.

As you can see, both the rowid and the block number have changed, which shows that updating rows inside a CU involves extracting them and writing the updated rows into another block.

So is the updated data still compressed? Obviously, it's not.

SQL> select DBMS_COMPRESSION.GET_COMPRESSION_TYPE('HR','DML_TEST_ARCHIVE_HIGH',rowid) from DML_TEST_ARCHIVE_HIGH;

DBMS_COMPRESSION.GET_COMPRESSION_TYPE('HR','DML_TEST_ARCHIVE_HIGH',ROWID)
-------------------------------------------------------------------------
                                                                        1
                                                                        1
                                                                        1
                                                                        1
                                                                        1
                                                                        1
                                                                        1
                                                                        1
                                                                        1

9 rows selected.

The compression type is now 1, COMP_NOCOMPRESS, i.e. not compressed. So, in addition to conventional INSERT, UPDATE also causes decompression and uncompressed storage: to perform an update, the database converts the columnar-compressed data back into row format, and it does not recompress it when the operation completes.

To recompress data that has been decompressed in this way, you have to explicitly move the table.
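For example, something along these lines rebuilds the segment and recompresses the existing rows (a sketch, not from the original article):

ALTER TABLE DML_TEST_ARCHIVE_HIGH MOVE COMPRESS FOR ARCHIVE HIGH;
-- MOVE changes rowids, so any indexes on the table must be rebuilt afterwards.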

We just saw that updates change the rowids of compressed data. Can that be avoided? Yes, it can.

There is a hidden parameter for this:

Then we repeat the update operation above:

Part three: the OLAP and OLTP tests above are all single-scenario tests, so how does HCC compare with a non-HCC setup under a more realistic workload? Swingbench comes to mind.

Swingbench needs no introduction. One problem, though: swingbench generates its objects with its own scripts, so you cannot control the object-creation parameters unless you edit those scripts one by one.

A simpler approach is to create the test tablespaces and put the HCC attributes on the tablespaces themselves. Stress tests are run only for three cases: QUERY HIGH without row level locking, QUERY HIGH with row level locking, and no HCC. Since the customer environment is not single-threaded, I used four sessions for the test. The benchmark data set is 0.5 GB, and those three scenarios are tested.

First, three tablespaces are created: one with the HCC attribute, one with HCC plus the ROW LEVEL LOCKING attribute, and one without HCC.

SQL> CREATE SMALLFILE TABLESPACE SOE_EHCC_TBS
  2  DATAFILE
  3  '/ehccfs/ORA19C/ora19pdb1/SOE_EHCC_TBS_001.DBF' SIZE 10485760 AUTOEXTEND ON NEXT 1048576 MAXSIZE 10737418240
  4  BLOCKSIZE 8192
  5  FORCE LOGGING
  6  DEFAULT COLUMN STORE COMPRESS FOR QUERY HIGH NO INMEMORY
  7  ONLINE
  8  SEGMENT SPACE MANAGEMENT AUTO
  9  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Tablespace created.

SQL> CREATE SMALLFILE TABLESPACE SOE_NO_EHCC_TBS
  2  DATAFILE
  3  '/ehccfs/ORA19C/ora19pdb1/SOE_NO_EHCC_TBS_001.DBF' SIZE 10485760 AUTOEXTEND ON NEXT 1048576 MAXSIZE 10737418240
  4  BLOCKSIZE 8192
  5  FORCE LOGGING
  6  DEFAULT COLUMN STORE NO COMPRESS NO INMEMORY
  7  ONLINE
  8  SEGMENT SPACE MANAGEMENT AUTO
  9  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Tablespace created.

SQL> CREATE SMALLFILE TABLESPACE SOE_EHCC_ROW_LOCKING_TBS
  2  DATAFILE
  3  '/ehccfs/ORA19C/ora19pdb1/SOE_EHCC_ROW_LOCKING_TBS_001.DBF' SIZE 10485760 AUTOEXTEND ON NEXT 1048576 MAXSIZE 10737418240
  4  BLOCKSIZE 8192
  5  FORCE LOGGING
  6  DEFAULT COLUMN STORE COMPRESS FOR QUERY HIGH ROW LEVEL LOCKING NO INMEMORY
  7  ONLINE
  8  SEGMENT SPACE MANAGEMENT AUTO
  9  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Tablespace created.

Finally, you can see that the generated data is as follows:

After the data generation completed, the swingbench run itself was started, and this is where the testing stopped. The default swingbench scenario is dominated by DML, and my earlier tests show that over time most of the data ends up uncompressed because of that DML, so a DML-heavy benchmark does not tell us much. The only test that would really be meaningful here is an OLAP-style workload; the swingbench configuration changes for that are omitted.
