Preliminary test of pg_repack bloat processing

2025-03-25 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

I. Software installation

1. Software requirements:

postgresql-9.5.2.tar.gz

pg_repack-1.3.4.zip

2. Install pg_repack

[root@localhost pg_repack-1.3.4]# export PATH=/opt/pgsql/9.5.2/bin:$PATH
[root@localhost pg_repack-1.3.4]# export LD_LIBRARY_PATH=/opt/pgsql/9.5.2/lib
[root@localhost pg_repack-1.3.4]# export MANPATH=/opt/pgsql/9.5.2/share/man:$MANPATH

[root@localhost pg_repack-1.3.4]# make
make[1]: Entering directory `/home/soft/pg_repack-1.3.4/bin'
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -I/opt/pgsql/9.5.2/include -DREPACK_VERSION=1.3.4 -I. -I./ -I/opt/pgsql/9.5.2/include/server -I/opt/pgsql/9.5.2/include/internal -D_GNU_SOURCE -c -o pg_repack.o pg_repack.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -I/opt/pgsql/9.5.2/include -DREPACK_VERSION=1.3.4 -I. -I./ -I/opt/pgsql/9.5.2/include/server -I/opt/pgsql/9.5.2/include/internal -D_GNU_SOURCE -c -o pgut/pgut.o pgut/pgut.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -I/opt/pgsql/9.5.2/include -DREPACK_VERSION=1.3.4 -I. -I./ -I/opt/pgsql/9.5.2/include/server -I/opt/pgsql/9.5.2/include/internal -D_GNU_SOURCE -c -o pgut/pgut-fe.o pgut/pgut-fe.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 pg_repack.o pgut/pgut.o pgut/pgut-fe.o -L/opt/pgsql/9.5.2/lib -lpq -L/opt/pgsql/9.5.2/lib -Wl,--as-needed -Wl,-rpath,'/opt/pgsql/9.5.2/lib',--enable-new-dtags -lpgcommon -lpgport -lz -lreadline -lrt -lcrypt -ldl -lm -o pg_repack
make[1]: Leaving directory `/home/soft/pg_repack-1.3.4/bin'
make[1]: Entering directory `/home/soft/pg_repack-1.3.4/lib'
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -fpic -DREPACK_VERSION=1.3.4 -I. -I./ -I/opt/pgsql/9.5.2/include/server -I/opt/pgsql/9.5.2/include/internal -D_GNU_SOURCE -c -o repack.o repack.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -fpic -DREPACK_VERSION=1.3.4 -I. -I./ -I/opt/pgsql/9.5.2/include/server -I/opt/pgsql/9.5.2/include/internal -D_GNU_SOURCE -c -o pgut/pgut-be.o pgut/pgut-be.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -fpic -DREPACK_VERSION=1.3.4 -I. -I./ -I/opt/pgsql/9.5.2/include/server -I/opt/pgsql/9.5.2/include/internal -D_GNU_SOURCE -c -o pgut/pgut-spi.o pgut/pgut-spi.c
(echo '{ global:'; gawk '/^[^#]/ {printf "%s;\n", $1}' exports.txt; echo ' local: *; };') > exports.list
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -fpic -shared -Wl,--version-script=exports.list -o pg_repack.so repack.o pgut/pgut-be.o pgut/pgut-spi.o -L/opt/pgsql/9.5.2/lib -Wl,--as-needed -Wl,-rpath,'/opt/pgsql/9.5.2/lib',--enable-new-dtags
sed 's,REPACK_VERSION,1.3.4,g' pg_repack.sql.in > pg_repack--1.3.4.sql
sed 's,REPACK_VERSION,1.3.4,g' pg_repack.control.in > pg_repack.control
make[1]: Leaving directory `/home/soft/pg_repack-1.3.4/lib'
make[1]: Entering directory `/home/soft/pg_repack-1.3.4/regress'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/soft/pg_repack-1.3.4/regress'

[root@localhost pg_repack-1.3.4]# make install
make[1]: Entering directory `/home/soft/pg_repack-1.3.4/bin'
/bin/mkdir -p '/opt/pgsql/9.5.2/bin'
/usr/bin/install -c pg_repack '/opt/pgsql/9.5.2/bin'
make[1]: Leaving directory `/home/soft/pg_repack-1.3.4/bin'
make[1]: Entering directory `/home/soft/pg_repack-1.3.4/lib'
/bin/mkdir -p '/opt/pgsql/9.5.2/lib'
/bin/mkdir -p '/opt/pgsql/9.5.2/share/extension'
/bin/mkdir -p '/opt/pgsql/9.5.2/share/extension'
/usr/bin/install -c -m 755 pg_repack.so '/opt/pgsql/9.5.2/lib/pg_repack.so'
/usr/bin/install -c -m 644 .//pg_repack.control '/opt/pgsql/9.5.2/share/extension/'
/usr/bin/install -c -m 644 pg_repack--1.3.4.sql pg_repack.control '/opt/pgsql/9.5.2/share/extension/'
make[1]: Leaving directory `/home/soft/pg_repack-1.3.4/lib'
make[1]: Entering directory `/home/soft/pg_repack-1.3.4/regress'
make[1]: Nothing to be done for `install'.
make[1]: Leaving directory `/home/soft/pg_repack-1.3.4/regress'
[root@localhost pg_repack-1.3.4]#

3. Create the initial environment

[postgres@localhost ~]$ createdb bloatdb
[postgres@localhost ~]$ psql -d bloatdb -c "create extension pgstattuple;"
CREATE EXTENSION
[postgres@localhost ~]$ psql -d bloatdb -c "CREATE EXTENSION pg_repack;"
CREATE EXTENSION
[postgres@localhost ~]$ psql bloatdb
psql (9.5.2)
Type "help" for help.

bloatdb=# \dx
                                 List of installed extensions
    Name     | Version |   Schema   |                         Description
-------------+---------+------------+--------------------------------------------------------------
 pg_repack   | 1.3.4   | public     | Reorganize tables in PostgreSQL databases with minimal locks
 pgstattuple | 1.3     | public     | show tuple-level statistics
 plpgsql     | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)

II. Static bloat handling test (no active transactions)

1. Repack a specified index of table tbl

1) Prepare the environment

bloatdb=# create table tbl (id int primary key, first varchar(20), second varchar(20));
CREATE TABLE
bloatdb=# create index idx_tbl_first on tbl(first);
CREATE INDEX
bloatdb=# create index idx_tbl_second on tbl(second);
CREATE INDEX
bloatdb=# SELECT count(*) FROM tbl;
 count
-------
     0
(1 row)

bloatdb=# SELECT pg_size_pretty(pg_total_relation_size('tbl'));
 pg_size_pretty
----------------
 24 kB
(1 row)

bloatdb=# INSERT INTO tbl VALUES (generate_series(1,10000), 'first'||(random()*(10^3))::integer, 'second'||(random()*(10^3))::integer);
INSERT 0 10000
bloatdb=# SELECT count(*) FROM tbl;
 count
-------
 10000
(1 row)

bloatdb=# SELECT pg_size_pretty(pg_total_relation_size('tbl'));
 pg_size_pretty
----------------
 1584 kB
(1 row)

bloatdb=#

Update a column

bloatdb=# UPDATE tbl SET first='updated-001';
UPDATE 10000
bloatdb=# SELECT count(*) FROM tbl;
 count
-------
 10000
(1 row)

bloatdb=# SELECT pg_size_pretty(pg_total_relation_size('tbl'));
 pg_size_pretty
----------------
 3376 kB
(1 row)

bloatdb=#
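The jump from 1584 kB to 3376 kB is expected: under PostgreSQL's MVCC, an UPDATE does not overwrite a row in place but writes a new row version and leaves the old one behind as a dead tuple until VACUUM reclaims it, so updating every row once roughly doubles the relation. A back-of-envelope model (hypothetical helper names; real sizes also depend on page headers, fillfactor, and per-index behavior):

```python
def dead_fraction(n_live: int, n_updated: int) -> float:
    """Fraction of heap tuples that are dead row versions after updating
    n_updated rows once, with no VACUUM in between."""
    return n_updated / (n_live + n_updated)

def size_after_full_update(size_kb):
    """Updating every row once leaves the old version of each row behind,
    so the relation roughly doubles."""
    return 2 * size_kb

print(dead_fraction(10000, 10000))    # 0.5: about half the heap is dead tuples
print(size_after_full_update(1584))   # 3168, close to the observed 3376 kB
```

The leftover gap between the 2x estimate and the observed 3376 kB is index growth and page-level overhead, which this toy model ignores.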

2) Check the bloat ratio

Create the bloat statistics table:

[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" --create_stats_table

Collect bloat statistics:

[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.idx_tbl_second... (52.69%) 417 kB wasted
2. public.idx_tbl_first... (52.64%) 413 kB wasted
3. public.tbl_pkey... (57.79%) 388 kB wasted
[postgres@localhost ~]$
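Each pg_bloat_check line reports wasted space both as a percentage and in absolute terms, so the object's total size can be backed out from the two numbers. A small sketch (hypothetical helper name, not part of pg_bloat_check):

```python
def total_size_kb(wasted_kb: float, wasted_pct: float) -> float:
    """Back out an object's total size from a pg_bloat_check report line:
    wasted_kb is wasted_pct percent of the object's total size."""
    return wasted_kb / (wasted_pct / 100.0)

# public.idx_tbl_second: (52.69%) 417 kB wasted -> the index totals ~791 kB
print(round(total_size_kb(417, 52.69)))  # 791
# public.tbl_pkey: (57.79%) 388 kB wasted -> the index totals ~671 kB
print(round(total_size_kb(388, 57.79)))  # 671
```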

3) Repack the bloat

Repack a specific index in the database:

[postgres@localhost ~]$ pg_repack -d bloatdb --index idx_tbl_first
INFO: repacking index "public"."idx_tbl_first"
[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.idx_tbl_second... (52.69%) 417 kB wasted
2. public.tbl_pkey... (57.79%) 388 kB wasted
3. public.idx_tbl_first... (0.93%) 3121 bytes wasted
[postgres@localhost ~]$

2. Repack all indexes of table tbl

1) Prepare the environment

bloatdb=# update tbl set second='chris';
UPDATE 10000
bloatdb=# SELECT count(*) FROM tbl;
 count
-------
 10000
(1 row)

bloatdb=# SELECT pg_size_pretty(pg_total_relation_size('tbl'));
 pg_size_pretty
----------------
 3600 kB
(1 row)

bloatdb=# update tbl set first='chris';
UPDATE 10000
bloatdb=# SELECT count(*) FROM tbl;
 count
-------
 10000
(1 row)

bloatdb=# SELECT pg_size_pretty(pg_total_relation_size('tbl'));
 pg_size_pretty
----------------
 4176 kB
(1 row)

bloatdb=#

2) Check the bloat

[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.idx_tbl_second... (59.94%) 820 kB wasted
2. public.idx_tbl_first... (40.94%) 409 kB wasted
3. public.tbl_pkey... (28.73%) 193 kB wasted
[postgres@localhost ~]$

3) Repack the bloat

[postgres@localhost ~]$ pg_repack -d bloatdb --table tbl --only-indexes
INFO: repacking indexes of "tbl"
INFO: repacking index "public"."idx_tbl_first"
INFO: repacking index "public"."idx_tbl_second"
INFO: repacking index "public"."tbl_pkey"
[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.idx_tbl_first... (1.23%) 3028 bytes wasted
2. public.idx_tbl_second... (1.23%) 3028 bytes wasted
3. public.tbl_pkey... (1.23%) 3028 bytes wasted
[postgres@localhost ~]$

3. Repack both data and index bloat of tbl

1) Check the index bloat

[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.idx_tbl_first... (57.87%) 49 MB wasted
2. public.idx_tbl_second... (39.29%) 34 MB wasted
3. public.tbl_pkey... (51.22%) 26 MB wasted

2) Online VACUUM FULL-equivalent repack of table tbl (data and indexes) in database bloatdb

[postgres@localhost ~]$ pg_repack --no-order --table tbl -d bloatdb
INFO: repacking table "tbl"
[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.tbl_pkey... (0.0%) 0 bytes wasted
2. public.idx_tbl_second... (0.0%) 0 bytes wasted
[postgres@localhost ~]$

III. Dynamic bloat handling (with active transactions)

1. Whole-table bloat

1) Initial state

-- clear table data
bloatdb=# select * from tbl;
 id | first | second
----+-------+--------
(0 rows)

bloatdb=# INSERT INTO tbl VALUES (generate_series(1,100000), 'first'||(random()*(10^3))::integer, 'second'||(random()*(10^3))::integer);
INSERT 0 100000
bloatdb=# UPDATE tbl SET first='updated-001';
UPDATE 100000
bloatdb=#

-- check bloat
[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.idx_tbl_second... (67.26%) 17 MB wasted
2. public.idx_tbl_first... (67.46%) 17 MB wasted
3. public.tbl_pkey... (63.91%) 9832 kB wasted
[postgres@localhost ~]$

2) Insert a large amount of data and repack at the same time

Set statement_timeout=0, and adjust maintenance_work_mem and wal_keep_segments as appropriate (streaming replication, SSD).

Start the insert first, then run the repack with the -T (--wait-timeout) parameter set to 3600.

-- session 1: insert data
bloatdb=# INSERT INTO tbl VALUES (generate_series(100001,3000000), 'first'||(random()*(10^3))::integer, 'second'||(random()*(10^3))::integer);
(cursor blinking; the statement is still running)

-- session 2: repack during the insert
$ pg_repack -d bloatdb --no-order --table tbl --wait-timeout=3600
INFO: repacking table "tbl"
(cursor blinking; still running)

## The -j (--jobs) option

If several tables are specified with --table, they are processed one after another. With -j, pg_repack starts multiple background workers to build the temporary table's indexes in parallel: roughly one worker per index, up to min(jobs, number of indexes). When -j is smaller than the number of indexes, a worker that finishes its index is automatically assigned one of the remaining indexes; when -j is larger than the number of indexes, workers for all indexes are dispatched at once.
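That dispatch rule can be sketched as a toy scheduler: min(jobs, number of indexes) workers start immediately, and whenever one finishes it takes the next index from the queue. This is a hypothetical simulation for illustration, not pg_repack's actual C implementation:

```python
from collections import deque

def dispatch(index_names, jobs):
    """Simulate pg_repack-style parallel index builds: start
    min(jobs, n_indexes) workers, then hand each finished worker
    the next index still waiting in the queue."""
    queue = deque(index_names)
    log = []
    running = deque()  # (worker, index); assume builds finish in start order
    for w in range(min(jobs, len(index_names))):
        idx = queue.popleft()
        running.append((w, idx))
        log.append(f"Initial worker {w} to build index: {idx}")
    while running:
        w, idx = running.popleft()
        log.append(f"Command finished in worker {w}: {idx}")
        if queue:  # an idle worker immediately picks up the next index
            nxt = queue.popleft()
            running.append((w, nxt))
            log.append(f"Assigning worker {w} to build index: {nxt}")
    return log

# Three indexes, two workers: worker 0 finishes first and takes index #3.
for line in dispatch(["id", "first", "second"], jobs=2):
    print(line)
```

Under these assumptions the printed sequence mirrors the shape of the LOG lines in the -j 2 run shown later in this section.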

$ pg_repack -j 10 --no-order -d bloatdb --table tbl --wait-timeout=3600
NOTICE: Setting up workers.conns
INFO: repacking table "tbl"
LOG: Initial worker 0 to build index: CREATE UNIQUE INDEX index_22025 ON repack.table_22022 USING btree (id)
LOG: Initial worker 1 to build index: CREATE INDEX index_22027 ON repack.table_22022 USING btree (first)
LOG: Initial worker 2 to build index: CREATE INDEX index_22028 ON repack.table_22022 USING btree (second)
LOG: Command finished in worker 0: CREATE UNIQUE INDEX index_22025 ON repack.table_22022 USING btree (id)
LOG: Command finished in worker 1: CREATE INDEX index_22027 ON repack.table_22022 USING btree (first)
LOG: Command finished in worker 2: CREATE INDEX index_22028 ON repack.table_22022 USING btree (second)
$

With multiple tables specified and -j smaller than the number of indexes:

$ pg_repack -j 2 --no-order -d bloatdb --table tbl -t tbl01 --wait-timeout=3600
NOTICE: Setting up workers.conns
INFO: repacking table "tbl"
LOG: Initial worker 0 to build index: CREATE UNIQUE INDEX index_22025 ON repack.table_22022 USING btree (id)
LOG: Initial worker 1 to build index: CREATE INDEX index_22027 ON repack.table_22022 USING btree (first)
LOG: Command finished in worker 0: CREATE UNIQUE INDEX index_22025 ON repack.table_22022 USING btree (id)
LOG: Assigning worker 0 to build index #2: CREATE INDEX index_22028 ON repack.table_22022 USING btree (second)
LOG: Command finished in worker 1: CREATE INDEX index_22027 ON repack.table_22022 USING btree (first)
LOG: Command finished in worker 0: CREATE INDEX index_22028 ON repack.table_22022 USING btree (second)

(a long-running transaction was in progress; pg_repack waits for it to complete)

NOTICE: Waiting for 1 transactions to finish. First PID: 10426
NOTICE: Waiting for 1 transactions to finish. First PID: 10426
(... the NOTICE repeats until the blocking transaction finishes ...)

INFO: repacking table "tbl01"
LOG: Initial worker 0 to build index: CREATE UNIQUE INDEX index_22065 ON repack.table_22062 USING btree (id)
LOG: Initial worker 1 to build index: CREATE INDEX index_22067 ON repack.table_22062 USING btree (first)
LOG: Command finished in worker 0: CREATE UNIQUE INDEX index_22065 ON repack.table_22062 USING btree (id)
LOG: Assigning worker 0 to build index #2: CREATE INDEX index_22068 ON repack.table_22062 USING btree (second)
LOG: Command finished in worker 1: CREATE INDEX index_22067 ON repack.table_22062 USING btree (first)
LOG: Command finished in worker 0: CREATE INDEX index_22068 ON repack.table_22062 USING btree (second)
$


-- session 1: insert finished
bloatdb=# INSERT INTO tbl VALUES (generate_series(100001,3000000), 'first'||(random()*(10^3))::integer, 'second'||(random()*(10^3))::integer);
INSERT 0 2900000
bloatdb=#

-- session 2: repack finished
[postgres@localhost ~]$ pg_repack -d bloatdb --no-order --table tbl --wait-timeout=3600
INFO: repacking table "tbl"

-- session 2: bloat check
[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.tbl_pkey... (0.0%) 0 bytes wasted
2. public.idx_tbl_second... (0.0%) 0 bytes wasted
3. public.idx_tbl_first... (0.0%) 0 bytes wasted
[postgres@localhost ~]$

-- session 1: data check
bloatdb=# select count(*) from tbl;
  count
---------
 3000000
(1 row)

bloatdb=#

2. Repack all indexes of table tbl

If table tbl has multiple indexes, they are repacked one after another by default, even when -j is set greater than 1.

1) Prepare data

-- session 1: insert data
bloatdb=# delete FROM tbl;
DELETE 3000000
bloatdb=# INSERT INTO tbl VALUES (generate_series(1,100000), 'first'||(random()*(10^3))::integer, 'second'||(random()*(10^3))::integer);
INSERT 0 100000
bloatdb=# update tbl set first='chris';
UPDATE 100000
bloatdb=#

-- session 2: check bloat
[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.tbl_pkey... (41.14%) 28 MB wasted
2. public.idx_tbl_second... (4.32%) 4471 kB wasted
[postgres@localhost ~]$

2) Online insert and repack

-- session 1: insert a large amount of data
bloatdb=# INSERT INTO tbl VALUES (generate_series(100001,3000000), 'first'||(random()*(10^3))::integer, 'second'||(random()*(10^3))::integer);
(cursor blinking; still running)

-- session 2: repack while session 1 inserts
INFO: repacking indexes of "tbl"
INFO: repacking index "public"."idx_tbl_first"
INFO: repacking index "public"."idx_tbl_second"
(cursor blinking; still running)

-- session 1: insert finished
bloatdb=# INSERT INTO tbl VALUES (generate_series(100001,3000000), 'first'||(random()*(10^3))::integer, 'second'||(random()*(10^3))::integer);
INSERT 0 2900000
bloatdb=#

-- session 2: repack finished
INFO: repacking indexes of "tbl"
INFO: repacking index "public"."idx_tbl_first"
INFO: repacking index "public"."idx_tbl_second"
INFO: repacking index "public"."tbl_pkey"

3) Check table data and index bloat

-- session 2: check bloat
[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.tbl_pkey... (0.0%) 0 bytes wasted
2. public.idx_tbl_first... (0.0%) 0 bytes wasted
3. public.idx_tbl_second... (0.0%) 0 bytes wasted
[postgres@localhost ~]$

-- session 1: check table data
bloatdb=# select count(*) from tbl;
  count
---------
 3000000
(1 row)

bloatdb=#

3. Repack a specified index of table tbl

Note: --index (the specified index is rebuilt concurrently by default) cannot be combined with the --only-indexes option:

[postgres@localhost ~]$ pg_repack -d bloatdb --index idx_tbl_first --only-indexes
ERROR: cannot specify --index (-i) and --only-indexes (-x)

1) Prepare data

-- reload data
bloatdb=# delete FROM tbl;
DELETE 3000000
bloatdb=# INSERT INTO tbl VALUES (generate_series(1,100000), 'first'||(random()*(10^3))::integer, 'second'||(random()*(10^3))::integer);
INSERT 0 100000
bloatdb=# update tbl set first='chris';
UPDATE 100000
bloatdb=#

-- check bloat
[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.idx_tbl_second... (47.57%) 97 MB wasted
2. public.tbl_pkey... (9.44%) 7206 kB wasted
3. public.idx_tbl_first... (3.11%) 3040 kB wasted
[postgres@localhost ~]$

2) Online insert and repack

-- session 1: insert a large amount of data
bloatdb=# INSERT INTO tbl VALUES (generate_series(100001,3000000), 'first'||(random()*(10^3))::integer, 'second'||(random()*(10^3))::integer);
(cursor blinking; still running)

-- session 2: repack while session 1 inserts
[postgres@localhost ~]$ pg_repack -d bloatdb --index idx_tbl_second --wait-timeout=3600
INFO: repacking index "public"."idx_tbl_second"
(cursor blinking; still running)

-- session 1: insert finished
bloatdb=# INSERT INTO tbl VALUES (generate_series(100001,3000000), 'first'||(random()*(10^3))::integer, 'second'||(random()*(10^3))::integer);
INSERT 0 2900000
bloatdb=#

-- session 2: repack finished
[postgres@localhost ~]$ pg_repack -d bloatdb --index idx_tbl_second --wait-timeout=3600
INFO: repacking index "public"."idx_tbl_second"
[postgres@localhost ~]$

3) Check table data and index bloat

-- session 2: check bloat
[postgres@localhost ~]$ /home/soft/pg_bloat_check-master/pg_bloat_check.py -c "dbname=bloatdb" -t tbl
1. public.idx_tbl_first... (50.77%) 102 MB wasted
2. public.tbl_pkey... (47.6%) 65 MB wasted
3. public.idx_tbl_second... (0.0%) 0 bytes wasted
[postgres@localhost ~]$

-- session 1: check table data
bloatdb=# select count(*) from tbl;
  count
---------
 3000000
(1 row)

bloatdb=#

Test conclusions:

Under the same conditions, indexes generally bloat more readily than table data.

When disk space is tight, it is advisable to repack one index at a time.

Bloat processing generally requires free disk space of about twice the object's size, so check the available space before starting.
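The twice-the-size rule follows from how pg_repack operates: the old table and its indexes stay on disk until the rebuilt copies are swapped in, so both versions coexist briefly. A trivial capacity-planning helper (hypothetical name, sketching the rule of thumb above):

```python
def free_space_needed(object_size: float, margin: float = 1.0) -> float:
    """Rule of thumb from the text: pg_repack keeps the old object until the
    rebuilt copy is swapped in, so plan for about twice the object's total
    size in free disk space (times an optional safety margin)."""
    return 2 * object_size * margin

# The 4176 kB table from the static test would need roughly 8.2 MB free.
print(round(free_space_needed(4176) / 1024, 1))  # 8.2
```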

Check which PostgreSQL versions your pg_repack release supports; as of 2016-11-26, 9.6 was still unsupported. For details, see http://pgxn.org/dist/pg_repack/doc/pg_repack.html#Releases.

When repacking tables or indexes that carry live transactions, set the timeout parameter --wait-timeout, typically to 1800 or 3600 (special thanks to Li Hailong for this suggestion).

Special note: these observations apply only to this test environment. In production, run repacks during off-peak hours, and to keep the system's data safe, back up first and then handle the bloat.
