
PostgreSQL Master/Slave upgrade process

2025-04-10 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 Report

1. Initial state: master and slave are both running.

2. Upgrade process

Master

1). Stop the master and record its latest checkpoint location; this is where your downtime starts.

As the postgres user, run:

$ pg_ctl -D $PGDATA stop -m fast
$ pg_controldata | grep "Latest checkpoint location"
Latest checkpoint location: 0/C619840

2). Stop the slave and compare its latest checkpoint location with the master's.

$ pg_ctl -D $PGDATA stop -m fast
$ pg_controldata | grep "Latest checkpoint location"
Latest checkpoint location: 0/C619840

Because the two checkpoint locations are identical, we can confirm that the standby has applied all WAL and there is no data difference between master and slave.
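This comparison can be made explicit with a small helper. The sketch below hard-codes the two checkpoint strings where a live `pg_controldata` query would run:

```shell
# Compare the "Latest checkpoint location" of the two stopped clusters.
checkpoints_match() {
    # $1 = master checkpoint, $2 = slave checkpoint, e.g. "0/C619840"
    [ -n "$1" ] && [ "$1" = "$2" ]
}

# In practice these values would come from each host, e.g.:
#   pg_controldata | awk -F': *' '/Latest checkpoint location/ {print $2}'
master_ckpt="0/C619840"
slave_ckpt="0/C619840"

if checkpoints_match "$master_ckpt" "$slave_ckpt"; then
    echo "checkpoints match: standby has applied all WAL"
else
    echo "checkpoints differ: do not upgrade yet" >&2
fi
```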

3). Save the old cluster's configuration files

$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp
$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp

4). Upgrade the master in link mode. On a multi-core server, the "-j" option runs pg_upgrade tasks in parallel.

$ export PGDATAOLD=/u02/pgdata/testmig/
$ export PGDATANEW=/u02/pgdata/testmig95/
$ export PGBINOLD=/u01/app/postgres/product/91/db_8/bin/
$ export PGBINNEW=/u01/app/postgres/product/95/db_5/bin/
$ /u01/app/postgres/product/95/db_5/bin/pg_upgrade -k

(Usually you would do a "-c" check run before the real upgrade.) In link mode the files are hard-linked instead of copied, which is much faster and saves disk space. The downside is that you cannot revert to the old cluster if anything goes wrong. When it goes fine, it looks like this:
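The hard-link behaviour behind "-k" can be demonstrated with a throwaway file in a temporary directory; nothing here touches the real clusters:

```shell
# A hard link shares the inode of the original file, so no data is copied.
# This is why link mode is fast, and why the old cluster becomes unusable
# once the new cluster starts modifying the shared files.
workdir=$(mktemp -d)
echo "relation data" > "$workdir/old_relfile"
ln "$workdir/old_relfile" "$workdir/new_relfile"   # roughly what pg_upgrade -k does per file

# Both names now point at the same inode, so the link count is 2.
links=$(stat -c %h "$workdir/old_relfile" 2>/dev/null || stat -f %l "$workdir/old_relfile")
echo "link count: $links"
rm -rf "$workdir"
```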

Performing Consistency Checks

------------------------------

Checking cluster versions ok

Checking database user is the install user ok

Checking database connection settings ok

Checking for prepared transactions ok

Checking for reg* system OID user data types ok

Checking for contrib/isn with bigint-passing mismatch ok

Checking for invalid "line" user columns ok

Creating dump of global objects ok

Creating dump of database schemas ok

Checking for presence of required libraries ok

Checking database user is the install user ok

Checking for prepared transactions ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade

------------------------------

Analyzing all rows in the new cluster ok

Freezing all rows on the new cluster ok

Deleting files from new pg_clog ok

Copying old pg_clog to new server ok

Setting next transaction ID and epoch for new cluster ok

Deleting files from new pg_multixact/offsets ok

Setting oldest multixact ID on new cluster ok

Resetting WAL archives ok

Setting frozenxid and minmxid counters in new cluster ok

Restoring global objects in the new cluster ok

Restoring database schemas in the new cluster ok

Setting minmxid counter in new cluster ok

Adding ".old" suffix to old global/pg_control ok

If you want to start the old cluster, you will need to remove
the ".old" suffix from /u02/pgdata/testmig/global/pg_control.old.
Because "link" mode was used, the old cluster cannot be safely
started once the new cluster has been started.

Linking user relation files ok

Setting next OID for new cluster ok

Sync data directory to disk ok

Creating script to analyze new cluster ok

Creating script to delete old cluster ok

Upgrade Complete

------------------------------

Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
./delete_old_cluster.sh

5). Restore the configuration files into the new data directory

$ mkdir -p /u02/pgdata/testmig95/pg_log
$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf
$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf

6). Start and stop the upgraded instance, and check the log file for anything abnormal.

$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ -l /u02/pgdata/testmig95/pg_log/log.log start
$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ stop

The master cluster is now upgraded and completely shut down again (we plan to rebuild the standby next).
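The log check can be mechanised with a grep over the startup log. The sketch below uses a temporary sample file in place of the real /u02/pgdata/testmig95/pg_log/log.log:

```shell
# Scan a PostgreSQL log for trouble; succeeds only when no ERROR/FATAL/PANIC
# lines are present.
log_is_clean() {
    ! grep -qE "ERROR|FATAL|PANIC" "$1"
}

# Illustrative sample log instead of the real pg_log/log.log
logfile=$(mktemp)
printf '%s\n' \
  "LOG: database system was shut down at 2017-01-19 07:51:24 GMT" \
  "LOG: database system is ready to accept connections" > "$logfile"

if log_is_clean "$logfile"; then echo "log clean"; else echo "log has errors"; fi
```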

Slave

1). Save the configuration files

$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp
$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp
$ cp /u02/pgdata/testmig/recovery.conf /var/tmp

Synchronize the master's directories to the standby (this is fast because rsync re-creates the hard links on the standby instead of copying the user files):

$ cd /u02/pgdata
$ rsync --archive --delete --hard-links --size-only testmig testmig95 192.168.22.33:/u02/pgdata
$ cd /u03
$ rsync -r pgdata/testmig95 192.168.22.33:/u03/pgdata/testmig95

2). Restore the configuration files on the standby

$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf
$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf
$ cp /var/tmp/recovery.conf /u02/pgdata/testmig95/recovery.conf

3). Start the master

$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log

4). Start the standby

$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log

5). Check the standby's log file

LOG: database system was shut down at 2017-01-19 07:51:24 GMT

LOG: creating missing WAL directory "pg_xlog/archive_status"

LOG: entering standby mode

LOG: started streaming WAL from primary at 0/E000000 on timeline 1

LOG: consistent recovery state reached at 0/E024D38

LOG: redo starts at 0/E024D38

LOG: database system is ready to accept read only connections

6). Additional checks on the standby

$ psql
psql (9.5.5)
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)

postgres=# \dx
                 List of installed extensions
   Name    | Version |   Schema   |               Description
-----------+---------+------------+-----------------------------------------
 adminpack | 1.0     | pg_catalog | administrative functions for PostgreSQL
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

postgres=# \c testmig
You are now connected to database "testmig" as user "postgres".
testmig=# \dx
                      List of installed extensions
      Name      | Version |   Schema   |            Description
----------------+---------+------------+-----------------------------------------------------------------
 pg_buffercache | 1.0     | public     | examine the shared buffer cache
 pg_trgm        | 1.0     | public     | text similarity measurement and index searching based on trigrams
 plpgsql        | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)

testmig=# \d
              List of relations
 Schema |       Name       | Type  |  Owner
--------+------------------+-------+----------
 public | pg_buffercache   | view  | postgres
 public | pgbench_accounts | table | postgres
 public | pgbench_branches | table | postgres
 public | pgbench_history  | table | postgres
 public | pgbench_tellers  | table | postgres
(5 rows)

testmig=# select count(*) from pgbench_accounts;
  count
---------
 1000000
(1 row)
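On the master, the standard `pg_stat_replication` view shows whether the standby is actually streaming. The helper below only interprets that single value; the sample is hard-coded where a live query would run:

```shell
# A healthy, caught-up standby reports state "streaming" in pg_stat_replication.
replication_healthy() {
    [ "$1" = "streaming" ]
}

# The live value would come from (run on the master):
#   psql -Atc "select state from pg_stat_replication"
state="streaming"

if replication_healthy "$state"; then
    echo "standby is streaming"
else
    echo "standby not streaming yet (state: $state)" >&2
fi
```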

7). On the master, run analyze_new_cluster.sh

$ ./analyze_new_cluster.sh
This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy. When it is done, your system will
have the default level of optimizer statistics.

If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.

If you would like default statistics as quickly as possible, cancel
this script and run:
"/u01/app/postgres/product/95/db_5/bin/vacuumdb" --all --analyze-only

vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "testmig": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "testmig": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics
vacuumdb: processing database "testmig": Generating default (full) optimizer statistics

8). On the master, delete the old cluster

$ ./delete_old_cluster.sh

Copy the script to the standby, or delete the old standby data manually:

$ rm -rf /u02/pgdata/testmig
$ rm -rf /u03/pgdata/testmig
