
How to download POSTGRESQL source code installation package and realize host configuration

2025-01-17 Update From: SLTechnology News&Howtos


This article explains in detail how to download the PostgreSQL source installation package and configure the hosts. I hope you find it a practical reference.

I. Download the PostgreSQL source installation package

https://www.postgresql.org/ftp/source/v10.3/

postgresql-10.3.tar.gz

Virtual machine environment:

node1 192.168.159.151
node2 192.168.159.152

The operating system is Red Hat 6.5 and the database is PostgreSQL 10.3.

Configure /etc/hosts on both nodes:

vi /etc/hosts

192.168.159.151 node1
192.168.159.152 node2

II. Compilation and installation

(1) Create a postgres user

useradd -m -r -s /bin/bash -u 5432 postgres

(2) Install the required dependency packages

yum install gettext gcc make perl python perl-ExtUtils-Embed readline-devel zlib-devel openssl-devel libxml2-devel cmake gcc-c++ libxslt-devel openldap-devel pam-devel python-devel cyrus-sasl-devel libgcrypt-devel libgpg-error-devel libstdc++-devel

(3) Configure PostgreSQL

./configure --prefix=/opt/postgresql-10.3 --with-segsize=8 --with-wal-segsize=64 --with-wal-blocksize=16 --with-blocksize=16 --with-libedit-preferred --with-perl --with-python --with-openssl --with-libxml --with-libxslt --enable-thread-safety --enable-nls=zh_CN

If the last few lines of output look like the following, the configuration succeeded; otherwise, install the dependency package indicated by the error and re-run configure.

configure: using CPPFLAGS=-D_GNU_SOURCE -I/usr/include/libxml2
configure: using LDFLAGS=-Wl,--as-needed
configure: creating ./config.status
config.status: creating GNUmakefile
config.status: creating src/Makefile.global
config.status: creating src/include/pg_config.h
config.status: creating src/include/pg_config_ext.h
config.status: creating src/interfaces/ecpg/include/ecpg_config.h
config.status: linking src/backend/port/tas/dummy.s to src/backend/port/tas.s
config.status: linking src/backend/port/dynloader/linux.c to src/backend/port/dynloader.c
config.status: linking src/backend/port/posix_sema.c to src/backend/port/pg_sema.c
config.status: linking src/backend/port/sysv_shmem.c to src/backend/port/pg_shmem.c
config.status: linking src/backend/port/dynloader/linux.h to src/include/dynloader.h
config.status: linking src/include/port/linux.h to src/include/pg_config_os.h
config.status: linking src/makefiles/Makefile.linux to src/Makefile.port

(4) Compile

make && make install

If the last few lines of output look like the following, the build succeeded:

make[1]: Leaving directory `/opt/postgresql-10.3/src'
make -C config install
make[1]: Entering directory `/opt/postgresql-10.3/config'
/bin/mkdir -p '/opt/postgresql-10.3/lib/pgxs/config'
/usr/bin/install -c -m 755 ./install-sh '/opt/postgresql-10.3/lib/pgxs/config/install-sh'
/usr/bin/install -c -m 755 ./missing '/opt/postgresql-10.3/lib/pgxs/config/missing'
make[1]: Leaving directory `/opt/postgresql-10.3/config'
PostgreSQL installation complete.

(5) Install everything (contrib modules and documentation)

make world && make install-world

If the last few lines of output look like the following, the installation succeeded:

make[1]: Leaving directory `/opt/postgresql-10.3/src'
make -C config install
make[1]: Entering directory `/opt/postgresql-10.3/config'
/bin/mkdir -p '/opt/postgresql-10.3/lib/pgxs/config'
/usr/bin/install -c -m 755 ./install-sh '/opt/postgresql-10.3/lib/pgxs/config/install-sh'
/usr/bin/install -c -m 755 ./missing '/opt/postgresql-10.3/lib/pgxs/config/missing'
make[1]: Leaving directory `/opt/postgresql-10.3/config'
PostgreSQL installation complete.
make: Leaving directory `/opt/postgresql-10.3'

(6) Create the relevant directories and configure environment variables

mkdir -p /data/pgdata/serverlog
mkdir /data/pg

su - postgres

vi .bash_profile (delete the original contents and paste in the following)

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH

# postgres
# PostgreSQL port
PGPORT=5432
# PostgreSQL data directory
PGDATA=/data/pgdata
export PGPORT PGDATA
# language
export LANG=zh_CN.utf8
# PostgreSQL installation directory
export PGHOME=/data/pg
# PostgreSQL shared libraries
export LD_LIBRARY_PATH=$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
export DATE=`date +"%Y%m%d%H%M"`
# add the PostgreSQL binaries to PATH
export PATH=$PGHOME/bin:$PATH
# PostgreSQL man pages
export MANPATH=$PGHOME/share/man:$MANPATH
# default PostgreSQL user
export PGUSER=postgres
# default host address
export PGHOST=127.0.0.1
# default database name
export PGDATABASE=postgres
# log file location
PGLOG="$PGDATA/serverlog"

Reload the profile:

source .bash_profile
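To confirm the profile exports what the later steps rely on, it can be sourced and the variables echoed. A self-contained sketch using a temporary copy of the two critical lines:

```shell
# Write the two critical exports to a temp file, source it, and confirm
# that PGDATA and PGPORT are visible afterwards
profile=$(mktemp)
cat > "$profile" <<'EOF'
PGPORT=5432
PGDATA=/data/pgdata
export PGPORT PGDATA
EOF
. "$profile"
echo "PGDATA=$PGDATA PGPORT=$PGPORT"
# prints: PGDATA=/data/pgdata PGPORT=5432
rm -f "$profile"
```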

(7) Initialize the database

# execute the database initialization script
Log in as root:

chown -R postgres.postgres /data/

su - postgres

$ /opt/postgresql-10.3/bin/initdb --encoding=utf8 -D /data/pg/data

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server.

Start the database:

su - postgres

/opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data -l logfile start

(8) Copy the relevant commands

As root:

mkdir /data/pg/bin
cp /opt/postgresql-10.3/bin/* /data/pg/bin
chown -R postgres.postgres /data/pg/bin

III. PostgreSQL master-slave setup

1. Configure the primary

(1) Create a replication user named replica

su - postgres
psql

CREATE ROLE replica login replication encrypted password 'replica';

(2) Modify pg_hba.conf to allow replica to log in over the network (append at the end)

vi /data/pg/data/pg_hba.conf

host replication replica 192.168.159.0/24 md5
host all replica 192.168.159.0/24 trust

(3) In the primary's configuration file, modify the following items and leave the rest unchanged.

vi /data/pg/data/postgresql.conf

listen_addresses = '*'
wal_level = hot_standby              # hot standby mode
max_wal_senders = 6                  # maximum number of streaming replication connections; at least one per standby
wal_keep_segments = 10240            # important setting
wal_sender_timeout = 60s
max_connections = 512                # the standby's max_connections must be larger than the primary's
archive_mode = on                    # enable archiving
archive_command = 'cp %p /data/pg/data/archive/%f'   # adjust to your environment
checkpoint_timeout = 30min
max_wal_size = 3GB
min_wal_size = 64MB

mkdir /data/pg/data/archive
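In archive_command, %p expands to the path of the WAL segment being archived and %f to its bare file name. The copy it performs can be simulated like this (the mktemp directories are stand-ins for the real pg_wal and archive directories):

```shell
# Simulate archive_command = 'cp %p /data/pg/data/archive/%f'
wal_dir=$(mktemp -d)       # stands in for pg_wal
archive_dir=$(mktemp -d)   # stands in for /data/pg/data/archive
printf 'dummy segment' > "$wal_dir/000000010000000000000001"

p="$wal_dir/000000010000000000000001"   # what %p expands to
f=$(basename "$p")                      # what %f expands to
cp "$p" "$archive_dir/$f"

test -f "$archive_dir/000000010000000000000001" && echo "segment archived"
# prints: segment archived
```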

2. Standby environment

(1) Empty the database data directory.

rm -rf /data/pg/data/*

(2) Run on the standby:

pg_basebackup -F p --progress -D /data/pg/data/ -h 192.168.159.151 -p 5432 -U replica --password

Enter the password: replica

Note: after the copy completes, all files under the data directory on the standby must be re-owned:

chown -R postgres.postgres /data/pg/data/

(3) Create the recovery.conf file

cp /opt/postgresql-10.3/share/recovery.conf.sample /data/pg/data/recovery.conf
vi /data/pg/data/recovery.conf

standby_mode = on
primary_conninfo = 'host=192.168.159.151 port=5432 user=replica password=replica'
recovery_target_timeline = 'latest'
trigger_file = '/data/pg/data/trigger.kenyon'

(4) Configure the postgresql.conf file

vi /data/pg/data/postgresql.conf

listen_addresses = '*'
wal_level = hot_standby
max_connections = 1000               # generally set larger on the standby than on the primary
hot_standby = on                     # this node serves read-only queries, not just archiving
max_standby_streaming_delay = 30s
wal_receiver_status_interval = 10s   # how often the standby reports its status to the primary
hot_standby_feedback = on            # report standby query conflicts back to the primary

(5) Start the standby

su - postgres

/opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data/ -l logfile start

If it fails to start, copy the postmaster.opts file from the primary to the standby as follows:

scp /data/pg/data/postmaster.opts 192.168.159.152:/data/pg/data/
chown -R postgres.postgres /data/pg/data/
cd /data/pg/
chmod 700 data/

3. Verify replication

Query on the primary:

su - postgres
psql

postgres=# select client_addr,sync_state from pg_stat_replication;
   client_addr   | sync_state
-----------------+------------
 192.168.159.152 | async
(1 row)
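On the standby side, the mirror-image check is `select pg_is_in_recovery();`, which returns `t` on a working standby. When scripting the primary-side check, the sync_state column can be extracted from the psql output shown above, for instance:

```shell
# Extract sync_state from the pg_stat_replication row captured above
# (the sample row is the output shown in this guide)
row=' 192.168.159.152 | async'
state=$(printf '%s' "$row" | awk -F'|' '{gsub(/ /, "", $2); print $2}')
echo "standby sync_state is $state"
# prints: standby sync_state is async
```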

If the following problem appears when logging in as postgres:

-bash-4.1$

run as root:

cp /etc/skel/.bash* /var/lib/pgsql/

Log in again and the prompt becomes:

[postgres@node1 ~]$

4. Manual master/standby switchover

(1) Create the standby's recovery.conf file (run on the standby, 192.168.159.152)

cp $PGHOME/share/recovery.conf.sample $PGDATA/recovery.conf

Configure the following parameters:

standby_mode = 'on'                  # mark this instance as a standby server
primary_conninfo = 'host=192.168.159.151 port=5432 user=replica password=replica'   # identifies the primary
trigger_file = '/data/pg/data/trigger.kenyon'    # the trigger file

(2) Stop the primary (run on the primary, 192.168.159.151)

/opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data/ -l logfile stop

(3) Promote the standby to primary (run on the standby, 192.168.159.152)

To promote the standby, simply create the file named by the trigger_file parameter in the standby's recovery.conf:

touch /data/pg/data/trigger.kenyon

After a moment, recovery.conf is renamed to recovery.done, indicating the standby has been promoted.

Check the logfile; promotion succeeded when the following messages appear:

2018-06-04 21:11:01.137 PDT [12818] LOG: trigger file found: /data/pg/data/trigger.kenyon
2018-06-04 21:11:01.148 PDT [12818] LOG: redo done at 0/C02A390
2018-06-04 21:11:01.172 PDT [12818] LOG: selected new timeline ID: 2
2018-06-04 21:11:05.442 PDT [12818] LOG: archive recovery complete
2018-06-04 21:11:05.568 PDT [12817] LOG: database system is ready to accept connections

(4) Modify the original primary's configuration file

vi /data/pg/data/postgresql.conf

max_connections = 1500               # the standby's value must be larger than the primary's

(5) Bring the original primary back as a standby (run on the original primary, 192.168.159.151)

Create a $PGDATA/recovery.conf file and configure the following parameters:

vi /data/pg/data/recovery.conf

recovery_target_timeline = 'latest'
standby_mode = 'on'                  # mark this instance as a standby server
primary_conninfo = 'host=192.168.159.152 port=5432 user=replica password=replica'   # identifies the new primary
trigger_file = '/data/pg/data/trigger.kenyon'    # the trigger file

Modify pg_hba.conf on the new primary (now 192.168.159.152) and add the following line:

vi /data/pg/data/pg_hba.conf

host replication replica 192.168.159.151/32 md5

Start the original primary, which is now the standby (192.168.159.151):

/opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data/ -l logfile start

If the current standby's logfile shows errors like the following:

2018-06-05 00:08:00.326 PDT [9729] DETAIL: End of WAL reached on timeline 1 at 0/C02A400.
2018-06-05 00:08:00.327 PDT LOG: new timeline 2 forked off current database system timeline 1 before current recovery point 0/C02A630
2018-06-05 00:08:05.322 PDT [9729] LOG: restarted WAL streaming at 0/C000000 on timeline 1
2018-06-05 00:08:05.327 PDT LOG: replication terminated by primary server
2018-06-05 00:08:05.327 PDT [9729] DETAIL: End of WAL reached on timeline 1 at 0/C02A400.
2018-06-05 00:08:05.329 PDT LOG: new timeline 2 forked off current database system timeline 1 before current recovery point 0/C02A630
2018-06-05 00:08:10 PDT [9729] LOG: restarted WAL streaming at 0/C000000 on timeline 1
2018-06-05 00:08:15 PDT [9729] LOG: replication terminated by primary server
2018-06-05 00:08:15 PDT [9729] DETAIL: End of WAL reached on timeline 1 at 0/C02A400.
2018-06-05 00:08:15 PDT [9725] LOG: new timeline 2 forked off current database system timeline 1 before current recovery point 0/C02A630

then run on the current primary:

scp /data/pg/data/pg_wal/00000002.history 192.168.159.151:/data/pg/data/pg_wal/

(6) On both nodes:

vi /data/pg/data/recovery.conf

restore_command = 'cp /data/pg/data/archive/%f %p'

mkdir /data/pg/data/archive
chown postgres.postgres /data/pg/data/archive

vi /data/pg/data/postgresql.conf

archive_command = 'cp %p /data/pg/data/archive/%f'

IV. Install pgpool-II

(1) Configure passwordless SSH login between the two machines

Node 1:

[postgres@node1 ~]$ ssh-keygen -t rsa
(accept all defaults)
[postgres@node1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[postgres@node1 ~]$ chmod go-rwx ~/.ssh/*
[postgres@node1 ~]$ cd ~/.ssh

Node 2:

[postgres@node2 ~]$ ssh-keygen -t rsa
(accept all defaults)
[postgres@node2 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[postgres@node2 ~]$ chmod go-rwx ~/.ssh/*
[postgres@node2 ~]$ cd ~/.ssh

Node 1:

[postgres@node1 ~]$ scp id_rsa.pub 192.168.159.152:/home/postgres/.ssh/id_rsa.pub1

Node 2:

[postgres@node2 ~]$ cat id_rsa.pub1 >> authorized_keys
[postgres@node2 ~]$ scp id_rsa.pub 192.168.159.151:/home/postgres/.ssh/id_rsa.pub2

Node 1:

[postgres@node1 ~]$ cat id_rsa.pub2 >> authorized_keys

(2) Install pgpool-II

Install pgpool-II:

yum -y install libmemcached postgresql-libs.x86_64 openssl098e

(Note: these packages must be installed from the YUM repositories first, otherwise pgpool-II cannot be installed.)

rpm -ivh pgpool-II-pg10-3.7.2-1pgdg.rhel6.x86_64.rpm

Generate the MD5 hash of the admin password:

pg_md5 -u postgres -p

The password is set to postgres; the output hash is:

e8a48653851e28c69d0506508fb27fc5

vi /etc/pgpool-II/pcp.conf    # append as the last line

postgres:e8a48653851e28c69d0506508fb27fc5
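pcp.conf expects `username:md5hash`, where the hash is the plain MD5 of the password. If pg_md5 is not at hand, coreutils produces the same digest; the password here is the `postgres` used above:

```shell
# Reproduce the pcp.conf entry with md5sum instead of pg_md5
hash=$(printf '%s' postgres | md5sum | awk '{print $1}')
echo "postgres:$hash"
# prints: postgres:e8a48653851e28c69d0506508fb27fc5
```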

mkdir -p /opt/pgpool/oiddir
cp /etc/pgpool-II/pgpool.conf /etc/pgpool-II/pgpool.conf.bak

Check the network interface with ifconfig:

[root@node1 pgpool-II]# ifconfig
eth2      Link encap:Ethernet  HWaddr 00:0C:29:9E:E8:6D
          inet addr:192.168.159.152  Bcast:192.168.159.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe9e:e86d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14557 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10820 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1889055 (1.8 MiB)  TX bytes:1485329 (1.4 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:5029 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5029 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2786891 (2.6 MiB)  TX bytes:2786891 (2.6 MiB)

Note: my interface here is eth2, so the interface-related settings in the configurations below for both nodes should be changed to match your own interface.

If the interface is misconfigured, an error like this will be reported: arping: unknown iface eth0

Node 1:

vi /etc/pgpool-II/pgpool.conf

listen_addresses = '*'
port = 9999
socket_dir = '/opt/pgpool'
pcp_port = 9898
pcp_socket_dir = '/opt/pgpool'
backend_hostname0 = '192.168.159.151'   # data node node1
backend_port0 = 5432
backend_weight0 = 1
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = '192.168.159.152'   # data node node2
backend_port1 = 5432
backend_weight1 = 1
backend_flag1 = 'ALLOW_TO_FAILOVER'
enable_pool_hba = on
pool_passwd = 'pool_passwd'
authentication_timeout = 60
ssl = off
num_init_children = 32
max_pool = 4
child_life_time = 300
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
log_destination = 'syslog'
print_timestamp = on
log_connections = on
log_hostname = on
log_statement = on
log_per_node_statement = off
log_standby_delay = 'none'
syslog_facility = 'LOCAL0'
syslog_ident = 'pgpool'
debug_level = 0
pid_file_name = '/opt/pgpool/pgpool.pid'
logdir = '/tmp'
connection_cache = on
reset_query_list = 'ABORT; DISCARD ALL'
replication_mode = off
replicate_select = off
insert_lock = on
lobj_lock_table = ''
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off
load_balance_mode = on
ignore_leading_white_space = on
white_function_list = ''
black_function_list = 'nextval,setval'
master_slave_mode = on               # streaming replication mode
master_slave_sub_mode = 'stream'     # streaming replication mode
sr_check_period = 5
sr_check_user = 'replica'
sr_check_password = 'replica'
delay_threshold = 16000
follow_master_command = ''
parallel_mode = off
pgpool2_hostname = ''
system_db_hostname = 'localhost'
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''
health_check_period = 5
health_check_timeout = 20
health_check_user = 'replica'
health_check_password = 'replica'
health_check_max_retries = 3
health_check_retry_delay = 1
failover_command = '/opt/pgpool/failover_stream.sh %d %H /data/pg/data/trigger.kenyon'
failback_command = ''
fail_over_on_backend_error = on
search_primary_node_timeout = 10
recovery_user = 'nobody'
recovery_password = ''
recovery_1st_stage_command = ''
recovery_2nd_stage_command = ''
recovery_timeout = 90
client_idle_limit_in_recovery = 0
use_watchdog = on
trusted_servers = ''
ping_path = '/bin'
wd_hostname = '192.168.159.151'
wd_port = 9000
wd_authkey = ''
delegate_IP = '192.168.159.153'
ifconfig_path = '/sbin'
if_up_cmd = 'ifconfig eth2:0 inet $_IP_$ netmask 255.255.255.0'
if_down_cmd = 'ifconfig eth2:0 down'
arping_path = '/usr/sbin'            # arping command path
arping_cmd = 'arping -I eth2 -U $_IP_$ -w 1'   # -I eth2 specifies the outbound interface
clear_memqcache_on_escalation = on
wd_escalation_command = ''
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = '192.168.159.152'   # the peer host
heartbeat_destination_port0 = 9694
heartbeat_device0 = 'eth2'
wd_life_point = 3
wd_lifecheck_query = 'SELECT 1'
wd_lifecheck_dbname = 'template1'
wd_lifecheck_user = 'nobody'
wd_lifecheck_password = ''
other_pgpool_hostname0 = '192.168.159.152'   # the peer pgpool
other_pgpool_port0 = 9999
other_wd_port0 = 9000
relcache_expire = 0
relcache_size = 256
check_temp_table = on
memory_cache_enabled = off
memqcache_method = 'shmem'
memqcache_memcached_host = 'localhost'
memqcache_memcached_port = 11211
memqcache_total_size = 67108864
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 409600
memqcache_cache_block_size = 1048576
memqcache_oiddir = '/opt/pgpool/oiddir'   # create oiddir under /opt/pgpool first
white_memqcache_table_list = ''
black_memqcache_table_list = ''

Node 2:

vi /etc/pgpool-II/pgpool.conf

listen_addresses = '*'
port = 9999
socket_dir = '/opt/pgpool'
pcp_port = 9898
pcp_socket_dir = '/opt/pgpool'
backend_hostname0 = '192.168.159.151'
backend_port0 = 5432
backend_weight0 = 1
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = '192.168.159.152'
backend_port1 = 5432
backend_weight1 = 1
backend_flag1 = 'ALLOW_TO_FAILOVER'
enable_pool_hba = on
pool_passwd = 'pool_passwd'
authentication_timeout = 60
ssl = off
num_init_children = 32
max_pool = 4
child_life_time = 300
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
log_destination = 'syslog'
print_timestamp = on
log_connections = on
log_hostname = on
log_statement = on
log_per_node_statement = off
log_standby_delay = 'none'
syslog_facility = 'LOCAL0'
syslog_ident = 'pgpool'
debug_level = 0
pid_file_name = '/opt/pgpool/pgpool.pid'
logdir = '/tmp'
connection_cache = on
reset_query_list = 'ABORT; DISCARD ALL'
replication_mode = off
replicate_select = off
insert_lock = on
lobj_lock_table = ''
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off
load_balance_mode = on
ignore_leading_white_space = on
white_function_list = ''
black_function_list = 'nextval,setval'
master_slave_mode = on
master_slave_sub_mode = 'stream'
sr_check_period = 0
sr_check_user = 'replica'
sr_check_password = 'replica'
delay_threshold = 16000
follow_master_command = ''
parallel_mode = off
pgpool2_hostname = ''
system_db_hostname = 'localhost'
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''
health_check_period = 0
health_check_timeout = 20
health_check_user = 'nobody'
health_check_password = ''
health_check_max_retries = 0
health_check_retry_delay = 1
failover_command = '/opt/pgpool/failover_stream.sh %d %H /file/data/trigger/file'
failback_command = ''
fail_over_on_backend_error = on
search_primary_node_timeout = 10
recovery_user = 'nobody'
recovery_password = ''
recovery_1st_stage_command = ''
recovery_2nd_stage_command = ''
recovery_timeout = 90
client_idle_limit_in_recovery = 0
use_watchdog = off
trusted_servers = ''
ping_path = '/bin'
wd_hostname = ''
wd_port = 9000
wd_authkey = ''
delegate_IP = '192.168.159.153'
ifconfig_path = '/sbin'
if_up_cmd = 'ifconfig eth2:0 inet $_IP_$ netmask 255.255.255.0'
if_down_cmd = 'ifconfig eth2:0 down'
arping_path = '/usr/sbin'            # arping command path
arping_cmd = 'arping -I eth2 -U $_IP_$ -w 1'   # -I eth2 specifies the outbound interface
clear_memqcache_on_escalation = on
wd_escalation_command = ''
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = '192.168.159.151'
heartbeat_destination_port0 = 9694
heartbeat_device0 = 'eth2'
wd_life_point = 3
wd_lifecheck_query = 'SELECT 1'
wd_lifecheck_dbname = 'template1'
wd_lifecheck_user = 'nobody'
wd_lifecheck_password = ''
other_pgpool_hostname0 = '192.168.159.152'
other_pgpool_port0 = 9999
other_wd_port0 = 9000
relcache_expire = 0
relcache_size = 256
check_temp_table = on
memory_cache_enabled = off
memqcache_method = 'shmem'
memqcache_memcached_host = 'localhost'
memqcache_memcached_port = 11211
memqcache_total_size = 67108864
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 409600
memqcache_cache_block_size = 1048576
memqcache_oiddir = '/opt/pgpool/oiddir'
white_memqcache_table_list = ''
black_memqcache_table_list = ''

vi /opt/pgpool/failover_stream.sh

#!/bin/sh
# Failover command for streaming replication.
# This script assumes that DB node 0 is primary, and 1 is standby.
#
# If standby goes down, do nothing. If primary goes down, create a
# trigger file so that standby takes over primary node.
#
# Arguments: $1: failed node id. $2: new master hostname. $3: path to
# trigger file.

failed_node=$1
new_master=$2
trigger_file=$3

# Do nothing if standby goes down.
# if [ $failed_node = 1 ]; then
#     exit 0
# fi

/usr/bin/ssh -T $new_master /bin/touch $trigger_file

exit 0

Authorize the script:

chmod u+x /opt/pgpool/failover_stream.sh
scp /opt/pgpool/failover_stream.sh 192.168.159.152:/opt/pgpool/
cp /etc/pgpool-II/pool_hba.conf /etc/pgpool-II/pool_hba.conf.bak
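The script's branch logic can be dry-run locally before wiring it into pgpool, replacing the ssh call with a local touch (the mktemp path is a stand-in for the real trigger file):

```shell
# Dry-run of the failover_stream.sh logic: node 0 (the primary) fails,
# so a trigger file must be created on the new master
failed_node=0
trigger_file=$(mktemp -u)   # stand-in for /data/pg/data/trigger.kenyon

if [ "$failed_node" = 1 ]; then
    echo "standby failed, nothing to do"
else
    touch "$trigger_file"   # in production: ssh -T $new_master /bin/touch $trigger_file
fi

test -f "$trigger_file" && echo "trigger file created"
# prints: trigger file created
rm -f "$trigger_file"
```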

vi /etc/pgpool-II/pool_hba.conf

host all all 192.168.159.151/32 trust
host replication replica 192.168.159.151/32 trust
host postgres postgres 192.168.159.151/32 trust
host all all 192.168.159.152/32 trust
host replication replica 192.168.159.152/32 trust
host postgres postgres 192.168.159.152/32 trust
host all all 192.168.159.153/32 trust
host replication replica 192.168.159.153/32 trust
host postgres postgres 192.168.159.153/32 trust

Note: 192.168.159.153 is the VIP address.

scp /etc/pgpool-II/pool_hba.conf 192.168.159.140:/etc/pgpool-II/

Start pgpool:

pgpool -n &

Stop pgpool:

pgpool -m fast stop

Log in through pgpool:

/data/pg/bin/psql -h 192.168.159.151 -p 9999 -U postgres -d postgres

You can also log in through the VIP:

/data/pg/bin/psql -h 192.168.159.153 -p 9999 -U postgres -d postgres

View the pgpool nodes:

show pool_nodes;

postgres=# show pool_nodes;
 node_id |    hostname     | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay
---------+-----------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0       | 192.168.159.151 | 5432 | up     | 0.500000  | primary | 0          | true              | 0
 1       | 192.168.159.152 | 5432 | down   | 0.500000  | standby | 0          | false             | 0
(2 rows)
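Note that node 1 shows status `down` in this output. A detached backend does not rejoin automatically; pgpool provides the `pcp_attach_node` command for reattaching it once the standby is healthy again. When scripting, the status column can be checked mechanically, for instance:

```shell
# Flag any backend whose status column is "down" in pool_nodes-style rows
# (the sample rows mirror the output shown above)
rows='0|192.168.159.151|5432|up
1|192.168.159.152|5432|down'
printf '%s\n' "$rows" | awk -F'|' '$4 == "down" { print "node " $1 " (" $2 ") is down" }'
# prints: node 1 (192.168.159.152) is down
```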

V. Install keepalived

tar xvf keepalived-1.4.2.tar.gz
cd keepalived-1.4.2
./configure
make
make install
mkdir /etc/keepalived
cd /etc/keepalived/

Node 1:

vi /etc/keepalived/keepalived.conf

global_defs {
    router_id node1
}
vrrp_instance VI_1 {
    state BACKUP              # both nodes are BACKUP; priority decides the master
    interface eth2:0          # monitored network interface
    virtual_router_id 51      # must be identical on master and backup
    priority 100              # master and backup use different priorities; the higher value wins
    advert_int 1              # VRRP multicast advertisement interval in seconds
    authentication {
        auth_type PASS        # VRRP authentication; must match on both nodes
        auth_pass 1111        # password
    }
    virtual_ipaddress {
        192.168.159.153/24    # VRRP HA virtual address
    }
}

Node 2:

vi /etc/keepalived/keepalived.conf

global_defs {
    router_id node2
}
vrrp_instance VI_1 {
    state BACKUP              # both nodes are BACKUP; priority decides the master
    interface eth2:0          # monitored network interface
    virtual_router_id 51      # must be identical on master and backup
    priority 90               # master and backup use different priorities; the higher value wins
    advert_int 1              # VRRP multicast advertisement interval in seconds
    authentication {
        auth_type PASS        # VRRP authentication; must match on both nodes
        auth_pass 1111        # password
    }
    virtual_ipaddress {
        192.168.159.153/24    # VRRP HA virtual address
    }
}

Start keepalived:

keepalived -D -f /etc/keepalived/keepalived.conf

View the log:

tail -f /var/log/messages

View the process:

ps -ef | grep keepalive

Attention: the pgpool high-availability configuration below is from my own testing. Some of the key material I worked out myself and could not find online.

1. Set the relevant permissions (run on both nodes)

As root, configure execute permissions for ifconfig and arping:

chmod u+s /sbin/ifconfig
chmod u+s /sbin/ifdown
chmod u+s /sbin/ifup
chmod u+s /usr/sbin/arping
chmod 755 /opt/pgpool/failover_stream.sh
chown postgres.root /opt/pgpool/failover_stream.sh

2. Configure the pgpool log (run on both nodes); append as the last line:

vi /etc/rsyslog.conf

local0.* /var/log/pgpool.log

/etc/init.d/rsyslog restart

3. Configure the key script failover_stream.sh (run on both nodes)

Delete or comment out the original ssh line, then:

When the primary is 192.168.159.151:

vi /opt/pgpool/failover_stream.sh

ifconfig eth2:0 down
/usr/bin/ssh 192.168.159.152 /bin/touch /data/pg/data/trigger.kenyon
/usr/bin/ssh 192.168.159.152 ifconfig eth2:0 up

When the primary is 192.168.159.152:

vi /opt/pgpool/failover_stream.sh

ifconfig eth2:0 down
/usr/bin/ssh 192.168.159.151 /bin/touch /data/pg/data/trigger.kenyon
/usr/bin/ssh 192.168.159.151 ifconfig eth2:0 up

4. Create an eth2:0 alias interface (run on both nodes)

cd /etc/sysconfig/network-scripts/
cp ifcfg-eth2 ifcfg-eth2:0
vi ifcfg-eth2:0

DEVICE="eth2:0"
BOOTPROTO="static"
HWADDR="00:0c:29:0c:7d:4f"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
#UUID="e618ec6a-8bb0-4202-8fe6-54febd0f8c76"
IPADDR=192.168.159.153
NETMASK=255.255.255.0
GATEWAY=192.168.159.1

5. Modify the pgpool.conf configuration file

vi /etc/pgpool-II/pgpool.conf

failover_command = '/opt/pgpool/failover_stream.sh'

Commenting out the original failover_command line also works.

6. Modify the pgpool.conf configuration file

vi /etc/pgpool-II/pgpool.conf

heartbeat_device0 = 'eth2:0'

Section III, point 4 above gives the manual master/standby switchover steps.

The VIP still cannot float and switch automatically, but manual master/standby switching works. (With the configuration above, both nodes end up holding VIP 192.168.159.153, which is very strange.)

Currently, switching master/standby manually can achieve automatic VIP floating, but only if pgpool is stopped first. For example, if the master node's PostgreSQL instance stops and the master node's pgpool stops as well, then the standby node brings up VIP 192.168.159.153 automatically within a few minutes.

Note that after the master/standby switchover completes, you must verify it with the commands, for example by logging in:

/data/pg/bin/psql -h 192.168.159.153 -p 9999 -U postgres -d postgres

show pool_nodes;
select client_addr,sync_state from pg_stat_replication;

Once the information looks correct, try creating a test table to see whether writes work:

create table test123 (tt int);

Note: the /data/pg/data/gprof directory accumulates many binary files that take up a lot of storage space; I do not know what they are and would welcome guidance.

VI. Errors encountered after completing the PG master-slave + pgpool-II setup

1. PG cannot be logged in to

Originally, after setting up the PG master-slave and testing replication, logging in worked without any problem.

However, during the subsequent installation and configuration of pgpool-II for high availability, it was suddenly found that PG could not be logged in to. The error was:

[postgres@node1 ~]$ psql
psql: symbol lookup error: psql: undefined symbol: PQconnectdbParams

Because PG writes an execution log when started:

[postgres@node1 ~]$ /opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data -l logfile start

the error messages can be viewed in the logfile in the postgres home directory:

2018-05-31 23:00:18.703 PDT [12734] FATAL: could not load library "/opt/postgresql-10.3/lib/libpqwalreceiver.so": /opt/postgresql-10.3/lib/libpqwalreceiver.so: undefined symbol: PQescapeIdentifier
2018-05-31 23:00:23.709 PDT [12736] FATAL: could not load library "/opt/postgresql-10.3/lib/libpqwalreceiver.so": /opt/postgresql-10.3/lib/libpqwalreceiver.so: undefined symbol: PQescapeIdentifier
(the same FATAL message repeats every five seconds)

As a temporary fix, run:

export LD_LIBRARY_PATH=/opt/postgresql-10.3/lib

(Note: LD_LIBRARY_PATH takes a directory, not the path of the .so file itself.) With the missing library now resolvable, restart PG and logging in works again.

For a permanent solution, do the following:

vi ~/.bash_profile

Add on the last line:

export LD_LIBRARY_PATH=/opt/postgresql-10.3/lib

2. pgpool fails to start

Starting pgpool with the command pgpool -n & finds that it cannot be started:

[root@node1 ~]# ps -ef | grep pgpool

root      3163  3081  0 19:57 pts/0    00:00:00 pgpool -n
root      3205  3163  0 19:57 pts/0    00:00:00 pgpool: health check process (0)
root      3206  3163  0 19:57 pts/0    00:00:02 pgpool: health check process (1)
root      4505  4455  0 20:37 pts/1    00:00:00 grep pgpool

Checking the pgpool processes with ps shows leftover (residual) processes. Kill them:

kill 3205

kill 3206

pgpool then restarts successfully.
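The manual cleanup above can be scripted. A minimal sketch, assuming leftover processes are named exactly "pgpool" (pgrep -x matches the process name only, so the script does not match itself):

```shell
# Sketch: find and terminate leftover pgpool processes before a
# restart; print a message when none are found.
stale=$(pgrep -x pgpool || true)
if [ -n "$stale" ]; then
    echo "killing stale pgpool processes: $stale"
    kill $stale
else
    echo "no stale pgpool processes found"
fi
```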

A successfully started pgpool looks like this:

[root@node1 ~]# ps -ef | grep pool

root     12828  2231  0 19:58 pts/0    00:00:00 pgpool -n
root     12829 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12830 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12831 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12832 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12833 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12834 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12835 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12836 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12837 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12838 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12839 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12840 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12841 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12842 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12843 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12844 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12845 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12846 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12847 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12848 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12849 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12850 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12851 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12852 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12853 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12854 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12855 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12856 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12857 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12858 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12859 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12860 12828  0 19:58 pts/0    00:00:00 pgpool: wait for connection request
root     12861 12828  0 19:58 pts/0    00:00:00 pgpool: PCP: wait for connection request
root     12862 12828  0 19:58 pts/0    00:00:00 pgpool: worker process
root     12863 12828  0 19:58 pts/0    00:00:00 pgpool: health check process (0)
root     12864 12828  0 19:58 pts/0    00:00:00 pgpool: health check process (1)
root     14061 14045  0 20:37 pts/1    00:00:00 grep pool

3. PG database cannot be started

[postgres@node2 data]$ /opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data/ -l logfile start

The error reported:

waiting for server to start.... stopped waiting

pg_ctl: could not start server

Examine the log output.

Following the error prompt, view the log:

tail logfile

2018-05-30 22:40:05.208 PDT [16383] LOG: consistent recovery state reached at 0/8000130
2018-05-30 22:40:05.208 PDT [16382] LOG: database system is ready to accept read only connections
2018-05-30 22:40:05.242 PDT [16387] LOG: started streaming WAL from primary at 0/C000000 on timeline 1
2018-05-30 23:19:59.272 PDT [16382] LOG: received smart shutdown request
2018-05-30 23:19:59.325 PDT [16387] FATAL: terminating walreceiver process due to administrator command
2018-05-30 23:19:59.332 PDT [16384] LOG: shutting down
2018-05-30 23:19:59.426 PDT [16382] LOG: database system is shut down
2018-06-03 23:59:31.974 PDT [15817] FATAL: could not write lock file "postmaster.pid": No space left on device
2018-06-04 00:00:32.287 PDT [15840] FATAL: could not write lock file "postmaster.pid": No space left on device
2018-06-04 00:01:54.556 PDT [15867] FATAL: could not write lock file "postmaster.pid": No space left on device

Checking disk space with df -h confirms that it is indeed exhausted:

[postgres@node2 data]$ df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        18G   17G   18M 100% /
tmpfs           242M   72K  242M   1% /dev/shm
/dev/sda1       291M   39M  238M  14% /boot
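A quick check like this can be automated so the condition is caught before pg_ctl fails. A minimal sketch: DATADIR defaults to / so the snippet runs anywhere, but in this setup it would be pointed at /data/pg/data; the 90% threshold is an arbitrary illustration.

```shell
# Sketch: report the usage percentage of the filesystem that holds
# the PG data directory, and warn when it crosses a threshold.
DATADIR=${DATADIR:-/}
pct=$(df -P "$DATADIR" | awk 'NR==2 { gsub(/%/, "", $5); print $5 }')
echo "filesystem for $DATADIR is ${pct}% full"
if [ "$pct" -ge 90 ]; then
    echo "WARNING: clean up space before starting PostgreSQL"
fi
```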

4. Errors in the slave database log after a master-slave switch

After master-slave switching, an error is reported from the logfile log of the slave library:

2018-07-01 21:08:41 PDT [2644] LOG: listening on IPv4 address "0.0.0.0", port 5432
2018-07-01 21:08:41 PDT [2644] LOG: listening on IPv6 address "::", port 5432
2018-07-01 21:08:41.893 PDT [2644] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-07-01 21:08:41 PDT [2645] LOG: database system was shut down at 2018-07-01 21:08:41 PDT
2018-07-01 21:08:42.008 PDT [2644] LOG: database system is ready to accept connections

The fix is to add a recovery.conf file under the slave library's data directory, configured as follows:

vi /data/pg/data/recovery.conf

recovery_target_timeline = 'latest'

standby_mode = 'on'    # marks this PG as a standby server

primary_conninfo = 'host=192.168.159.152 port=5432 user=replica password=replica'    # identifies the primary library

trigger_file = '/data/pg/data/trigger.kenyon'    # identifies the failover trigger file
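With this configuration, promoting the standby is done by creating the file named in trigger_file: when it appears, the standby exits recovery and becomes a primary. A minimal sketch: TRIGGER defaults to a throwaway temp path here so it is safe to run anywhere, whereas in this setup the real path is /data/pg/data/trigger.kenyon.

```shell
# Sketch: failover is triggered simply by creating the trigger file
# on the standby; PostgreSQL notices it and leaves recovery mode.
TRIGGER=${TRIGGER:-$(mktemp -d)/trigger.kenyon}
touch "$TRIGGER"
[ -f "$TRIGGER" ] && echo "trigger file created: $TRIGGER"
```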

This is the end of the article on "how to download the POSTGRESQL source installation package and realize the host configuration". I hope the above content can be of some help to you, so that you can learn more knowledge. if you think the article is good, please share it for more people to see.
