System version: CentOS 7
8 cores
32 GB of memory
Master and slave server IPs:
192.168.125.33  PostgreSQL master
192.168.125.34  PostgreSQL slave
1. Create a database management account
# groupadd pggroup
# useradd -g pggroup pguser
# passwd pguser
2. Install the dependency packages
# yum install readline-devel zlib-devel -y
Without these two packages, the PostgreSQL build will fail with errors about the missing libraries.
# yum install postgresql-contrib
Download uuid-devel: http://mirror.centos.org/centos/7/os/x86_64/Packages/uuid-devel-1.6.2-26.el7.x86_64.rpm
The version must match the uuid package already installed on the system (uuid-1.6.2-26.el7.x86_64).
# rpm -ivh uuid-devel-1.6.2-26.el7.x86_64.rpm
Without this package, configuring PostgreSQL with --with-ossp-uuid fails with the following error:
configure: error: library 'ossp-uuid' or 'uuid' is required for OSSP-UUID
If your yum repositories already provide this package, you can install it with yum instead.
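Before downloading, you can check which uuid build the system already carries so that uuid-devel matches it; a minimal check, assuming the stock CentOS 7 packages:
# rpm -q uuid
# yum info uuid-devel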
3. Compilation
Upload the postgresql-9.6.3.tar.gz package to both the master and slave servers and compile it:
# tar -zxf postgresql-9.6.3.tar.gz
# cd postgresql-9.6.3
# ./configure --prefix=/usr/local/postgresql --with-ossp-uuid
# make && make install
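To confirm that the binaries landed under the chosen prefix, a quick check (path follows the --prefix used above):
# /usr/local/postgresql/bin/pg_config --version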
4. Compile the uuid and pg_stat_statements modules
If you do not need UUID-type functions in this environment and do not need the pg_stat_statements module to analyze the resource consumption of SQL statements, you can skip the UUID and pg_stat_statements steps; in that case, adjust the configuration file accordingly.
# cd contrib/uuid-ossp
# make && make install
If uuid-ossp is not built as above, creating the extension later fails with:
postgres=# create extension "uuid-ossp";
ERROR: could not open extension control file "/usr/local/postgresql/share/extension/uuid-ossp.control": No such file or directory
# cd ../pg_stat_statements
# make && make install
5. Configure environment variables
# vi /etc/profile
export PGHOME=/usr/local/postgresql
export PGDATA=/data/pg_data
export PATH=$PATH:$PGHOME/bin
# source /etc/profile
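A quick sanity check that the variables took effect in the current shell:
# which psql
# echo $PGDATA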
6. Create the data directory and grant permissions
# mkdir /data/pg_data
# chown -R pguser:pggroup /data/pg_data
# chown -R pguser:pggroup /usr/local/postgresql
7. Initialize the database on the master server
First, switch to the ordinary user:
# su pguser
$ initdb -D /data/pg_data
Start the service:
$ pg_ctl -D /data/pg_data start
8. Create a data synchronization user
$ psql -h 127.0.0.1 -d postgres
postgres=# create role repuser login replication encrypted password 'yourpassword';
Change the login password of the administrative account:
postgres=# ALTER USER pguser WITH PASSWORD 'yourpassword';
postgres=# \q
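To confirm that the replication role was created with the right attribute, the following query is a minimal check (role name as created above):
$ psql -h 127.0.0.1 -d postgres -c "select rolname, rolreplication from pg_roles where rolname = 'repuser';"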
9. Modify the configuration files
Edit pg_hba.conf:
bash-4.2$ vi /data/pg_data/pg_hba.conf
# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     md5
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
host    all             all             192.168.125.0/24        md5
host    all             all             192.168.99.0/24         md5
# replication privilege
host    replication     repuser         192.168.125.0/24        md5
Description:
TYPE defines how clients connect to PostgreSQL: "local" uses local Unix-domain sockets; "host" uses TCP/IP connections (SSL or non-SSL) and matches both IPv4 and IPv6 addresses; "hostssl" matches only SSL TCP/IP connections; "hostnossl" matches only non-SSL TCP/IP connections.
DATABASE specifies which database(s) the entry applies to; multiple databases are separated by commas. Entries are matched from top to bottom and the first match wins, so an "all" entry effectively catches only the connections that no earlier, more specific entry matched.
USER specifies which database users (formally called roles in PostgreSQL) the entry applies to; multiple users are separated by commas.
ADDRESS is left empty for local entries. For host entries it can be an IPv4 or IPv6 address and may specify a single host or a network segment.
With the entries above, servers in the 192.168.125.X and 192.168.99.X segments can access the database.
METHOD specifies how client authentication is handled. The commonly used methods are ident, md5, password, trust, and reject.
ident is the default local authentication method for PostgreSQL on Linux: an operating system user who can log in to the server (note: not a database user) can log in to the database as the database user mapped to that OS user, without a password.
md5 is the commonly used password authentication method; if you do not use ident, md5 is usually the best choice. The password is transmitted to the database as an MD5 hash, which is more secure and does not require an operating system user of the same name.
password sends the password to the database in clear text and is not recommended in a production environment.
trust allows login without a password or ident as long as the database user name is known, and is not recommended in a production environment.
reject denies authentication.
The last line is the entry for the slave server, i.e. the connection it uses to replicate from the master database.
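After listen_addresses is opened up in the next step and the service is restarted, you can verify that the md5 host entries take effect by connecting with a password prompt from a machine in one of the allowed segments; an illustrative check using the addresses and user defined above:
$ psql -h 192.168.125.33 -U repuser -d postgres -W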
Edit postgresql.conf:
bash-4.2$ vi /data/pg_data/postgresql.conf
Add the following settings. If the file already contains these parameters (including commented-out lines starting with the hash sign '#'), remove those lines first so they do not conflict.
listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 6
wal_keep_segments = 10240
max_connections = 512
archive_mode = on
archive_command = 'cp %p /data/pg_data/pg_archive/%f'
# configure pg_stat_statements
shared_preload_libraries = 'pg_stat_statements'
track_io_timing = on
track_activity_query_size = 2048
pg_stat_statements.max = 10000
pg_stat_statements.track = all
pg_stat_statements.track_utility = off
pg_stat_statements.save = on
Create a directory for the archived files:
bash-4.2$ mkdir /data/pg_data/pg_archive
# adjust the following for performance tuning if necessary
shared_buffers = 3276MB
work_mem = 655MB
effective_cache_size = 2GB
maintenance_work_mem = 256MB
max_connections = 4000
# vi /etc/sysctl.conf
kernel.sem = 50100 128256000 50100 2560
# sysctl -p
Restart the service:
$ pg_ctl -D /data/pg_data restart
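After the restart, WAL archiving can be verified by forcing a segment switch and checking the archive directory; a minimal check using the 9.6 function name (it was renamed pg_switch_wal in version 10):
$ psql -h 127.0.0.1 -d postgres -c "select pg_switch_xlog();"
$ ls /data/pg_data/pg_archive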
10. Create the uuid-ossp and pg_stat_statements extensions
Log in to the appropriate database:
postgres=# create extension "uuid-ossp";
CREATE EXTENSION
Verification:
postgres=# select uuid_generate_v4();
           uuid_generate_v4
--------------------------------------
 28cbfa1e-d659-4aa2-a0fd-95fc7ec0aa8b
(1 row)
postgres=# create extension pg_stat_statements;
Verification:
postgres=# select * from pg_stat_statements order by total_time desc limit 5;
11. Create a database and an access user, and grant privileges on the database
Primary server:
postgres=# create user testuser with password 'yourpassword';
CREATE ROLE
postgres=# create database test owner testuser;
CREATE DATABASE
postgres=# grant all privileges on database test to testuser;
GRANT
Log in to the database:
$ psql -U testuser -d test -W
12. Synchronize data
On the slave server, as the ordinary user:
bash-4.2$ pg_basebackup -h 192.168.125.33 -U repuser -D /data/pg_data -X stream -P
Password:
36413/36413 kB (100%), 1/1 tablespace
13. Modify the recovery.conf and postgresql.conf configuration files on the slave server
bash-4.2$ cp /usr/local/postgresql/share/pgsql/recovery.conf.sample /data/pg_data/recovery.conf
bash-4.2$ vi /data/pg_data/recovery.conf
Add the following settings. If the file already contains these parameters (including commented-out lines starting with the hash sign '#'), remove those lines first so they do not conflict.
standby_mode = on
primary_conninfo = 'host=192.168.125.33 port=5432 user=repuser password=yourpassword keepalives_idle=60'
recovery_target_timeline = 'latest'
bash-4.2$ vi /data/pg_data/postgresql.conf
Add the following configuration:
hot_standby = on
max_standby_streaming_delay = 30s
wal_receiver_status_interval = 10s
hot_standby_feedback = on
postgresql.conf parameter descriptions:
# allow PostgreSQL to accept connection requests from any IP
listen_addresses = '*'
# PostgreSQL introduced master-slave streaming replication in version 9.0: the slave database synchronizes data from the master database over a TCP stream.
# set the WAL level so that enough WAL is generated for a hot standby
wal_level = hot_standby
# maximum number of concurrent streaming-replication connections (WAL sender processes), i.e. how many standbys can stream from this server
max_wal_senders = 6
# PostgreSQL's pg_xlog holds the write-ahead log (WAL), which records database transaction information: changes are written to the log before the data is written to disk, and under certain conditions fsync() is triggered to flush the data to disk.
# set wal_keep_segments large enough on the master so that WAL segments are not recycled before they can be delivered to the standby, even when the master generates WAL quickly
wal_keep_segments = 10240
# maximum number of server connections
max_connections = 4000
# enable archive_mode
# when archive_mode is enabled, completed WAL segments are sent to archive storage via archive_command. Besides off (disabled), there are two modes: on and always. During normal operation there is no difference between them, but when set to always the WAL archiver is also enabled during archive recovery or standby mode, so all files restored from the archive or streamed over replication are archived (again). archive_mode and archive_command are separate variables, so archive_command can be changed without changing the archiving mode. This parameter can only be set at server start. archive_mode cannot be enabled when wal_level is set to minimal.
archive_mode = on
# save archived files to the /data/pg_data/pg_archive directory
archive_command = 'cp %p /data/pg_data/pg_archive/%f'
# allow the standby server to serve read-only queries while it applies WAL (hot standby)
hot_standby = on
# maximum delay before the standby cancels queries that conflict with the WAL it is receiving via streaming replication
max_standby_streaming_delay = 30s
# how often the standby reports its status to the master; status is also reported with each replicated change, so this is only the maximum interval
wal_receiver_status_interval = 10s
# whether the standby sends feedback about the queries it is currently executing to the master, which helps avoid query conflicts
hot_standby_feedback = on
# load the pg_stat_statements dynamic library at startup
shared_preload_libraries = 'pg_stat_statements'
# to track the time consumed by I/O, also turn on the following parameter
track_io_timing = on
# maximum length of a single tracked SQL statement; longer statements are truncated in the display
track_activity_query_size = 2048
# maximum number of statements tracked by the module
pg_stat_statements.max = 10000
# pg_stat_statements.track controls which statements are counted by the module. Specify top to track top-level statements (those issued directly by clients), all to also track nested statements (such as statements invoked inside functions), or none to disable statement statistics collection. The default value is top. Only superusers can change this setting.
pg_stat_statements.track = all
# pg_stat_statements.track_utility controls whether the module tracks utility commands, i.e. all commands other than SELECT, INSERT, UPDATE, and DELETE. The default value is on. Only superusers can change this setting.
pg_stat_statements.track_utility = off
# whether statement statistics are saved across server shutdowns. If set to off, statistics are not saved at shutdown and are not reloaded at server start. The default is on. This parameter can only be set in postgresql.conf or on the server command line.
pg_stat_statements.save = on
# amount of shared memory the database server uses for its shared buffer cache
shared_buffers = 3276MB
# work_mem was called sort_mem before PostgreSQL 8.0. When a sort would exceed work_mem, PostgreSQL spills it into temporary files, which slows the sort down, so increasing work_mem helps speed up sorting. It is usually set to 2-4% of the actual RAM, depending on the size of the result sets being sorted.
work_mem = 655MB
# the planner's estimate of the total cache (shared buffers plus the OS file cache) available to PostgreSQL
effective_cache_size = 2GB
# memory used only by maintenance operations such as CREATE INDEX and VACUUM; they run infrequently but consume considerable resources, so give them enough memory to finish quickly
maintenance_work_mem = 256MB
# vi /etc/sysctl.conf
kernel.sem = 50100 128256000 50100 2560
# sysctl -p
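Once the instance is running, the effective values can be checked from psql instead of re-reading the file; for example:
$ psql -h 127.0.0.1 -d postgres -c "show shared_buffers;"
$ psql -h 127.0.0.1 -d postgres -c "select name, setting from pg_settings where name in ('wal_level', 'max_wal_senders', 'archive_mode');"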
14. Start the service on the slave server
First modify the directory permissions:
# chmod 700 /data/pg_data
# su pguser
bash-4.2$ pg_ctl -D /data/pg_data start
15. Verification
Primary server:
bash-4.2$ psql -h 127.0.0.1 -d postgres
View the replication status:
postgres=# select client_addr,sync_state from pg_stat_replication;
  client_addr   | sync_state
----------------+------------
 192.168.125.34 | async
postgres=# create database test1;
CREATE DATABASE
View the databases:
postgres=# \l
Slave server:
# su pguser
bash-4.2$ psql -h 127.0.0.1 -d postgres
Check whether the test1 database has been synchronized:
postgres=# \l
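On the slave, the recovery state and replication lag can also be checked directly; a minimal check using the 9.6 function names:
$ psql -h 127.0.0.1 -d postgres -c "select pg_is_in_recovery(), now() - pg_last_xact_replay_timestamp() as replication_lag;"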
16. Install and configure the master-slave high-availability components
16.1. Install keepalived on both the master and slave servers
# yum install -y keepalived
16.2. Master server configuration
# cd /etc/keepalived
# vi keepalived.conf
! Configuration File for keepalived
global_defs {
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id pg
}
vrrp_script chk_pg {
    script "/etc/keepalived/scripts/pgsql_check.sh"
    interval 2
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 61
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_pg
    }
    virtual_ipaddress {
        192.168.125.200
    }
}
# mkdir log
# mkdir scripts
# vi scripts/pgsql_check.sh
#!/bin/bash
A=`ps -C postgres --no-headers | wc -l`
# determine where the vip floats
B=`ip a | grep 192.168.125.200 | wc -l`
# determine whether this node is a standby (recovery/startup process running)
C=`ps -ef | grep postgres | grep 'startup process' | wc -l`
# determine whether the standby is connected to the master (wal receiver running)
D=`ps -ef | grep postgres | grep 'receiver' | wc -l`
# determine whether the master is connected to the standby (wal sender running)
E=`ps -ef | grep postgres | grep 'sender' | wc -l`
# if postgresql is down, log a message and stop keepalived
if [ $A -eq 0 ]; then
    echo "`date "+%Y-%m-%d--%H:%M:%S"` postgresql stop so vip stop" >> /etc/keepalived/log/check_pg.log
    systemctl stop keepalived
else
    # the master is down and the vip has floated here: promote the standby so it can read and write
    if [ $B -eq 1 -a $C -eq 1 -a $D -eq 0 ]; then
        su - pguser -c "pg_ctl promote -D /data/pg_data"
        echo "`date "+%Y-%m-%d--%H:%M:%S"` standby promote" >> /etc/keepalived/log/check_pg.log
    fi
    # this node is the master but has lost contact with the standby
    if [ $B -eq 1 -a $C -eq 0 -a $D -eq 0 -a $E -eq 0 ]; then
        sleep 10
        echo "`date "+%Y-%m-%d--%H:%M:%S"` can't find standby" >> /etc/keepalived/log/check_pg.log
    fi
fi
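keepalived can only run the check script if it is executable, so after creating it, set the execute bit (path as referenced in the vrrp_script block above):
# chmod +x /etc/keepalived/scripts/pgsql_check.sh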
16.3. Slave server configuration
# cd /etc/keepalived
# vi keepalived.conf
! Configuration File for keepalived
global_defs {
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id pg
}
vrrp_script chk_pg {
    script "/etc/keepalived/scripts/pgsql_check.sh"
    interval 2
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 61
    priority 96
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_pg
    }
    virtual_ipaddress {
        192.168.125.200
    }
}
# mkdir log
# mkdir scripts
# vi scripts/pgsql_check.sh
#!/bin/bash
A=`ps -C postgres --no-headers | wc -l`
# determine where the vip floats
B=`ip a | grep 192.168.125.200 | wc -l`
# determine whether this node is a standby (recovery/startup process running)
C=`ps -ef | grep postgres | grep 'startup process' | wc -l`
# determine whether the standby is connected to the master (wal receiver running)
D=`ps -ef | grep postgres | grep 'receiver' | wc -l`
# determine whether the master is connected to the standby (wal sender running)
E=`ps -ef | grep postgres | grep 'sender' | wc -l`
# if postgresql is down, log a message and stop keepalived
if [ $A -eq 0 ]; then
    echo "`date "+%Y-%m-%d--%H:%M:%S"` postgresql stop so vip stop" >> /etc/keepalived/log/check_pg.log
    systemctl stop keepalived
else
    # the master is down and the vip has floated here: promote the standby so it can read and write
    if [ $B -eq 1 -a $C -eq 1 -a $D -eq 0 ]; then
        su - pguser -c "pg_ctl promote -D /data/pg_data"
        echo "`date "+%Y-%m-%d--%H:%M:%S"` standby promote" >> /etc/keepalived/log/check_pg.log
    fi
    # this node is the master but has lost contact with the standby
    if [ $B -eq 1 -a $C -eq 0 -a $D -eq 0 -a $E -eq 0 ]; then
        sleep 10
        echo "`date "+%Y-%m-%d--%H:%M:%S"` can't find standby" >> /etc/keepalived/log/check_pg.log
    fi
fi
16.4. Start the service on both the master and slave servers
# systemctl start keepalived.service
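After keepalived starts on both nodes, you can confirm that the virtual IP came up on the master (interface name as in the keepalived configuration above):
# systemctl status keepalived
# ip addr show eth0 | grep 192.168.125.200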
17. Access the database directly through the virtual IP 192.168.125.200
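For example, a client in one of the allowed segments can connect through the VIP with the user created earlier (password prompted):
$ psql -h 192.168.125.200 -U testuser -d test -W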
18. Master-slave switching
When the master database server goes down or the database service stops abnormally, the slave is automatically promoted to master and the virtual IP moves to it, so the failover from slave to master is automatic. Turning the original master into the new standby, however, must be done manually.
Rebuild the original master as the new standby.
First make a backup:
# cp -Rp pg_data pg_data.bak
Delete the old data files:
$ cd pg_data
$ rm -rf *
Synchronize the data from the new master:
$ pg_basebackup -h 192.168.125.34 -U repuser -D /data/pg_data -X stream -P
$ mv recovery.done recovery.conf
$ vi recovery.conf
primary_conninfo = 'host=192.168.125.33 port=5432 user=repuser password=yourpassword keepalives_idle=60'
-->
primary_conninfo = 'host=192.168.125.34 port=5432 user=repuser password=yourpassword keepalives_idle=60'
Start keepalived on this host:
# systemctl start keepalived.service
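Once the database on the rebuilt standby has been started again, a quick check that it is really running in recovery mode and streaming from the new master (9.6 function name):
$ psql -h 127.0.0.1 -d postgres -c "select pg_is_in_recovery();"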