Many people new to PostgreSQL don't know how repmgr cascade replication and PostgreSQL failover work. This article walks through both, step by step, and I hope it helps you solve this kind of problem.
First of all, we already have two machines under repmgr's management: as the figure shows, nodes 110 and 111 are already registered with repmgr.
We then set up another PostgreSQL machine, 112, and installed the repmgr software on it.
Be careful not to initialize the data: the data directory on the machine that repmgr will clone to must be empty.
repmgr -h 192.168.198.111 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone --upstream-node-id=2 --verbose
192.168.198.111 is a standby (slave) node, so the data is cloned directly from the standby rather than from the primary, which is what makes this a cascade.
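Before running the actual clone, it can be worth validating the connection and configuration first. A minimal sketch, assuming your repmgr version supports the --dry-run option for standby clone:
repmgr -h 192.168.198.111 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone --upstream-node-id=2 --dry-run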
Next, add the new machine's information to the cluster. The first thing to do here is to edit repmgr.conf on node 112.
For details on how to write this file, see the earlier article on installing repmgr; I won't repeat it here.
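For reference, a minimal repmgr.conf for the new node might look like the sketch below; the node name, IP address and paths are assumptions for this environment and should be adjusted to your own setup:
node_id=3
node_name='pg112'
conninfo='host=192.168.198.112 user=repmgr dbname=repmgr connect_timeout=2'
data_directory='/pgdata/data'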
repmgr -f /etc/repmgr.conf standby register --upstream-node-id=2
The --upstream-node-id=2 option in the command means that the initial copy of the data comes from the standby whose node_id is 2, and that subsequent replication also streams from that standby rather than from the primary.
After the command is executed, the newly added node's information can be found on the existing hosts.
We can verify it next.
select * from pg_stat_replication;
Execute the statement on 110 and 111.
On 110:
On 111:
The cascade replication is working as expected.
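You can also check the topology from repmgr itself. A minimal sketch, assuming the same /etc/repmgr.conf used above:
repmgr -f /etc/repmgr.conf cluster show
On a healthy cascade, the output should list node 112 with 111 as its upstream.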
Failover capability is an important measure of whether a database system is reliable.
PostgreSQL itself supports failover, and with repmgr we can drive it through a witness server.
1. First, detach node 112 from the current cluster.
Execute the following command directly on 112:
repmgr standby unregister -f /etc/repmgr.conf
This clears the node's registration information directly on 110 and 111, but it only clears the registration information; it does not stop replication itself, so you also need to break the replication connection between 111 and 112. (For how to stop replication, see the earlier articles on replication; one possible approach is sketched below.)
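A minimal sketch of breaking the replication between 111 and 112, assuming PostgreSQL 12 or later and that repmgr created a replication slot named repmgr_slot_3 on 111 (both assumptions; if replication slots are not in use, stopping the standby is enough):
# On 112: stop the standby
pg_ctl -D /pgdata/data stop -m fast
# On 111: drop the slot left behind by node 3, if it exists (slot name assumed)
psql -d repmgr -c "SELECT pg_drop_replication_slot('repmgr_slot_3');"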
Once replication on 112 is stopped, stop the database if it is still running and empty the data under the original /pgdata/data.
Reinitialize the database:
initdb -D /pgdata/data
Start the database:
pg_ctl -D /pgdata/data start
Then confirm that passwordless SSH login works between 112 and the other servers (110 and 111) for the OS account that starts the PostgreSQL database.
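A minimal sketch of setting up the passwordless login from 112, assuming the postgres OS user is the account in question and no key pair exists yet:
# Run as the postgres user on 112
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id postgres@192.168.198.110
ssh-copy-id postgres@192.168.198.111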
Configure the repmgr.conf file on 112, and modify pg_hba.conf on the primary and standby so that the repmgr account on the witness server can log in to both of them.
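The pg_hba.conf entry might look like the sketch below; the witness address and the md5 auth method are assumptions, so use whatever fits your environment:
# Allow the repmgr account on the witness (192.168.198.112) to connect to the repmgr database
host    repmgr    repmgr    192.168.198.112/32    md5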
repmgr -f /etc/repmgr.conf witness register -h 192.168.198.110 -d repmgr -U repmgr
Execute the above command to register 112 as the witness server.
Then check on 112 whether the relevant information is correct.
You can see from the figure that 112 has been registered as a witness server.
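You can also query repmgr's metadata directly to confirm the registration; a minimal sketch, run against the repmgr database:
select node_id, node_name, type, upstream_node_id from repmgr.nodes;
The row for node 112 should show the type witness.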
At this point the environment is basically built; what we still need is the ability to automatically promote the standby to primary when the primary goes down.
This requires two capabilities of repmgr:
1. Monitoring the status of the PostgreSQL service.
2. Based on the monitored status, triggering the script that promotes the standby to primary.
Here we monitor the whole cluster with the repmgrd daemon that ships with repmgr. First, repmgrd needs its settings configured in /etc/repmgr.conf.
The configuration options and related values are as follows:
failover=automatic
priority=100
connection_check_type=ping
#reconnect_attempts=6
#reconnect_interval=10
promote_command='repmgr standby promote -f /etc/repmgr.conf'
follow_command='repmgr standby follow -f /etc/repmgr.conf -W --upstream-node-id=%n'
After the configuration is complete, repmgrd can be run on node 111:
repmgrd -f /etc/repmgr.conf --verbose --monitoring-history
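To confirm that the daemon is actually watching the cluster, you can ask repmgr for its status. A sketch, assuming repmgr 5.x, where the subcommand is called service status (older 4.x releases call it daemon status):
repmgr -f /etc/repmgr.conf service status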
After starting the monitoring on node 111, we shut down the PostgreSQL service on node 110. The monitoring reacted immediately: after 6 failed attempts to reconnect to the primary, it began promoting the standby.
We can run commands on the standby and on the witness server to view the cluster status, and we can see that 111 has become the primary.
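To review what happened during the failover, repmgr also keeps a record of cluster events that can be listed from any registered node; a minimal sketch:
repmgr -f /etc/repmgr.conf cluster event
Events recorded for the promotion and for the standbys following the new primary should appear in the output.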
As failover and switchover software for promoting a standby to primary, repmgr is powerful and fully meets the high-availability needs of enterprises and Internet companies; combined with some scripts, it can achieve the same effect as MHA, or better.
PostgreSQL is well served by third-party software such as repmgr.
After reading the above, have you mastered the methods for PostgreSQL repmgr cascade replication and PostgreSQL failover? Thank you for reading!