Shulou (Shulou.com) 06/01 Report -- From: SLTechnology News&Howtos (Database), updated 2025-01-16
Overview of MongoDB replica set deployment and basic management
A replica set is an additional copy of the data, kept in sync across multiple servers. It provides redundancy, increases data availability, and can be used to recover from hardware failures and service interruptions.
Replica sets have the following advantages:

- Keep data safe and highly available (24/7)
- Disaster recovery
- No-downtime maintenance (such as backups, index rebuilds, failover)
- Read scaling (reads can be served by additional replicas)
- Replica sets are transparent to the application

How a replica set works
A MongoDB replica set requires at least two nodes. One of them is the primary (Primary), which handles client requests; the rest are secondaries (Secondary), which replicate the data from the primary.
Common node arrangements in MongoDB are one primary with one secondary, or one primary with multiple secondaries. The primary records all its operations in the oplog; the secondaries periodically poll the primary for new operations and then apply them to their own copy of the data, so the secondaries' data stays consistent with the primary's. As shown in the following figure:
The client writes data to the primary, and the primary exchanges data with the secondaries to keep the copies consistent. If one node fails, another node immediately takes over the workload, with no downtime.
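The oplog polling loop described above can be sketched with a toy in-memory model. The `Primary`/`Secondary` classes and their methods below are illustrative stand-ins invented for this sketch, not MongoDB's API:

```python
# Minimal sketch of oplog-based replication (illustrative model, not MongoDB's API).

class Primary:
    def __init__(self):
        self.data = {}
        self.oplog = []          # ordered list of operations, like MongoDB's oplog

    def write(self, key, value):
        self.data[key] = value
        self.oplog.append(("set", key, value))   # every change is recorded

class Secondary:
    def __init__(self, primary):
        self.primary = primary
        self.data = {}
        self.applied = 0         # position in the primary's oplog already replayed

    def poll(self):
        # Periodically pull new oplog entries from the primary and replay them
        for op, key, value in self.primary.oplog[self.applied:]:
            if op == "set":
                self.data[key] = value
        self.applied = len(self.primary.oplog)

primary = Primary()
secondary = Secondary(primary)
primary.write("x", 1)
primary.write("y", 2)
secondary.poll()                          # after polling, the copies are consistent
print(secondary.data == primary.data)     # True
```

Because the secondary only replays operations it has not yet applied, repeated polling is cheap and idempotent, which is the same reason MongoDB's oplog tailing scales to frequent polling intervals.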
MongoDB replica set deployment

Configure multiple instances
A previous post already explained how to run multiple MongoDB instances, so the details are not repeated here; four MongoDB instances are created in the same way. Before starting the four instances, edit each instance's configuration file and set the replSet parameter to the same value, which serves as the name of the replica set, as shown below:
[root@localhost ~]# vim /usr/local/mongodb/bin/mongodb1.conf
port=27017
dbpath=/data/mongodb1
logpath=/data/logs/mongodb/mongodb1.log
logappend=true
fork=true
maxConns=5000
replSet=kgcrs        # name of the replica set
Add the same replSet line to the end of the other three instances' configuration files, then start the four instance processes.
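The four configuration files could also be generated with a short script. A minimal sketch, assuming the port/dbpath/logpath layout from this article; it writes into a temporary directory rather than /usr/local/mongodb/bin so it is safe to run anywhere:

```python
# Generate the four instance config files described above (ports 27017-27020).
import os
import tempfile

confdir = tempfile.mkdtemp()         # stand-in for /usr/local/mongodb/bin
for i in range(1, 5):
    port = 27016 + i                 # 27017 .. 27020
    conf = "\n".join([
        f"port={port}",
        f"dbpath=/data/mongodb{i}",
        f"logpath=/data/logs/mongodb/mongodb{i}.log",
        "logappend=true",
        "fork=true",
        "maxConns=5000",
        "replSet=kgcrs",             # same replica-set name in every file
    ])
    with open(os.path.join(confdir, f"mongodb{i}.conf"), "w") as f:
        f.write(conf + "\n")

print(sorted(os.listdir(confdir)))
```

Each generated file differs only in port, dbpath, and logpath, while replSet=kgcrs is identical everywhere, which is the requirement for the instances to join the same replica set.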
[root@localhost ~]# export PATH=$PATH:/usr/local/mongodb/bin/    # reset the environment variable after a reboot
[root@localhost ~]# mongod -f /usr/local/mongodb/bin/mongodb1.conf    # start the instance process
2018-07-17T10:33:29.835+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
about to fork child process, waiting until server is ready for connections.
forked process: 3751
child process started successfully, parent exiting

Initialize the configuration and start the replica set
After starting the four MongoDB instances, configure and start the replica set as follows. First configure a replica set with three nodes (the last instance will be added later). PRIMARY indicates the primary node and SECONDARY a secondary node.
[root@localhost bin]# mongod -f /usr/local/mongodb/bin/mongodb2.conf --smallfiles
about to fork child process, waiting until server is ready for connections.
forked process: 20207
child process started successfully, parent exiting
[root@localhost bin]# mongod -f /usr/local/mongodb/bin/mongodb3.conf --smallfiles
about to fork child process, waiting until server is ready for connections.
forked process: 20230
child process started successfully, parent exiting
[root@localhost bin]# mongod -f /usr/local/mongodb/bin/mongodb4.conf --smallfiles
about to fork child process, waiting until server is ready for connections.
forked process: 20253
child process started successfully, parent exiting
All four instances have started successfully and the corresponding ports are open. Next, connect to the instance on port 27017.
[root@localhost bin]# mongo        # connects to the instance on port 27017 by default
> cfg={"_id":"kgcrs","members":[{"_id":0,"host":"192.168.100.201:27017"},{"_id":1,"host":"192.168.100.201:27018"},{"_id":2,"host":"192.168.100.201:27019"}]}    # add three members to the kgcrs replica set
> rs.initiate(cfg)        # initialize the replica set
{ "ok" : 1 }

After starting the replica set, you can view its full status with rs.status().
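The cfg document passed to rs.initiate() is plain JSON, so for larger sets it can be built programmatically. A small sketch, where `make_rs_config` is a hypothetical helper and the host addresses follow the article:

```python
# Build a replica-set initiation document of the same shape as the cfg above.

def make_rs_config(name, hosts):
    # Each member needs a unique numeric _id and a host:port string
    return {
        "_id": name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

cfg = make_rs_config("kgcrs", [
    "192.168.100.201:27017",
    "192.168.100.201:27018",
    "192.168.100.201:27019",
])
print(cfg["members"][2]["host"])   # 192.168.100.201:27019
```

The resulting dictionary could then be serialized with json.dumps() and pasted into the mongo shell, or passed to a driver's replSetInitiate command.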
Add and delete nodes
After the replica set is configured and started, nodes can easily be added and removed with the rs.add() and rs.remove() commands.
kgcrs:PRIMARY> rs.add("192.168.100.201:27020")
{ "ok" : 1 }        # the instance on port 27020 has been added
kgcrs:PRIMARY> rs.remove("192.168.100.201:27019")
{ "ok" : 1 }        # the instance on port 27019 has been removed

MongoDB replica set switchover
1. Automatic switchover on a simulated failure
You can stop the replica set's current primary with the kill command and watch the primary role automatically switch to another node. Here the instance on port 27017 is currently the primary.
[root@localhost bin]# netstat -ntap | grep mongo
tcp        0      0 0.0.0.0:27019    0.0.0.0:*    LISTEN    20230/mongod
tcp        0      0 0.0.0.0:27017    0.0.0.0:*    LISTEN    20055/mongod
tcp        0      0 0.0.0.0:27018    0.0.0.0:*    LISTEN    20207/mongod
[root@localhost bin]# kill -9 20055        # kill the process listening on port 27017
kgcrs:SECONDARY> rs.status()               # the primary is now the instance on port 27019
2. Switch between primary and secondary manually
First, connect to the primary instance; only the primary has permission to hand over the primary role.
[root@localhost bin]# mongo --port 27019
kgcrs:PRIMARY> rs.freeze(30)        # pause for 30s and do not take part in elections
{ "ok" : 1 }
kgcrs:PRIMARY> rs.stepDown(60,30)   # give up the primary role and remain a secondary for at least 60s, waiting up to 30s for a secondary to catch up on the oplog
2018-07-18T15:52:53.254+0800 I NETWORK [thread1] trying reconnect to 127.0.0.1:27019 failed
2018-07-18T15:52:53.256+0800 I NETWORK [thread1] reconnect 127.0.0.1:27019 ok
kgcrs:SECONDARY> rs.status()        # the primary role has switched to another instance
Election principles of a replica set
Replication is based on the operation log, the oplog, which is similar to the binary log in MySQL and records only the operations that change data. Replication is the process of synchronizing the primary's oplog to the secondaries and applying it there.
The principle of election
Node types are divided into standard nodes, passive nodes, and arbiter nodes.
(1) Only standard nodes can be elected as the active (primary) node, and they have voting rights. A passive node holds a complete copy of the data and can vote, but cannot become the active node. An arbiter node does not replicate data and cannot become the active node; it only votes.
(2) The difference between standard and passive nodes: a node with a high priority value is a standard node, and one with a low value is a passive node.
(3) The election rule is that the node with the most votes wins. priority is a priority value from 0 to 1000, equivalent to an extra 0 to 1000 votes. Election result: the node with the most votes wins; if the vote counts are equal, the node with the newest data wins.
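The rules above can be condensed into a small simulation. This is a deliberate simplification for illustration (MongoDB's real election protocol also involves terms, heartbeats, and majorities), and the `elect()` helper and its node fields are invented for this sketch:

```python
# Toy election: only data-bearing nodes with priority > 0 are eligible;
# effective votes are 1 + priority (0-1000 extra votes); ties go to the
# node with the freshest oplog position. Not MongoDB's actual protocol.

def elect(nodes):
    candidates = [n for n in nodes
                  if n["up"] and not n["arbiter"] and n["priority"] > 0]
    if not candidates:
        return None              # no standard node available -> no primary
    # Highest (votes, oplog freshness) pair wins
    return max(candidates, key=lambda n: (1 + n["priority"], n["optime"]))["host"]

nodes = [
    {"host": "27017", "priority": 100, "arbiter": False, "up": True,  "optime": 5},
    {"host": "27018", "priority": 100, "arbiter": False, "up": True,  "optime": 7},
    {"host": "27019", "priority": 0,   "arbiter": False, "up": True,  "optime": 7},  # passive
    {"host": "27020", "priority": 0,   "arbiter": True,  "up": True,  "optime": 0},  # arbiter
]
print(elect(nodes))              # 27018: equal priority, fresher oplog wins
for n in nodes[:2]:
    n["up"] = False              # both standard nodes fail
print(elect(nodes))              # None: passive and arbiter nodes cannot become primary
```

The second call mirrors the failure scenarios demonstrated later in the article: with both standard nodes down, neither the passive node nor the arbiter can be elected.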
Configure the priorities of the replica set
Reconfigure the 4-node MongoDB replica set with two standard nodes, one passive node, and one arbiter node. This must be configured on the primary.
kgcrs:PRIMARY> cfg={"_id":"kgcrs","members":[{"_id":0,"host":"192.168.58.131:27017","priority":100},{"_id":1,"host":"192.168.58.131:27018","priority":100},{"_id":2,"host":"192.168.58.131:27019","priority":0},{"_id":3,"host":"192.168.58.131:27020","arbiterOnly":true}]}    # set the properties of the four instances
kgcrs:PRIMARY> rs.reconfig(cfg)
{ "ok" : 1 }
You can see that the current instance of port 27018 is now the primary node.
Simulate the failure of the primary node
If the primary node fails, another standard node will be elected as the new primary node.
[root@promote]# mongod -f /usr/local/mongodb/bin/mongodb2.conf --shutdown
2018-07-22T09:20:02.706+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
killing process with pid: 9639
kgcrs:PRIMARY> rs.isMaster()
You can see that the instance of port 27017 has become the primary node.
Simulate the failure of all standard nodes
If all the standard nodes fail, neither the passive node nor the arbiter node can become the primary.
[root@promote]# mongod -f /usr/local/mongodb/bin/mongodb1.conf --shutdown
2018-07-22T09:24:08.290+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
killing process with pid: 9740
kgcrs:SECONDARY> rs.isMaster()