
Build a highly available MongoDB cluster (Replica set)


For MongoDB basics, please refer to https://blog.51cto.com/kaliarch/2044423.

I. Overview

1.1 MongoDB replica set

Broadly speaking, a MongoDB replica set is equivalent to a master-slave cluster with automatic failure recovery. The most obvious difference between a master-slave cluster and a replica set is that a replica set has no fixed "master" node: the whole cluster elects a primary node through an election algorithm. MongoDB officially no longer recommends master-slave mode, because if the master goes down, a slave cannot automatically take over as master and the data becomes inaccessible. In replica set mode, by contrast, the primary server handles all reads and writes for the replica set and the data is replicated to the other members; when the heartbeat mechanism detects that the primary node is down, the replica nodes elect a new primary server, and none of this requires any attention from the application servers.

1.2 Architecture Diagram

1.3 Principle of replication

Replication in MongoDB requires at least two nodes. One is the master node, which is responsible for handling client requests; the rest are slave nodes, which are responsible for replicating the data on the master node.

The common node arrangements in MongoDB are one master and one slave, or one master and multiple slaves.

The master node records all of its operations in the oplog; slave nodes periodically poll the master for these operations and then apply them to their own copy of the data, ensuring that the slaves' data stays consistent with the master's.
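A quick way to see this in action: on a replica set member, the oplog lives in the oplog.rs collection of the local database, and the most recent entry can be inspected from the mongo shell:

use local
db.oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()  // most recent replicated operation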

1.4 Replica set characteristics

N-node cluster

Any node can be used as the primary node

All writes are on the primary node

Automatic failover

Automatic recovery

1.5 Bully algorithm

If the primary node in a replica set goes down, the bully algorithm is used to elect a new primary. The main idea is that each member can declare itself the primary node and notify the other nodes; the other nodes can either accept this declaration or reject it and enter the contest for primary themselves. Only a node accepted by the other nodes can become the primary.

A node decides who should win based on some attribute. This attribute can be a static ID, or a freshness metric such as the most recent transaction ID (the node with the latest data wins).
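As a toy illustration of that idea (a sketch of the general bully principle, not MongoDB's actual implementation), in JavaScript:

// Pick a winner among reachable members: the freshest metric
// (e.g. last oplog timestamp) wins; ties break to the higher static id.
function electPrimary(members) {
  const alive = members.filter(m => m.healthy);
  return alive.reduce((best, m) =>
    m.lastOpTime > best.lastOpTime ||
    (m.lastOpTime === best.lastOpTime && m.id > best.id) ? m : best
  );
}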

Official description:

Get the timestamp of the last operation on each server node. Every MongoDB instance has an oplog mechanism that records local operations, which makes it easy to compare how well a node is synchronized with the primary server and can also be used for error recovery.

If most of the servers in the cluster are down, the nodes that remain alive stay in the secondary state, and no election takes place.

If the elected primary node, or the last synchronization time of all the slave nodes in the cluster, looks too old, the election stops and waits for manual intervention.

If none of the above applies, the server node with the latest last-operation timestamp is elected as the primary node (ensuring its data is the most up to date).

1.6 Replica Set members

There are three member roles in a Replica Set: Primary, Secondary, and Arbiter.

Primary: receives all write operations from clients; there is only one Primary in a Replica Set. If the Primary goes down, the Replica Set automatically elects a Secondary to become the new Primary. The Primary records all operations on its data sets in the oplog.

Secondary: a Secondary copies the oplog from the Primary and then applies the operations in the oplog to its own data sets. Replication between a Secondary and the Primary is asynchronous, which means the data on a Secondary may not be up to date. By default a Secondary does not accept reads or writes, but the client can be configured to read from a Secondary.

There are three uses for a specially configured Secondary (a configuration sketch follows the list):

1. Prevent it from becoming Primary in an election, so it serves purely as a data backup. This is achieved by setting its priority to 0.

2. Prevent applications from reading from it, by setting its priority to 0 and setting hidden to true. (A hidden member still copies the Primary's data, but it is not visible to client applications.)

3. Retain a historical mirror of the data for rollback. For example, if data is deleted by mistake, it can be recovered from a Delayed member of the Replica Set.
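A sketch of how these settings might be applied from the mongo shell, assuming the member at index 2 is the one being reconfigured and using an illustrative one-hour delay (in MongoDB 3.4, the delay field is named slaveDelay):

cfg = rs.conf()
cfg.members[2].priority = 0       // use 1: never eligible to become Primary
cfg.members[2].hidden = true      // use 2: invisible to client applications (requires priority 0)
cfg.members[2].slaveDelay = 3600  // use 3: apply oplog entries one hour late (requires priority 0)
rs.reconfig(cfg)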

Arbiter: an Arbiter does not maintain its own data sets; it only participates in voting to decide which Secondary should be promoted to Primary when the Primary goes down.

When the number of members in a Replica Set is even, an Arbiter should be added so that the vote on which member is promoted to Primary can reach a majority. An Arbiter should not be run on the Primary or Secondary hosts.

A Replica Set can have up to 12 members, but only 7 of them can vote in an election at the same time. If more than 12 members are needed, Master-Slave replication has to be used instead.

Deploying a Replica Set requires at least three members: either one Primary, one Secondary, and one Arbiter, or one Primary and two Secondaries.

II. Setup and deployment

2.1 Basic environment

Hostname     IP address      System
mongodb-1    172.20.6.10     CentOS release 6.9
mongodb-2    172.20.6.11     CentOS release 6.9
mongodb-3    172.20.6.12     CentOS release 6.9

2.2 Software installation

Install MongoDB on each of the three servers in turn:

cd /usr/local    # assumed working directory, so the symlink below matches the /usr/local/mongodb paths
wget -c https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel62-3.4.10.tgz
tar -zxvf mongodb-linux-x86_64-rhel62-3.4.10.tgz
ln -sv mongodb-linux-x86_64-rhel62-3.4.10 mongodb
mkdir /usr/local/mongodb/{conf,mongoData,mongoLog}
touch /usr/local/mongodb/mongoLog/mongodb.log
echo 'export PATH=$PATH:/usr/local/mongodb/bin' > /etc/profile.d/mongodb.sh
source /etc/profile.d/mongodb.sh

Define a configuration file on each node:
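A minimal YAML sketch of such a configuration file (an assumption consistent with the directories created during installation, the default port 27017, and the replica set name RS that appears in the status output below):

cat > /usr/local/mongodb/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  path: /usr/local/mongodb/mongoLog/mongodb.log
  logAppend: true
storage:
  dbPath: /usr/local/mongodb/mongoData
net:
  port: 27017
  bindIp: 0.0.0.0
processManagement:
  fork: true
replication:
  replSetName: RS
EOF

Start mongod with this configuration on all three nodes, then initiate the replica set from any one of them; a sketch using the hosts from the environment table:

mongod -f /usr/local/mongodb/conf/mongodb.conf    # run on each of the three nodes
mongo                                             # connect on one node
> rs.initiate({
    _id: "RS",
    members: [
      { _id: 0, host: "172.20.6.10:27017" },
      { _id: 1, host: "172.20.6.11:27017" },
      { _id: 2, host: "172.20.6.12:27017" }
    ]
  })
> rs.status()    # one member should become PRIMARY and the other two SECONDARY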

III. Test failover

To test automatic failover, stop the mongod service on mongodb-1 (the original primary) and check the replica set status from mongodb-2:

RS:PRIMARY> rs.status()
{
    "set" : "RS",
    "date" : ISODate("2017-11-26T14:35:03.422Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1511706901, 1), "t" : NumberLong(2) },
        "appliedOpTime" : { "ts" : Timestamp(1511706901, 1), "t" : NumberLong(2) },
        "durableOpTime" : { "ts" : Timestamp(1511706901, 1), "t" : NumberLong(2) }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "172.20.6.10:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",    # mongodb-1 has lost its connection
            "uptime" : 0,
            "optime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
            "optimeDurable" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2017-11-26T14:35:02.502Z"),
            "lastHeartbeatRecv" : ISODate("2017-11-26T14:32:20.434Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "Connection refused",
            "configVersion" : -1
        },
        {
            "_id" : 1,
            "name" : "172.20.6.11:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",                    # mongodb-2 is the new primary
            "uptime" : 1842,
            "optime" : { "ts" : Timestamp(1511706901, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2017-11-26T14:35:01Z"),
            "electionTime" : Timestamp(1511706750, 1),
            "electionDate" : ISODate("2017-11-26T14:32:30Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "172.20.6.12:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",                  # mongodb-3 is a secondary
            "uptime" : 1671,
            "optime" : { "ts" : Timestamp(1511706901, 1), "t" : NumberLong(2) },
            "optimeDurable" : { "ts" : Timestamp(1511706901, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2017-11-26T14:35:01Z"),
            "optimeDurableDate" : ISODate("2017-11-26T14:35:01Z"),
            "lastHeartbeat" : ISODate("2017-11-26T14:35:02.354Z"),
            "lastHeartbeatRecv" : ISODate("2017-11-26T14:35:02.730Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "172.20.6.11:27017",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}

Looking at mongodb-2's log, we can see that the heartbeat check to mongodb-1 lost its connection and a new primary node was elected.

At this point, insert a document on the new primary node, mongodb-2.
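For example (the database, collection, and document here are illustrative):

RS:PRIMARY> use testdb
RS:PRIMARY> db.test.insert({ name: "after-failover" })
WriteResult({ "nInserted" : 1 })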

Now start mongodb-1 again to check the cluster status and whether the data has been synchronized to mongodb-1 properly.

After starting mongodb-1's service and checking the cluster status, mongodb-1 has rejoined as a new secondary node.

RS:PRIMARY> rs.status()
{
    "set" : "RS",
    "date" : ISODate("2017-11-27T02:13:41.683Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1511748812, 1), "t" : NumberLong(2) },
        "appliedOpTime" : { "ts" : Timestamp(1511748812, 1), "t" : NumberLong(2) },
        "durableOpTime" : { "ts" : Timestamp(1511748812, 1), "t" : NumberLong(2) }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "172.20.6.10:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",                  # mongodb-1 is now a secondary
            "uptime" : 1945,
            "optime" : { "ts" : Timestamp(1511748812, 1), "t" : NumberLong(2) },
            "optimeDurable" : { "ts" : Timestamp(1511748812, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2017-11-27T02:13:32Z"),
            "optimeDurableDate" : ISODate("2017-11-27T02:13:32Z"),
            "lastHeartbeat" : ISODate("2017-11-27T02:13:41.373Z"),
            "lastHeartbeatRecv" : ISODate("2017-11-27T02:13:40.854Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "172.20.6.12:27017",
            "configVersion" : 1
        },
        {
            "_id" : 1,
            "name" : "172.20.6.11:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",                    # mongodb-2 is the primary
            "uptime" : 43760,
            "optime" : { "ts" : Timestamp(1511748812, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2017-11-27T02:13:32Z"),
            "electionTime" : Timestamp(1511706750, 1),
            "electionDate" : ISODate("2017-11-26T14:32:30Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "172.20.6.12:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",                  # mongodb-3 is a secondary
            "uptime" : 43589,
            "optime" : { "ts" : Timestamp(1511748812, 1), "t" : NumberLong(2) },
            "optimeDurable" : { "ts" : Timestamp(1511748812, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2017-11-27T02:13:32Z"),
            "optimeDurableDate" : ISODate("2017-11-27T02:13:32Z"),
            "lastHeartbeat" : ISODate("2017-11-27T02:13:41.220Z"),
            "lastHeartbeatRecv" : ISODate("2017-11-27T02:13:41.209Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "172.20.6.11:27017",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}

Check on mongodb-1 that the data has been synchronized correctly.
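One way to verify from the mongo shell on mongodb-1 (rs.slaveOk() permits reads on a secondary in this MongoDB version; the database and collection follow the illustrative insert above):

RS:SECONDARY> rs.slaveOk()
RS:SECONDARY> use testdb
RS:SECONDARY> db.test.find()
{ "_id" : ObjectId("..."), "name" : "after-failover" }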

IV. Other

If the replication load on the primary server is a concern, one member can be made an arbiter node. An arbiter does not store data and is only responsible for voting during failover, which reduces the pressure of data replication.

Remove a node:

rs.remove("172.20.6.12:27017")    # remove a node

Add a node:

rs.add("172.20.6.12:27017")       # add a node
rs.addArb("172.20.6.12:27017")    # add an arbiter node

The new member then appears in rs.status() as:

        {
            "_id" : 2,
            "name" : "172.20.6.12:27017",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",    # arbiter node
            "uptime" : 4,
            "lastHeartbeat" : ISODate("2017-11-27T02:35:01.634Z"),
            "lastHeartbeatRecv" : ISODate("2017-11-27T02:35:00.637Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "172.20.6.11:27017",
            "configVersion" : 9
        }
