Detailed explanation of MongoDB replication set


What is a replica set? A replica set keeps additional copies of the same data and synchronizes them across multiple servers, providing redundancy and increasing data availability; with it, a deployment can recover from hardware failures and service interruptions. Master-slave synchronization in a replica set works much like it does in MySQL: the primary node records every operation applied to it in its oplog, and the secondary nodes regularly poll the primary for these operations and replay them against their own copies of the data, which keeps the secondaries consistent with the primary.
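To see this mechanism first hand once the set built below is running, the newest oplog entry can be inspected from the mongo shell. A minimal sketch: local.oplog.rs is the standard capped collection holding the oplog, though the exact fields shown vary by version.

use local
db.oplog.rs.find().sort({$natural: -1}).limit(1).pretty()
// typical fields: "ts" (the timestamp secondaries poll against), "op" (operation
// type: i = insert, u = update, d = delete, n = no-op), "ns" (the namespace
// acted on) and "o" (the operation document itself)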

The advantages of replication sets are as follows:

(1) Makes data more secure

(2) High data availability (24x7)

(3) Disaster recovery

(4) Maintenance without downtime (such as backups, index rebuilds, failover)

(5) Read scaling (additional copies to read from)

(6) Replica sets are transparent to applications.

MongoDB replica set structure: a replica set requires at least two nodes. One of them is the primary (Primary), which handles client requests; the rest are secondaries (Secondary), which replicate the data on the primary. Common layouts are one primary with one secondary, or one primary with several secondaries. Clients write data on the primary and read data on the secondaries, and the primary and secondaries exchange data to keep it consistent. If one node fails, the remaining nodes immediately take over the workload without downtime. Reading from a secondary is shown in the sketch below.
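By default the mongo shell refuses reads on a secondary unless they are explicitly allowed. A minimal sketch for the 3.6-era shell used in this walkthrough (rs.slaveOk() is the legacy helper name, renamed rs.secondaryOk() in newer shells; the collection name is hypothetical):

true:SECONDARY> rs.slaveOk()    // allow reads on this secondary for this shell session
true:SECONDARY> db.getSiblingDB("test").users.find()    // "users" is a hypothetical example collection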

The characteristics of a replica set are: 1. An N-node cluster

2. Any node can become the primary node

3. All writes happen on the primary node

4. Automatic failover

5. Automatic recovery

Deploying a MongoDB replica set: on a CentOS 7 host, install MongoDB online with yum and create multiple instances. First configure a network YUM source, with baseurl (the download path) pointing to the yum repository provided on the MongoDB official website:

vim /etc/yum.repos.d/mongodb.repo

[mongodb-org]

name=MongoDB Repository

baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/    # the download path

gpgcheck=1    # verify rpm packages downloaded from this source

enabled=1    # enable this source

gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc

Reload the yum source and use the yum command to download and install mongodb

yum list

yum -y install mongodb-org
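As a quick check (not part of the original steps), confirm that a 3.6-series build was installed:

mongod --version    # should report db version v3.6.x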

Prepare four instances but start only three for now: one primary and two secondaries. The fourth instance is kept aside so a node can later be appended to the cluster (and removed again). Create the data file and log file storage paths for the extra instances and grant permissions:

[root@localhost ~]# mkdir -p /data/mongodb/mongodb{2,3,4}

[root@localhost ~]# mkdir /data/logs

[root@localhost ~]# touch /data/logs/mongodb{2,3,4}.log

[root@localhost ~]# chmod 777 /data/logs/mongodb*

[root@localhost ~]# ll /data/logs/

total 0

-rwxrwxrwx. 1 root root 0 Sep 15 22:31 mongodb2.log

-rwxrwxrwx. 1 root root 0 Sep 15 22:31 mongodb3.log

-rwxrwxrwx. 1 root root 0 Sep 15 22:31 mongodb4.log

Edit the configuration files of the four MongoDB instances. First edit /etc/mongod.conf, the configuration file of the default instance installed by yum: specify the listening IP (the default port is 27017) and enable the replication section, setting replSetName to a custom name (in this walkthrough the set is literally named "true").

[root@localhost ~]# vim /etc/mongod.conf

# mongod.conf

# for documentation of all options, see:

# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017  # default port
  bindIp: 0.0.0.0  # listen on all addresses

# security:

# operationProfiling:

replication:  # remove the leading "#" to enable this section
  replSetName: true  # the replica set name (a custom value; "true" here)
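Rather than editing each copy by hand as the next steps do, the three extra configuration files could also be generated in one go (a convenience sketch, assuming exactly the paths and ports used in this walkthrough; the original procedure edits each file manually):

for n in 2 3 4; do
  port=$((27016 + n))   # yields 27018, 27019 and 27020
  sed -e "s|/var/log/mongodb/mongod.log|/data/logs/mongodb${n}.log|" \
      -e "s|/var/lib/mongo|/data/mongodb/mongodb${n}|" \
      -e "s|port: 27017|port: ${port}|" \
      /etc/mongod.conf > /etc/mongod${n}.conf
done
# if each instance should also keep its own pid file, add a similar -e
# expression rewriting pidFilePath before starting the instances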

Copy the configuration file to the other instances, then set the port parameter to 27018 in mongod2.conf, 27019 in mongod3.conf, and 27020 in mongod4.conf. Also point the dbPath and log path parameters at the corresponding per-instance values.

cp /etc/mongod.conf /etc/mongod2.conf

cp /etc/mongod2.conf /etc/mongod3.conf

cp /etc/mongod2.conf /etc/mongod4.conf

Modify the configuration file of instance 2, /etc/mongod2.conf:

vim /etc/mongod2.conf

systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb2.log

storage:
  dbPath: /data/mongodb/mongodb2
  journal:
    enabled: true

net:
  port: 27018
  bindIp: 0.0.0.0  # listen on all addresses

# security:

# operationProfiling:

replication:
  replSetName: true

Modify the configuration file of instance 3, /etc/mongod3.conf:

vim /etc/mongod3.conf

systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb3.log

storage:
  dbPath: /data/mongodb/mongodb3
  journal:
    enabled: true

net:
  port: 27019
  bindIp: 0.0.0.0  # listen on all addresses

# security:

# operationProfiling:

replication:
  replSetName: true

Modify the configuration file of instance 4, /etc/mongod4.conf:

vim /etc/mongod4.conf

systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb4.log

storage:
  dbPath: /data/mongodb/mongodb4
  journal:
    enabled: true

net:
  port: 27020
  bindIp: 0.0.0.0  # listen on all addresses

# security:

# operationProfiling:

replication:
  replSetName: true
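Before starting anything, a quick sanity check (an added convenience, not in the original procedure) confirms that each file got its own port, data path and log path; grep prefixes each match with its file name, so a missed edit stands out:

grep -E "port:|dbPath:|path:" /etc/mongod.conf /etc/mongod2.conf /etc/mongod3.conf /etc/mongod4.conf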

Start each MongoDB instance:

[root@localhost ~]# mongod -f /etc/mongod.conf

About to fork child process, waiting until server is ready for connections.

Forked process: 93576

Child process started successfully, parent exiting

[root@localhost ~]# mongod -f /etc/mongod2.conf

About to fork child process, waiting until server is ready for connections.

Forked process: 93608

Child process started successfully, parent exiting

[root@localhost ~]# mongod -f /etc/mongod3.conf

About to fork child process, waiting until server is ready for connections.

Forked process: 93636

Child process started successfully, parent exiting

[root@localhost ~]# mongod -f /etc/mongod4.conf

About to fork child process, waiting until server is ready for connections.

Forked process: 93664

Child process started successfully, parent exiting

[root@localhost ~]# netstat -antp | grep mongod    // view the listening mongod processes

tcp    0    0 0.0.0.0:27019    0.0.0.0:*    LISTEN    93636/mongod

tcp    0    0 0.0.0.0:27020    0.0.0.0:*    LISTEN    93664/mongod

tcp    0    0 0.0.0.0:27017    0.0.0.0:*    LISTEN    93576/mongod

tcp    0    0 0.0.0.0:27018    0.0.0.0:*    LISTEN    93608/mongod
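All four instances can also be pinged directly (a hedged convenience check, not in the original text); each responsive server prints 1:

for p in 27017 27018 27019 27020; do
  mongo --quiet --port $p --eval 'db.runCommand({ping: 1}).ok'
done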

Configure a replication set with three nodes

[root@localhost ~]# mongo    // connect to one of the instances (the default one, on port 27017)

MongoDB shell version v3.6.7

Connecting to: mongodb://127.0.0.1:27017

MongoDB server version: 3.6.7

> rs.status()    // view the status of the replica set; the error shows it has not been initialized yet

{
    "info" : "run rs.initiate(...) if not yet done for the set",
    "ok" : 0,
    "errmsg" : "no replset config has been received",
    "code" : 94,
    "codeName" : "NotYetInitialized",
    "operationTime" : Timestamp(0, 0),
    "$clusterTime" : {
        "clusterTime" : Timestamp(0, 0),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

> cfg={"_id": "true", "members": [{"_id": 0, "host": "192.168.195.137:27017"}, {"_id": 1, "host": "192.168.195.137:27018"}, {"_id": 2, "host": "192.168.195.137:27019"}]}    // define the cfg initialization parameters; the _id must match the replSetName ("true")

{
    "_id" : "true",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.195.137:27017"
        },
        {
            "_id" : 1,
            "host" : "192.168.195.137:27018"
        },
        {
            "_id" : 2,
            "host" : "192.168.195.137:27019"
        }
    ]
}

> rs.initiate(cfg)    // initialize and start the replica set

{ "ok" : 1 }

true:PRIMARY> rs.status()    // view the status information of the replica set again

{
    "set" : "true",
    "date" : ISODate("2018-09-15T15:39:48.426Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1537025984, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1537025984, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1537025984, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1537025984, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.195.137:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",    // this node became the primary
            "uptime" : 1371,
            "optime" : {
                "ts" : Timestamp(1537025984, 1),
                "t" : NumberLong(1)
            }
            ......
        },
        {
            "_id" : 1,
            "name" : "192.168.195.137:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",    // this node is a secondary
            "uptime" : 18,
            "optime" : {
                "ts" : Timestamp(1537025984, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1537025984, 1),
                "t" : NumberLong(1)
            }
            ......
        },
        {
            "_id" : 2,
            "name" : "192.168.195.137:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",    // this node is a secondary
            "uptime" : 18,
            "optime" : {
                "ts" : Timestamp(1537025984, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1537025984, 1),
                "t" : NumberLong(1)
            }
            ......

Add a node

true:PRIMARY> rs.add("192.168.195.137:27020")

true:PRIMARY> rs.status()

......
        {
            "_id" : 3,
            "name" : "192.168.195.137:27020",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 8,
            "optime" : {
                "ts" : Timestamp(1537026818, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1537026818, 1),
                "t" : NumberLong(1)
            }
......

Remove a node

true:PRIMARY> rs.remove("192.168.195.137:27020")
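To verify the membership change, the current configuration can be listed with the standard rs.conf() helper (a small added check):

true:PRIMARY> rs.conf().members.forEach(function (m) { print(m._id + "  " + m.host) })
// after the removal only the three remaining host:port pairs should be printed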

Simulate automatic failover

[root@localhost ~]# mongod -f /etc/mongod.conf --shutdown    // shut down the primary node's service

Killing process with pid: 93576

[root@localhost ~]# mongo --port 27018    // connect to one of the secondary nodes

MongoDB shell version v3.6.7

Connecting to: mongodb://127.0.0.1:27018/

MongoDB server version: 3.6.7

true:PRIMARY> rs.status()    // view the replica set information

......
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.195.137:27017",    // the original primary's health value is now 0
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            }
            ......
        },
        {
            "_id" : 1,
            "name" : "192.168.195.137:27018",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",    // this node was switched to primary
            "uptime" : 2657,
            "optime" : {
                "ts" : Timestamp(1537027275, 1),
                "t" : NumberLong(2)
            }
            ......
        },
        {
            "_id" : 2,
            "name" : "192.168.195.137:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1312,
            "optime" : {
                "ts" : Timestamp(1537027275, 1),
                "t" : NumberLong(2)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1537027275, 1),
                "t" : NumberLong(2)
            }

Manually switch between primary and secondary

[root@localhost ~]# mongod -f /etc/mongod.conf    # restart the node instance on port 27017 that was just shut down

About to fork child process, waiting until server is ready for connections.

Forked process: 94723

Child process started successfully, parent exiting

[root@localhost ~]# mongo --port 27018    # connect to the current primary instance

MongoDB shell version v3.6.7

Connecting to: mongodb://127.0.0.1:27018/

MongoDB server version: 3.6.7

true:PRIMARY> rs.freeze(30)    // make this member ineligible for election for 30 seconds

true:PRIMARY> rs.stepDown(60, 30)    // step down as primary and stay a secondary for at least 60 seconds, waiting up to 30 seconds for a secondary to catch up with the oplog before stepping down

true:SECONDARY> rs.status()    // the prompt shows this instance has already been demoted to a secondary

......
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.195.137:27017",    // the node on port 27017 became the primary again
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 167,
            "optime" : {
                "ts" : Timestamp(1537027620, 1),
                "t" : NumberLong(3)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1537027620, 1),
                "t" : NumberLong(3)
            }
            ......
        },
        {
            "_id" : 1,
            "name" : "192.168.195.137:27018",    // the node on port 27018 became a secondary
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2997,
            "optime" : {
                "ts" : Timestamp(1537027620, 1),
                "t" : NumberLong(3)
            }
            ......
        },
        {
            "_id" : 2,
            "name" : "192.168.195.137:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1651,
            "optime" : {
                "ts" : Timestamp(1537027620, 1),
                "t" : NumberLong(3)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1537027620, 1),
                "t" : NumberLong(3)
            }
