
Setting up MongoDB Replica Sets (Replication Sets)


MongoDB offers two approaches to high availability:

Master-Slave replication: synchronization is achieved by starting one instance with the --master parameter and the other with the --slave and --source parameters. This approach is no longer recommended in current versions of MongoDB.

Replica Sets: introduced in MongoDB 1.6, replica sets are more capable than the older replication, adding automatic failover and automatic recovery of member nodes. Data is kept fully consistent across members, which greatly reduces maintenance effort. Auto-sharding explicitly does not support replication pairs and recommends replica sets instead; replica set failover is fully automatic.

A replica set behaves much like a cluster: if one node fails, the other nodes take over immediately, with no downtime.

192.168.110.131 (node1)

192.168.110.132 (node2)

192.168.110.133 (node3)

Official documentation:

http://docs.mongoing.com/manual-zh/

Deploying a replica set:

http://docs.mongoing.com/manual-zh/tutorial/deploy-replica-set.html

1. MongoDB installation

[root@node1 ~]# vim /etc/yum.repos.d/Mongodb.repo
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc

[root@node1 ~]# yum install -y mongodb-org
[root@node1 ~]# service mongod start
Starting mongod: [OK]
[root@node1 ~]# ps aux | grep mong
mongod 1361 5.7 14.8 351180 35104 ? Sl 01:26 0:01 /usr/bin/mongod -f /etc/mongod.conf

Change the data storage directory:

[root@node1 ~]# mkdir -p /mongodb/data
[root@node1 ~]# chown -R mongod:mongod /mongodb/
[root@node1 ~]# ll /mongodb/
total 4
drwxr-xr-x 2 mongod mongod 4096 May 18 02:04 data

[root@node1 ~]# grep -v "^#" /etc/mongod.conf | grep -v "^$"

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
storage:
  dbPath: /mongodb/data
  journal:
    enabled: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
net:
  port: 27017
  bindIp: 0.0.0.0  # listen on all interfaces (the stock config binds 127.0.0.1, i.e. localhost only)

[root@node1 ~]# service mongod start
Starting mongod: [OK]

node2 and node3 are configured in the same way.
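Before moving on, it is worth a quick check that mongod picked up the new dbPath and is listening on all interfaces; a minimal sanity check (assuming net-tools/netstat is installed on the host) could be:

[root@node1 ~]# ls /mongodb/data                  # data files should now be created here
[root@node1 ~]# netstat -tnlp | grep 27017        # mongod should be listening on 0.0.0.0:27017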

2. Configure Replication Sets

The parameters involved:

--oplogSize: size of the oplog (operation log) file
--dbpath: data file path
--logpath: log file path
--port: port number; the default is 27017, which is used here
--replSet: name of the replica set; every node in the same replica set must use the same set name (test in this example)
--replSet test/<host:port,...>: the set name may be followed by the IPs and ports of other member nodes
--maxConns: maximum number of connections
--fork: run in the background
--logappend: append to the existing log file on restart instead of overwriting it
--keyFile: shared key file used for internal authentication between members of the same replica set

Be sure to set oplogSize when starting the node; otherwise, on 64-bit systems MongoDB allocates a rather large oplog by default, typically about 5% of free disk space. Set a reasonable value for your situation.
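For reference, a start-up command using these flags might look like the sketch below (the paths and set name are illustrative; this tutorial sets the same options through the config file instead):

mongod --dbpath /mongodb/data --logpath /var/log/mongodb/mongod.log --logappend \
       --port 27017 --oplogSize 1024 --replSet test --maxConns 100 --fork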

The corresponding settings in the v3.4.4 configuration file:

[root@node1 ~]# vim /etc/mongod.conf
replication:
  oplogSizeMB: 1024
  replSetName: rs0

Deploy a replica set with keyfile access control:

openssl rand -base64 756 > <path-to-keyfile>
chmod 400 <path-to-keyfile>

Configuration File

If using a configuration file, set the security.keyFile option to the keyfile's path, and the replication.replSetName option to the replica set name:

security:
  keyFile: <path-to-keyfile>
replication:
  replSetName: <replica-set-name>

Command Line

If using command line options, start mongod with the --keyFile and --replSet parameters:

mongod --keyFile <path-to-keyfile> --replSet <replica-set-name>

Configure the replica set with a key file:

[root@node1 ~]# openssl rand -base64 756 > /mongodb/mongokey
[root@node1 ~]# cat /mongodb/mongokey
GxpcgjyFj2qE8b9TB/0XbdRVYH9VDb55NY03AHwxCFU58MUjJMeez844i1gaUo/t
...

[root@node1 ~]# chmod 400 /mongodb/mongokey
[root@node1 ~]# chown mongod:mongod /mongodb/mongokey
[root@node1 ~]# ll /mongodb/
total 8
drwxr-xr-x 4 mongod mongod 4096 May 19 18:39 data
-r-------- 1 mongod mongod 1024 May 19 18:29 mongokey

[root@node1 ~]# vim /etc/mongod.conf
#security:
security:
  keyFile: /mongodb/mongokey

#operationProfiling:

#replication:
replication:
  oplogSizeMB: 1024
  replSetName: rs0

[root@node1 ~]# service mongod restart
Stopping mongod: [OK]
Starting mongod: [OK]

[root@node1 ~]# iptables -I INPUT 4 -m state --state NEW -p tcp --dport 27017 -j ACCEPT
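Note that the iptables rule above does not survive a reboot; on CentOS 6 it can typically be made persistent by saving the running rules (a sketch, assuming the iptables init service is in use):

[root@node1 ~]# service iptables save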

Copy the hosts file:

[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/hosts root@node2.pancou.com:/mongodb/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/hosts root@node3.pancou.com:/mongodb/

Copy the key file:

[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /mongodb/mongokey root@node2.pancou.com:/mongodb/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /mongodb/mongokey root@node3.pancou.com:/mongodb/

Copy the configuration file:

[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/mongod.conf root@node2.pancou.com:/etc/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/mongod.conf root@node3.pancou.com:/etc/

Note: rsync and openssh-clients must be installed on both ends.

[root@node1 ~]# mongo
> help
        db.help()                    help on db methods
        db.mycoll.help()             help on collection methods
        sh.help()                    sharding helpers
        rs.help()                    replica set helpers
        ...
> rs.help()
        rs.status()                  { replSetGetStatus : 1 } checks repl set status
        rs.initiate()                { replSetInitiate : null } initiates set with default settings
        rs.initiate(cfg)             { replSetInitiate : cfg } initiates set with configuration cfg
        rs.conf()                    get the current configuration object from local.system.replset
        ...

> rs.status()
{
  "info" : "run rs.initiate(...) if not yet done for the set",
  "ok" : 0,
  "errmsg" : "no replset config has been received",
  "code" : 94,
  "codeName" : "NotYetInitialized"
}
> rs.initiate()
{
  "info2" : "no configuration specified. Using a default configuration for the set",
  "me" : "node1.pancou.com:27017",
  "ok" : 1
}
rs0:OTHER>

rs0:PRIMARY> rs.status()
{
  "set" : "rs0",
  "date" : ISODate("2017-05-18T17:00:49.868Z"),
  "myState" : 1,
  "term" : NumberLong(1),
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
    "lastCommittedOpTime" : {
      "ts" : Timestamp(1495126845, 1),
      "t" : NumberLong(1)
    },
    "appliedOpTime" : {
      "ts" : Timestamp(1495126845, 1),
      "t" : NumberLong(1)
    },
    "durableOpTime" : {
      "ts" : Timestamp(1495126845, 1),
      "t" : NumberLong(1)
    }
  },
  "members" : [
    {
      "_id" : 0,
      "name" : "node1.pancou.com:27017",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 1239,
      "optime" : {
        "ts" : Timestamp(1495126845, 1),
        "t" : NumberLong(1)
      },
      "optimeDate" : ISODate("2017-05-18T17:00:45Z"),
      "infoMessage" : "could not find member to sync from",
      "electionTime" : Timestamp(1495126824, 2),
      "electionDate" : ISODate("2017-05-18T17:00:24Z"),
      "configVersion" : 1,
      "self" : true
    }
  ],
  "ok" : 1
}

rs0:PRIMARY> rs.add("node2.pancou.com")
{ "ok" : 1 }
rs0:PRIMARY> rs.add("node3.pancou.com")
{ "ok" : 1 }

rs0:PRIMARY> rs.status()
{
  "set" : "rs0",
  "date" : ISODate("2017-05-18T17:08:47.724Z"),
  "myState" : 1,
  "term" : NumberLong(1),
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
    "lastCommittedOpTime" : {
      "ts" : Timestamp(1495127325, 1),
      "t" : NumberLong(1)
    },
    "appliedOpTime" : {
      "ts" : Timestamp(1495127325, 1),
      "t" : NumberLong(1)
    },
    "durableOpTime" : {
      "ts" : Timestamp(1495127325, 1),
      "t" : NumberLong(1)
    }
  },
  "members" : [
    {
      "_id" : 0,
      "name" : "node1.pancou.com:27017",
      "health" : 1,             // 1 means the member is healthy
      "state" : 1,              // 1 means PRIMARY, 2 means SECONDARY
      "stateStr" : "PRIMARY",   // this machine is the primary
      "uptime" : 1717,
      "optime" : {
        "ts" : Timestamp(1495127325, 1),
        "t" : NumberLong(1)
      },
      "optimeDate" : ISODate("2017-05-18T17:08:45Z"),
      "electionTime" : Timestamp(1495126824, 2),
      "electionDate" : ISODate("2017-05-18T17:00:24Z"),
      "configVersion" : 3,
      "self" : true
    },
    {
      "_id" : 1,
      "name" : "node2.pancou.com:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 64,
      "optime" : {
        "ts" : Timestamp(1495127325, 1),
        "t" : NumberLong(1)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1495127325, 1),
        "t" : NumberLong(1)
      },
      "optimeDate" : ISODate("2017-05-18T17:08:45Z"),
      "optimeDurableDate" : ISODate("2017-05-18T17:08:45Z"),
      "lastHeartbeat" : ISODate("2017-05-18T17:08:46.106Z"),
      "lastHeartbeatRecv" : ISODate("2017-05-18T17:08:47.141Z"),
      "pingMs" : NumberLong(0),
      "syncingTo" : "node1.pancou.com:27017",
      "configVersion" : 3
    },
    {
      "_id" : 2,
      "name" : "node3.pancou.com:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 55,
      "optime" : {
        "ts" : Timestamp(1495127325, 1),
        "t" : NumberLong(1)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1495127325, 1),
        "t" : NumberLong(1)
      },
      "optimeDate" : ISODate("2017-05-18T17:08:45Z"),
      "optimeDurableDate" : ISODate("2017-05-18T17:08:45Z"),
      "lastHeartbeat" : ISODate("2017-05-18T17:08:46.195Z"),
      "lastHeartbeatRecv" : ISODate("2017-05-18T17:08:46.924Z"),
      "pingMs" : NumberLong(0),
      "syncingTo" : "node2.pancou.com:27017",
      "configVersion" : 3
    }
  ],
  "ok" : 1
}

rs0:PRIMARY> db.isMaster()
{
  "hosts" : [
    "node1.pancou.com:27017",
    "node2.pancou.com:27017",
    "node3.pancou.com:27017"
  ],
  "setName" : "rs0",
  "setVersion" : 3,
  "ismaster" : true,
  "secondary" : false,
  "primary" : "node1.pancou.com:27017",
  "me" : "node1.pancou.com:27017",
  "electionId" : ObjectId("7fffffff0000000000000001"),
  "lastWrite" : {
    "opTime" : {
      "ts" : Timestamp(1495127705, 1),
      "t" : NumberLong(1)
    },
    "lastWriteDate" : ISODate("2017-05-18T17:15:05Z")
  },
  "maxBsonObjectSize" : 16777216,
  "maxMessageSizeBytes" : 48000000,
  "maxWriteBatchSize" : 1000,
  "localTime" : ISODate("2017-05-18T17:15:11.146Z"),
  "maxWireVersion" : 5,
  "minWireVersion" : 0,
  "readOnly" : false,
  "ok" : 1
}

rs0:PRIMARY> use testdb
rs0:PRIMARY> show collections
testcoll
rs0:PRIMARY> db.testcoll.find()
{ "_id" : ObjectId("591dd9f965cc255a5373aefa"), "name" : "tom", "age" : 25 }

Check from the secondaries:

node2:

rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin   0.000GB
local   0.000GB
testdb  0.000GB
rs0:SECONDARY> use testdb
switched to db testdb
rs0:SECONDARY> show collections
testcoll
rs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("591dd9f965cc255a5373aefa"), "name" : "tom", "age" : 25 }
rs0:SECONDARY>

node3:

rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin   0.000GB
local   0.000GB
testdb  0.000GB
rs0:SECONDARY> use testdb
switched to db testdb
rs0:SECONDARY> show collections
testcoll
rs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("591dd9f965cc255a5373aefa"), "name" : "tom", "age" : 25 }
rs0:SECONDARY>
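Applications normally connect with a replica-set connection string rather than to a single member, so the driver discovers the topology and follows failovers automatically. A sketch with this lab's hostnames (the database name is illustrative):

mongodb://node1.pancou.com:27017,node2.pancou.com:27017,node3.pancou.com:27017/testdb?replicaSet=rs0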

Replica set operation log (oplog):

rs0:PRIMARY> use local
switched to db local
rs0:PRIMARY> show collections
me
oplog.rs
replset.election
replset.minvalid
startup_log
system.replset
rs0:PRIMARY> db.oplog.rs.find()
{ "ts" : Timestamp(1495126824, 1), "h" : NumberLong("3056083863196084673"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1495126825, 1), "t" : NumberLong(1), "h" : NumberLong("7195178065440751511"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "new primary" } }
{ "ts" : Timestamp(1495126835, 1), "t" : NumberLong(1), "h" : NumberLong("5723995478292318850"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "periodic noop" } }
{ "ts" : Timestamp(1495126845, 1), "t" : NumberLong(1), "h" : NumberLong("-3772304067699003381"), "v" : 2, "op" : "n", "ns" : "", "o" : ...

3. View configuration information

rs0:PRIMARY> db.printReplicationInfo()
configured oplog size:   1024MB
log length start to end: 2541secs (0.71hrs)
oplog first event time:  Fri May 19 2017 01:00:24 GMT+0800 (CST)
oplog last event time:   Fri May 19 2017 01:42:45 GMT+0800 (CST)
now:                     Fri May 19 2017 01:42:48 GMT+0800 (CST)
rs0:PRIMARY>

db.oplog.rs.find(): view the operations recorded in the replica set's oplog.
db.printReplicationInfo(): view basic information about the oplog, such as its size and the time span it covers.
db.printSlaveReplicationInfo(): view the replication lag of every secondary.

rs0:PRIMARY> db.printSlaveReplicationInfo()
source: node2.pancou.com:27017
        syncedTo: Fri May 19 2017 01:47:15 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary
source: node3.pancou.com:27017
        syncedTo: Fri May 19 2017 01:47:15 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary

db.system.replset.find(): view the replica set configuration information:

rs0:PRIMARY> db.system.replset.find()
{ "_id" : "rs0", "version" : 3, "protocolVersion" : NumberLong(1), "members" : [ { "_id" : 0, "host" : "node1.pancou.com:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 }, { "_id" : 1, "host" : "node2.pancou.com:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 }, { "_id" : 2, "host" : "node3.pancou.com:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 } ], "settings" : { "chainingAllowed" : true, "heartbeatIntervalMillis" : 2000, "heartbeatTimeoutSecs" : 10, "electionTimeoutMillis" : 10000, "catchUpTimeoutMillis" : 2000, "getLastErrorModes" : { }, "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 }, "replicaSetId" : ObjectId("591dd3284fc6957e660dc933") } }

rs0:PRIMARY> db.system.replset.find().forEach(printjson) gives more readable output.

4. Master-slave switchover:

1. Freeze node3 for 30 seconds (during this time it will not seek election):

rs0:SECONDARY> rs.freeze(30)
{ "ok" : 1 }

2. Step down the PRIMARY on node1 (it is not eligible for re-election for 30 seconds):

rs0:PRIMARY> rs.stepDown(30)
2017-05-19T02:09:27.945+0800 E QUERY    [thread1] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host '127.0.0.1:27017':
DB.prototype.runCommand@src/mongo/shell/db.js:132:1
DB.prototype.adminCommand@src/mongo/shell/db.js:150:16
rs.stepDown@src/mongo/shell/utils.js:1261:12
@(shell):1:1
2017-05-19T02:09:27.947+0800 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2017-05-19T02:09:27.949+0800 I NETWORK  [thread1] reconnect 127.0.0.1:27017 (127.0.0.1) ok

The network error is expected: stepDown closes all client connections, and the shell simply reconnects. After the step-down, node1 shows as a secondary:

rs0:SECONDARY> rs.status()
{
  "set" : "rs0",
  "date" : ISODate("2017-05-18T18:12:09.732Z"),
  "myState" : 2,
  "term" : NumberLong(2),
  "syncingTo" : "node2.pancou.com:27017",
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
    "lastCommittedOpTime" : {
      "ts" : Timestamp(1495131128, 1),
      "t" : NumberLong(2)
    },
    "appliedOpTime" : {
      "ts" : Timestamp(1495131128, 1),
      "t" : NumberLong(2)
    },
    "durableOpTime" : {
      "ts" : Timestamp(1495131128, 1),
      "t" : NumberLong(2)
    }
  },
  "members" : [
    {
      "_id" : 0,
      "name" : "node1.pancou.com:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 5519,
      "optime" : {
        "ts" : Timestamp(1495131128, 1),
        "t" : NumberLong(2)
      },
      "optimeDate" : ISODate("2017-05-18T18:12:08Z"),
      "syncingTo" : "node2.pancou.com:27017",
      "configVersion" : 3,
      "self" : true
    },
    {
      "_id" : 1,
      "name" : "node2.pancou.com:27017",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 3866,
      "optime" : {
        "ts" : Timestamp(1495131118, 1),
        "t" : NumberLong(2)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1495131118, 1),
        "t" : NumberLong(2)
      },
      "optimeDate" : ISODate("2017-05-18T18:11:58Z"),
      "optimeDurableDate" : ISODate("2017-05-18T18:11:58Z"),
      "lastHeartbeat" : ISODate("2017-05-18T18:12:08.333Z"),
      "lastHeartbeatRecv" : ISODate("2017-05-18T18:12:08.196Z"),
      "pingMs" : NumberLong(0),
      "electionTime" : Timestamp(1495130977, 1),
      "electionDate" : ISODate("2017-05-18T18:09:37Z"),
      "configVersion" : 3
    },
    {
      "_id" : 2,
      "name" : "node3.pancou.com:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 3857,
      "optime" : {
        "ts" : Timestamp(1495131118, 1),
        "t" : NumberLong(2)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1495131118, 1),
        "t" : NumberLong(2)
      },
      "optimeDate" : ISODate("2017-05-18T18:11:58Z"),
      "optimeDurableDate" : ISODate("2017-05-18T18:11:58Z"),
      "lastHeartbeat" : ISODate("2017-05-18T18:12:08.486Z"),
      "lastHeartbeatRecv" : ISODate("2017-05-18T18:12:08.116Z"),
      "pingMs" : NumberLong(0),
      "syncingTo" : "node2.pancou.com:27017",
      "configVersion" : 3
    }
  ],
  "ok" : 1
}
rs0:SECONDARY>
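Besides a manual rs.stepDown(), failover can also be exercised by actually stopping mongod on whichever node is currently the primary and watching the remaining members hold an election; a rough sketch:

[root@node2 ~]# service mongod stop        # stop the current primary (node2 at this point)
rs0:SECONDARY> rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr) })   # run on a surviving member; one of them should become PRIMARY
[root@node2 ~]# service mongod start       # the old primary rejoins as a secondary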

5. Adding and removing nodes

1. Add a node

Adding a node relies entirely on the oplog for data synchronization: the operations recorded in the oplog are replayed in full on the newly added node to bring it up to date.

Based on the 3-node replica set above, configure and start a new node (node4) and add it to the current replica set.

[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/hosts root@node2.pancou.com:/etc/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /mongodb/mongokey root@node4.pancou.com:/mongodb/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/mongod.conf root@node4.pancou.com:/etc/
[root@node4 ~]# iptables -I INPUT 4 -m state --state NEW -p tcp --dport 27017 -j ACCEPT

On the primary, add the new node:

rs0:PRIMARY> rs.add("node4.pancou.com")
{ "ok" : 1 }

rs0:PRIMARY> rs.status()
{
  "set" : "rs0",
  "date" : ISODate("2017-05-19T12:12:57.697Z"),
  "myState" : 1,
  "term" : NumberLong(8),
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
    "lastCommittedOpTime" : {
      "ts" : Timestamp(1495195971, 1),
      "t" : NumberLong(8)
    },
    "appliedOpTime" : {
      "ts" : Timestamp(1495195971, 1),
      "t" : NumberLong(8)
    },
    "durableOpTime" : {
      "ts" : Timestamp(1495195971, 1),
      "t" : NumberLong(8)
    }
  },
  "members" : [
    {
      "_id" : 0,
      "name" : "node1.pancou.com:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 159,
      "optime" : {
        "ts" : Timestamp(1495195971, 1),
        "t" : NumberLong(8)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1495195971, 1),
        "t" : NumberLong(8)
      },
      "optimeDate" : ISODate("2017-05-19T12:12:51Z"),
      "optimeDurableDate" : ISODate("2017-05-19T12:12:51Z"),
      "lastHeartbeat" : ISODate("2017-05-19T12:12:56.111Z"),
      "lastHeartbeatRecv" : ISODate("2017-05-19T12:12:57.101Z"),
      "pingMs" : NumberLong(0),
      "syncingTo" : "node3.pancou.com:27017",
      "configVersion" : 4
    },
    {
      "_id" : 1,
      "name" : "node2.pancou.com:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 189,
      "optime" : {
        "ts" : Timestamp(1495195971, 1),
        "t" : NumberLong(8)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1495195971, 1),
        "t" : NumberLong(8)
      },
      "optimeDate" : ISODate("2017-05-19T12:12:51Z"),
      "optimeDurableDate" : ISODate("2017-05-19T12:12:51Z"),
      "lastHeartbeat" : ISODate("2017-05-19T12:12:56.111Z"),
      "lastHeartbeatRecv" : ISODate("2017-05-19T12:12:57.103Z"),
      "pingMs" : NumberLong(0),
      "syncingTo" : "node3.pancou.com:27017",
      "configVersion" : 4
    },
    {
      "_id" : 2,
      "name" : "node3.pancou.com:27017",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 191,
      "optime" : {
        "ts" : Timestamp(1495195971, 1),
        "t" : NumberLong(8)
      },
      "optimeDate" : ISODate("2017-05-19T12:12:51Z"),
      "electionTime" : Timestamp(1495195800, 1),
      "electionDate" : ISODate("2017-05-19T12:10:00Z"),
      "configVersion" : 4,
      "self" : true
    },
    {
      "_id" : 3,
      "name" : "node4.pancou.com:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 71,
      "optime" : {
        "ts" : Timestamp(1495195971, 1),
        "t" : NumberLong(8)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1495195971, 1),
        "t" : NumberLong(8)
      },
      "optimeDate" : ISODate("2017-05-19T12:12:51Z"),
      "optimeDurableDate" : ISODate("2017-05-19T12:12:51Z"),
      "lastHeartbeat" : ISODate("2017-05-19T12:12:56.122Z"),
      "lastHeartbeatRecv" : ISODate("2017-05-19T12:12:56.821Z"),
      "pingMs" : NumberLong(1),
      "syncingTo" : "node3.pancou.com:27017",
      "configVersion" : 4
    }
  ],
  "ok" : 1
}

Check on the new node (node4):

rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin   0.000GB
local   0.000GB
testdb  0.000GB
rs0:SECONDARY> use testdb
switched to db testdb
rs0:SECONDARY> show collections
testcoll
rs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("591dd9f965cc255a5373aefa"), "name" : "tom", "age" : 25 }
rs0:SECONDARY>

rs0:SECONDARY> db.isMaster()
{
  "hosts" : [
    "node1.pancou.com:27017",
    "node2.pancou.com:27017",
    "node3.pancou.com:27017",
    "node4.pancou.com:27017"
  ],
  "setName" : "rs0",
  "setVersion" : 4,
  "ismaster" : false,
  "secondary" : true,
  "primary" : "node3.pancou.com:27017",
  "me" : "node4.pancou.com:27017",
  "lastWrite" : {
    "opTime" : {
      "ts" : Timestamp(1495196261, 1),
      "t" : NumberLong(8)
    },
    "lastWriteDate" : ISODate("2017-05-19T12:17:41Z")
  },
  "maxBsonObjectSize" : 16777216,
  "maxMessageSizeBytes" : 48000000,
  "maxWriteBatchSize" : 1000,
  "localTime" : ISODate("2017-05-19T12:17:44.104Z"),
  "maxWireVersion" : 5,
  "minWireVersion" : 0,
  "readOnly" : false,
  "ok" : 1
}
rs0:SECONDARY>

2. Remove a node

rs0:PRIMARY> rs.remove("node4.pancou.com:27017")
{ "ok" : 1 }

rs0:PRIMARY> db.isMaster()
{
  "hosts" : [
    "node1.pancou.com:27017",
    "node2.pancou.com:27017",
    "node3.pancou.com:27017"
  ],
  "setName" : "rs0",
  "setVersion" : 5,
  "ismaster" : true,
  "secondary" : false,
  "primary" : "node3.pancou.com:27017",
  "me" : "node3.pancou.com:27017",
  "electionId" : ObjectId("7fffffff0000000000000008"),
  "lastWrite" : {
    "opTime" : {
      "ts" : Timestamp(1495196531, 1),
      "t" : NumberLong(8)
    },
    "lastWriteDate" : ISODate("2017-05-19T12:22:11Z")
  },
  "maxBsonObjectSize" : 16777216,
  "maxMessageSizeBytes" : 48000000,
  "maxWriteBatchSize" : 1000,
  "localTime" : ISODate("2017-05-19T12:22:19.874Z"),
  "maxWireVersion" : 5,
  "minWireVersion" : 0,
  "readOnly" : false,
  "ok" : 1
}
rs0:PRIMARY>
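If several membership changes are needed at once, the configuration can also be edited through rs.conf() and rs.reconfig() instead of rs.remove(); a sketch of removing node4 that way:

rs0:PRIMARY> cfg = rs.conf()
rs0:PRIMARY> cfg.members = cfg.members.filter(function(m) { return m.host != "node4.pancou.com:27017" })
rs0:PRIMARY> rs.reconfig(cfg)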
