
MongoDB replication set deployment


Deploy replication set

A three-node replication set provides enough redundancy to survive a network failure or other system failure, and enough capacity for distributed read operations. A replication set should keep an odd number of nodes so that elections can proceed normally.

Deploy a three-node replication set using three existing mongod instances:

192.168.1.3 hadoop1.abc.com hadoop1

192.168.1.4 hadoop2.abc.com hadoop2

192.168.1.5 hadoop3.abc.com hadoop3

Architecture considerations for deploying replication sets

In a production environment, we should deploy each node on a separate machine and use the standard MongoDB port 27017. Use the bind_ip parameter to restrict the addresses from which applications can access MongoDB.

If you are deploying a geographically distributed replication set, make sure that the majority of the mongod instances are located in the primary data center.

Connectivity

Ensure that each node can communicate normally and that each client is in a secure and trusted network environment. You can consider the following:

Establish a virtual private network, and ensure that your network topology routes all traffic between members within a single site over the local area network.

Configure connection restrictions to prevent unknown clients from connecting to the replication set.

Configure network settings and firewall rules so that only the MongoDB port is open to the application, and traffic between the application and MongoDB can pass in both directions.

Finally, make sure that the nodes of the replication set can resolve each other by DNS or hostname; configure DNS or the /etc/hosts file on each node accordingly.
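
For example, the three hosts listed above could be recorded in /etc/hosts on every node (a sketch based on the addresses given earlier):

192.168.1.3 hadoop1.abc.com hadoop1
192.168.1.4 hadoop2.abc.com hadoop2
192.168.1.5 hadoop3.abc.com hadoop3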

In this experiment the firewall is turned off and SELinux is put into permissive mode with setenforce 0.
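
On CentOS 6 this is typically done as root with the following commands (a sketch for a lab environment; adjust to your own security policy before doing this in production):

service iptables stop
chkconfig iptables off
setenforce 0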

The system environment is as follows:

[root@hadoop2 data]# cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m

[root@hadoop2 data]# uname -r
2.6.32-431.el6.x86_64

Configuration file options:

port = 27017
bind_ip =
dbpath =
fork = true
replSet = testrs0
rest = true
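
A filled-in version of this file for one node might look like the following sketch. The dbpath matches the data directory created in step 1 below; the logpath, logappend and commented-out bind_ip lines are assumptions added for completeness, not taken from the original walkthrough:

port = 27017
# bind_ip is left commented out here, as the walkthrough below also ends up doing
#bind_ip = 127.0.0.1
# data directory created in step 1 below
dbpath = /mongodb/data
# assumed but typical log location
logpath = /var/log/mongodb/mongod.log
logappend = true
fork = true
replSet = testrs0
rest = true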

Detailed steps

1. Create a data directory on each node

[root@hadoop1 ~]# mkdir -pv /mongodb/data/
[root@hadoop1 ~]# chown mongod.mongod /mongodb/data/

2. Start each node in the replication set with the appropriate configuration parameters.

Start mongod on each node, specifying the replication set name with the replSet parameter; any other required parameters can be specified as well.

[root@hadoop1 ~]# vim /etc/mongod.conf
# add the following:
# Replica Set
replSet = testrs0

or

[root@hadoop1 ~]# mongod --replSet "testrs0"

Ensure that each node has the same replication set name

[root@hadoop1 ~]# scp /etc/mongod.conf root@hadoop2:/etc/; scp /etc/mongod.conf root@hadoop3:/etc/

Note: if an "addr already in use" error occurs when starting mongod, the port is already occupied.

[root@hadoop1 data]# mongod
2015-07-29T19:15:51.728+0800 E NETWORK  [initandlisten] listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:27017
2015-07-29T19:15:51.728+0800 E NETWORK  [initandlisten]   addr already in use
2015-07-29T19:15:51.729+0800 I STORAGE  [initandlisten] exception in initAndListen: 29 Data directory /data/db not found., terminating
2015-07-29T19:15:51.729+0800 I CONTROL  [initandlisten] dbexit:  rc: 100

Find the process listening on the port and kill it.

[root@hadoop1 ~]# netstat -anp | more
unix  2      [ ACC ]     STREAM     LISTENING     15588  2174/mongod    /tmp/mongodb-27017.sock
[root@hadoop1 ~]# kill 2174
[root@hadoop1 ~]# /etc/init.d/mongod start
Starting mongod:                                           [  OK  ]

3. Connect to one of the nodes with the mongo shell.

[root@hadoop1 ~]# mongo

4. Initialize the replication set.

// Using the rs.initiate() command, MongoDB initializes a replication set that consists of the current node, with a default configuration.

> rs.initiate()
{
    "info2" : "no configuration explicitly specified -- making one",
    "me" : "hadoop1.abc.com:27017",
    "info" : "try querying local.system.replset to see current configuration",
    "ok" : 0,
    "errmsg" : "already initialized",
    "code" : 23
}
> rs.status()
{
    "state" : 10,
    "stateStr" : "REMOVED",
    "uptime" : 38,
    "optime" : Timestamp(1438168698, 1),
    "optimeDate" : ISODate("2015-07-29T11:18:18Z"),
    "ok" : 0,
    "errmsg" : "Our replica set config is invalid or we are not a member of it",
    "code" : 93
}

View log files

2015-07-29T20:00:45.433+0800 W NETWORK  [ReplicationExecutor] Failed to connect to 192.168.1.3:27017, reason: errno:111 Connection refused

2015-07-29T20:00:45.433+0800 W REPL     [ReplicationExecutor] Locally stored replica set configuration does not have a valid entry for the current node; waiting for reconfig or remote heartbeat; Got "NodeNotFound No host described in new configuration 1 for replica set testrs0 maps to this node" while validating { _id: "testrs0", version: 1, members: [ { _id: 0, host: "hadoop1.abc.com:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }

2015-07-29T20:00:45.433+0800 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "testrs0", version: 1, members: [ { _id: 0, host: "hadoop1.abc.com:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }

2015-07-29T20:00:45.433+0800 I REPL [ReplicationExecutor] This node is not a member of the config

2015-07-29T20:00:45.433+0800 I REPL [ReplicationExecutor] transition to REMOVED

2015-07-29T20:00:45.433+0800 I REPL [ReplicationExecutor] Starting replication applier threads

2015-07-29T20:00:49.067+0800 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:58852 #1 (1 connection now open)

2015-07-29T20:01:17.436+0800 I COMMAND [conn1] replSet info initiate: no configuration specified. Using a default configuration for the set

2015-07-29T20:01:17.436+0800 I COMMAND  [conn1] replSet created this configuration for initiation : { _id: "testrs0", version: 1, members: [ { _id: 0, host: "hadoop1.abc.com:27017" } ] }

2015-07-29T20:01:17.436+0800 I REPL [conn1] replSetInitiate admin command received from client

[root@hadoop1 ~]# service mongod stop
Stopping mongod:                                           [  OK  ]
You have new mail in /var/spool/mail/root
[root@hadoop1 ~]# vim /etc/mongod.conf
# enable bind 127.0.0.1 and restrict access to the local host
# bind 127.0.0.1
[root@hadoop1 data]# service mongod start
Starting mongod:                                           [  OK  ]

> rs.initiate()
{
    "info2" : "no configuration explicitly specified -- making one",
    "me" : "hadoop1.abc.com:27017",
    "info" : "try querying local.system.replset to see current configuration",
    "ok" : 0,
    "errmsg" : "already initialized",
    "code" : 23
}
testrs0:PRIMARY> rs.status()
{
    "set" : "testrs0",
    "date" : ISODate("2015-07-29T12:13:27.839Z"),
    "myState" : 1,
    "members" : [
        { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 232, "optime" : Timestamp(1438168698, 1), "optimeDate" : ISODate("2015-07-29T11:18:18Z"), "electionTime" : Timestamp(1438171776, 1), "electionDate" : ISODate("2015-07-29T12:09:36Z"), "configVersion" : 1, "self" : true }
    ],
    "ok" : 1
}
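
Instead of relying on the default single-member configuration, rs.initiate() can also be given an explicit configuration document that lists all three members up front. A minimal sketch using the hostnames from the host list above:

rs.initiate({
    _id: "testrs0",
    members: [
        { _id: 0, host: "hadoop1.abc.com:27017" },
        { _id: 1, host: "hadoop2.abc.com:27017" },
        { _id: 2, host: "hadoop3.abc.com:27017" }
    ]
})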

5. Add other nodes to the replication set.

Add the remaining nodes to the replication set with rs.add().

testrs0:PRIMARY> rs.add("192.168.1.4:27017")
{ "ok" : 1 }
testrs0:PRIMARY> rs.status()
{
    "set" : "testrs0",
    "date" : ISODate("2015-07-30T02:09:45.871Z"),
    "myState" : 1,
    "members" : [
        { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 50410, "optime" : Timestamp(1438222179, 1), "optimeDate" : ISODate("2015-07-30T02:09:39Z"), "electionTime" : Timestamp(1438171776, 1), "electionDate" : ISODate("2015-07-29T12:09:36Z"), "configVersion" : 2, "self" : true },
        { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 6, "optime" : Timestamp(1438222179, 1), "optimeDate" : ISODate("2015-07-30T02:09:39Z"), "lastHeartbeat" : ISODate("2015-07-30T02:09:45.081Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T02:09:45.183Z"), "pingMs" : 1, "configVersion" : 2 }
    ],
    "ok" : 1
}
testrs0:PRIMARY> rs.add("192.168.1.5:27017")
{ "ok" : 1 }
testrs0:PRIMARY> rs.status()
{
    "set" : "testrs0",
    "date" : ISODate("2015-07-30T02:28:52.382Z"),
    "myState" : 1,
    "members" : [
        { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 51557, "optime" : Timestamp(1438223187, 1), "optimeDate" : ISODate("2015-07-30T02:26:27Z"), "electionTime" : Timestamp(1438171776, 1), "electionDate" : ISODate("2015-07-29T12:09:36Z"), "configVersion" : 3, "self" : true },
        { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 1153, "optime" : Timestamp(1438223187, 1), "optimeDate" : ISODate("2015-07-30T02:26:27Z"), "lastHeartbeat" : ISODate("2015-07-30T02:28:52.337Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T02:28:50.438Z"), "pingMs" : 0, "syncingTo" : "hadoop1.abc.com:27017", "configVersion" : 3 },
        { "_id" : 2, "name" : "192.168.1.5:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 13, "optime" : Timestamp(1438223187, 1), "optimeDate" : ISODate("2015-07-30T02:26:27Z"), "lastHeartbeat" : ISODate("2015-07-30T02:28:50.437Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T02:28:50.478Z"), "pingMs" : 1, "configVersion" : 3 }
    ],
    "ok" : 1
}
testrs0:PRIMARY> rs.isMaster()
{
    "setName" : "testrs0",
    "setVersion" : 3,
    "ismaster" : true,
    "secondary" : false,
    "hosts" : [ "hadoop1.abc.com:27017", "192.168.1.4:27017", "192.168.1.5:27017" ],
    "primary" : "hadoop1.abc.com:27017",
    "me" : "hadoop1.abc.com:27017",
    "electionId" : ObjectId("55b8c280790a6c1f967f6147"),
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 1000,
    "localTime" : ISODate("2015-07-30T02:30:18Z"),
    "maxWireVersion" : 3,
    "minWireVersion" : 0,
    "ok" : 1
}
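
The members were added by IP address above; for consistency with the DNS / hosts-file resolution discussed earlier, they could equally be added by hostname (a sketch, assuming the names resolve identically on every node):

rs.add("hadoop2.abc.com:27017")
rs.add("hadoop3.abc.com:27017")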

Verify on another node, hadoop2

testrs0:SECONDARY> rs.isMaster()
{
    "setName" : "testrs0",
    "setVersion" : 3,
    "ismaster" : false,
    "secondary" : true,
    "hosts" : [ "hadoop1.abc.com:27017", "192.168.1.4:27017", "192.168.1.5:27017" ],
    "primary" : "hadoop1.abc.com:27017",
    "me" : "192.168.1.4:27017",
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 1000,
    "localTime" : ISODate("2015-07-30T02:32:43.546Z"),
    "maxWireVersion" : 3,
    "minWireVersion" : 0,
    "ok" : 1
}

6. Create data on the primary node and read it from a secondary node

testrs0:PRIMARY> use testdb
switched to db testdb
testrs0:PRIMARY> db.testcoll.insert({Name: "test", Age: 50, Gender: "F"})
WriteResult({ "nInserted" : 1 })
testrs0:PRIMARY> db.testcoll.find()
{ "_id" : ObjectId("55b9945b92ad0ab98483695e"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b994ce92ad0ab98483695f"), "Name" : "test", "Age" : 50, "Gender" : "F" }

You cannot query a secondary node directly; first run rs.slaveOk() on that node to allow read operations on the secondary.

testrs0:SECONDARY> rs.slaveOk()
testrs0:SECONDARY> use testdb
switched to db testdb
testrs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("55b9945b92ad0ab98483695e"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b994ce92ad0ab98483695f"), "Name" : "test", "Age" : 50, "Gender" : "F" }
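
rs.slaveOk() only affects the current shell session. As the reference section at the end of this article notes, a read preference can be used instead; a sketch of the equivalent in the mongo shell:

// allow secondary reads for this connection
db.getMongo().setReadPref("secondaryPreferred")
// or per query
db.testcoll.find().readPref("secondary")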

7. Take the primary node hadoop1 down

[root@hadoop1 data]# service mongod stop
Stopping mongod:

Verify it on hadoop3.

testrs0:PRIMARY> rs.status()
{
    "set" : "testrs0",
    "date" : ISODate("2015-07-30T04:36:19.677Z"),
    "myState" : 1,
    "members" : [
        { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : Timestamp(0, 0), "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2015-07-30T04:36:19.503Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T04:33:18.147Z"), "pingMs" : 0, "lastHeartbeatMessage" : "Failed attempt to connect to hadoop1.abc.com:27017; couldn't connect to server hadoop1.abc.com:27017 (192.168.1.3), connection attempt failed", "configVersion" : -1 },
        { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 7661, "optime" : Timestamp(1438225614, 1), "optimeDate" : ISODate("2015-07-30T03:06:54Z"), "lastHeartbeat" : ISODate("2015-07-30T04:36:19.335Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T04:36:19.348Z"), "pingMs" : 0, "configVersion" : 3 },
        { "_id" : 2, "name" : "192.168.1.5:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 7664, "optime" : Timestamp(1438225614, 1), "optimeDate" : ISODate("2015-07-30T03:06:54Z"), "electionTime" : Timestamp(1438230801, 1), "electionDate" : ISODate("2015-07-30T04:33:21Z"), "configVersion" : 3, "self" : true }
    ],
    "ok" : 1
}
testrs0:PRIMARY> db.isMaster()
{
    "setName" : "testrs0",
    "setVersion" : 3,
    "ismaster" : true,
    "secondary" : false,
    "hosts" : [ "hadoop1.abc.com:27017", "192.168.1.4:27017", "192.168.1.5:27017" ],
    "primary" : "192.168.1.5:27017",
    "me" : "192.168.1.5:27017",
    "electionId" : ObjectId("55b9a91100e446910c89a0a3"),
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 1000,
    "localTime" : ISODate("2015-07-30T04:37:33.090Z"),
    "maxWireVersion" : 3,
    "minWireVersion" : 0,
    "ok" : 1
}
testrs0:PRIMARY> db.testcoll.insert({Name: "tom", Age: 45, Gender: "G"})
WriteResult({ "nInserted" : 1 })

Go back to hadoop2 and check the data.

testrs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("55b9942d92ad0ab98483695c"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b9944892ad0ab98483695d"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b9945b92ad0ab98483695e"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b994ce92ad0ab98483695f"), "Name" : "test", "Age" : 50, "Gender" : "F" }
{ "_id" : ObjectId("55b9aa714b575261aff42f25"), "Name" : "tom", "Age" : 45, "Gender" : "G" }

Bring hadoop1 back online

[root@hadoop1 data]# service mongod start
Starting mongod:

Verify the status again. hadoop1 does not become the primary again unless it is given a higher priority; in this scenario it automatically rejoins as a secondary.

testrs0:SECONDARY> rs.status()
{
    "set" : "testrs0",
    "date" : ISODate("2015-07-30T04:44:16.534Z"),
    "myState" : 2,
    "syncingTo" : "192.168.1.4:27017",
    "members" : [
        { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 165, "optime" : Timestamp(1438231153, 1), "optimeDate" : ISODate("2015-07-30T04:39:13Z"), "syncingTo" : "192.168.1.4:27017", "configVersion" : 3, "self" : true },
        { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 164, "optime" : Timestamp(1438231153, 1), "optimeDate" : ISODate("2015-07-30T04:39:13Z"), "lastHeartbeat" : ISODate("2015-07-30T04:44:16.199Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T04:44:15.824Z"), "pingMs" : 0, "syncingTo" : "192.168.1.5:27017", "configVersion" : 3 },
        { "_id" : 2, "name" : "192.168.1.5:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 164, "optime" : Timestamp(1438231153, 1), "optimeDate" : ISODate("2015-07-30T04:39:13Z"), "lastHeartbeat" : ISODate("2015-07-30T04:44:16.185Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T04:44:14.902Z"), "pingMs" : 0, "electionTime" : Timestamp(1438230801, 1), "electionDate" : ISODate("2015-07-30T04:33:21Z"), "configVersion" : 3 }
    ],
    "ok" : 1
}
testrs0:SECONDARY> rs.slaveOk()
testrs0:SECONDARY> db.testcoll.find()
testrs0:SECONDARY> use testdb
switched to db testdb
testrs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("55b9942d92ad0ab98483695c"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b9944892ad0ab98483695d"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b9945b92ad0ab98483695e"), "Name" : "test", "Age" : 60, "Gender" : "F" }
{ "_id" : ObjectId("55b994ce92ad0ab98483695f"), "Name" : "test", "Age" : 50, "Gender" : "F" }
{ "_id" : ObjectId("55b9aa714b575261aff42f25"), "Name" : "tom", "Age" : 45, "Gender" : "G" }

8. Define priority

Use rs.conf() to view the replication set configuration object:

testrs0:SECONDARY> rs.conf()
{
    "_id" : "testrs0",
    "version" : 3,
    "members" : [
        { "_id" : 0, "host" : "hadoop1.abc.com:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 },
        { "_id" : 1, "host" : "192.168.1.4:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 },
        { "_id" : 2, "host" : "192.168.1.5:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : {},
        "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 }
    }
}

On the current primary (hadoop3), set hadoop1's priority to 2 so that hadoop1 becomes the primary node again.

Copy the replication set configuration object into a variable such as mycfg, set the node's priority through this variable, and then apply the updated configuration with rs.reconfig().

Note that the priority is set with mycfg.members[<index of the node in the members array>].priority = 2.
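
Putting these steps together on the current primary (members[0] is hadoop1 in the configuration shown above):

mycfg = rs.conf()                 // copy the configuration object into a variable
mycfg.members[0].priority = 2     // raise hadoop1's priority
rs.reconfig(mycfg)                // apply the updated configuration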

testrs0:PRIMARY> rs.conf()
{
    "_id" : "testrs0",
    "version" : 3,
    "members" : [
        { "_id" : 0, "host" : "hadoop1.abc.com:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 },
        { "_id" : 1, "host" : "192.168.1.4:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 },
        { "_id" : 2, "host" : "192.168.1.5:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : {},
        "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 }
    }
}
testrs0:PRIMARY> mycfg = rs.conf()
{
    "_id" : "testrs0",
    "version" : 3,
    "members" : [
        { "_id" : 0, "host" : "hadoop1.abc.com:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 },
        { "_id" : 1, "host" : "192.168.1.4:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 },
        { "_id" : 2, "host" : "192.168.1.5:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {}, "slaveDelay" : 0, "votes" : 1 }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : {},
        "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 }
    }
}
testrs0:PRIMARY> mycfg.members[0].priority = 2
2
testrs0:PRIMARY> rs.reconfig(mycfg)
{ "ok" : 1 }
testrs0:PRIMARY>
2015-07-30T14:34:44.437+0800 I NETWORK  DBClientCursor::init call() failed
2015-07-30T14:34:44.439+0800 I NETWORK  trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2015-07-30T14:34:44.452+0800 I NETWORK  reconnect 127.0.0.1:27017 (127.0.0.1) ok
testrs0:SECONDARY>

Check the status on hadoop1, then stop the mongod service on hadoop1.

testrs0:PRIMARY> rs.status()
{
    "set" : "testrs0",
    "date" : ISODate("2015-07-30T06:51:11.952Z"),
    "myState" : 1,
    "members" : [
        { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 7780, "optime" : Timestamp(1438238074, 1), "optimeDate" : ISODate("2015-07-30T06:34:34Z"), "electionTime" : Timestamp(1438238079, 1), "electionDate" : ISODate("2015-07-30T06:34:39Z"), "configVersion" : 4, "self" : true },
        { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 7780, "optime" : Timestamp(1438238074, 1), "optimeDate" : ISODate("2015-07-30T06:34:34Z"), "lastHeartbeat" : ISODate("2015-07-30T06:51:11.072Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T06:51:11.375Z"), "pingMs" : 0, "configVersion" : 4 },
        { "_id" : 2, "name" : "192.168.1.5:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 7780, "optime" : Timestamp(1438238074, 1), "optimeDate" : ISODate("2015-07-30T06:34:34Z"), "lastHeartbeat" : ISODate("2015-07-30T06:51:11.779Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T06:51:10.299Z"), "pingMs" : 0, "configVersion" : 4 }
    ],
    "ok" : 1
}
testrs0:PRIMARY> quit()
You have new mail in /var/spool/mail/root
[root@hadoop1 ~]# service mongo stop
mongo: unrecognized service
[root@hadoop1 ~]# service mongod stop
Stopping mongod:

On hadoop3, the remaining two nodes automatically elect a new primary.

testrs0:PRIMARY> rs.status()
{
    "set" : "testrs0",
    "date" : ISODate("2015-07-30T06:55:18.238Z"),
    "myState" : 1,
    "members" : [
        { "_id" : 0, "name" : "hadoop1.abc.com:27017", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : Timestamp(0, 0), "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2015-07-30T06:55:16.275Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T06:51:44.879Z"), "pingMs" : 5, "lastHeartbeatMessage" : "Failed attempt to connect to hadoop1.abc.com:27017; couldn't connect to server hadoop1.abc.com:27017 (192.168.1.3), connection attempt failed", "configVersion" : -1 },
        { "_id" : 1, "name" : "192.168.1.4:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 16000, "optime" : Timestamp(1438238074, 1), "optimeDate" : ISODate("2015-07-30T06:34:34Z"), "lastHeartbeat" : ISODate("2015-07-30T06:55:17.988Z"), "lastHeartbeatRecv" : ISODate("2015-07-30T06:55:17.988Z"), "pingMs" : 1, "lastHeartbeatMessage" : "could not find member to sync from", "configVersion" : 4 },
        { "_id" : 2, "name" : "192.168.1.5:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 16003, "optime" : Timestamp(1438238074, 1), "optimeDate" : ISODate("2015-07-30T06:34:34Z"), "electionTime" : Timestamp(1438239108, 1), "electionDate" : ISODate("2015-07-30T06:51:48Z"), "configVersion" : 4, "self" : true }
    ],
    "ok" : 1
}

Start the hadoop1 service again; because its priority is set to 2, it is re-elected as the primary node.

testrs0:PRIMARY> rs.isMaster()
{
    "setName" : "testrs0",
    "setVersion" : 4,
    "ismaster" : true,
    "secondary" : false,
    "hosts" : [ "hadoop1.abc.com:27017", "192.168.1.4:27017", "192.168.1.5:27017" ],
    "primary" : "hadoop1.abc.com:27017",
    "me" : "hadoop1.abc.com:27017",
    "electionId" : ObjectId("55b9ca84ddeeac6a93355c18"),
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 1000,
    "localTime" : ISODate("2015-07-30T06:56:28.472Z"),
    "maxWireVersion" : 3,
    "minWireVersion" : 0,
    "ok" : 1
}

9. Trigger a re-election

A node with a priority of 0 cannot trigger an election or be elected primary; it can only vote in elections.

The command used to add an arbiter is rs.addArb().
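
For example, an arbiter can be added from the primary, and a re-election can be forced with rs.stepDown(); a sketch, reusing the arbiter host hadoop4.abc.net mentioned in the notes further below:

rs.addArb("hadoop4.abc.net:27017")   // add an arbiter: it votes but holds no data
rs.stepDown()                        // ask the current primary to step down, triggering an election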

10. Multi-port replication set

If an application needs to connect to multiple replication sets, each replication set must have a different name.

1) Create the necessary data directories for each instance:

[root@hadoop1 ~]# mkdir -pv /srv/mongodb/rs0-0 /srv/mongodb/rs0-1 /srv/mongodb/rs0-2
mkdir: created directory "/srv/mongodb"
mkdir: created directory "/srv/mongodb/rs0-0"
mkdir: created directory "/srv/mongodb/rs0-1"
mkdir: created directory "/srv/mongodb/rs0-2"
You have new mail in /var/spool/mail/root

2) Start the mongod instances

First node

[root@hadoop1 rs0-0]# mongod --port 27018 --dbpath /srv/mongodb/rs0-0 --replSet rs0 --smallfiles --oplogSize

Second node

[root@hadoop1 ~]# mongod --port 27019 --dbpath /srv/mongodb/rs0-1 --replSet rs0 --smallfiles --oplogSize

The third node

[root@hadoop1 ~]# mongod --port 27020 --dbpath /srv/mongodb/rs0-2 --replSet rs0 --smallfiles --oplogSize 27020 &

Verification

[root@hadoop1 ~]# netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:27019      0.0.0.0:*          LISTEN   15718/mongod
tcp        0      0 0.0.0.0:27020      0.0.0.0:*          LISTEN   15785/mongod
tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN   1081/rpcbind
tcp        0      0 0.0.0.0:28017      0.0.0.0:*          LISTEN   14221/mongod
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   1285/sshd
tcp        0      0 127.0.0.1:631      0.0.0.0:*          LISTEN   1157/cupsd
tcp        0      0 127.0.0.1:25       0.0.0.0:*          LISTEN   1361/master
tcp        0      0 0.0.0.0:27017      0.0.0.0:*          LISTEN   14221/mongod
tcp        0      0 0.0.0.0:27018      0.0.0.0:*          LISTEN   15640/mongod
tcp        0      0 :::111             :::*               LISTEN   1081/rpcbind
tcp        0      0 :::22              :::*               LISTEN   1285/sshd
tcp        0      0 ::1:631            :::*               LISTEN   1157/cupsd
tcp        0      0 ::1:25             :::*               LISTEN   1361/master
tcp        0      0 :::48510           :::*               LISTEN   1099/rpc.statd

3) Connect to a specific mongod instance by passing its port to the mongo command:

[root@hadoop1 mongodb]# mongo --port 27018
MongoDB shell version: 3.0.5
connecting to: 127.0.0.1:27018/test
2015-07-30T16:26:01.442+0800 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:54185 #1 (1 connection now open)
Server has startup warnings:
2015-07-30T16:19:14.667+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-07-30T16:19:14.667+0800 I CONTROL  [initandlisten]
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten]
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten]
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-07-30T16:19:14.668+0800 I CONTROL  [initandlisten]
>
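
From this shell the three local instances can then be initiated as the set rs0 (a sketch; the ports come from the commands above and localhost is assumed):

rs.initiate({
    _id: "rs0",
    members: [
        { _id: 0, host: "localhost:27018" },
        { _id: 1, host: "localhost:27019" },
        { _id: 2, host: "localhost:27020" }
    ]
})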

Finally, a few notes on deploying a geographically distributed replication set of four nodes.

A four-node geographically distributed replication set has the following two points to note:

One node (for example hadoop4.abc.net) must be an arbiter. The arbiter can run on an application server or on another machine that already runs MongoDB.

We need to decide how to allocate nodes. There are three types of architectures:


Three nodes are in Site A, one node with priority 0 is in Site B, and at the same time there is a voting node in Site A.

Two nodes are in Site A, two nodes with priority 0 are in Site B, and one voting node is in Site A.

Two nodes are in Site A, one priority 0 node is in Site B, one priority 0 node is in Site C, and at the same time there is a voting node in Site A.

In most cases, the first architecture is recommended because of its ease of use.

Replication set reference

Replication methods in the mongo shell:

rs.add(): adds a node to the replication set.
rs.addArb(): adds a new arbiter to the replication set.
rs.conf(): returns the replication set configuration.
rs.freeze(): prevents the current node from being elected primary for a period of time.
rs.help(): returns help for the replica set methods.
rs.initiate(): initializes a new replication set.
rs.printReplicationInfo(): returns a replication status report from the primary's point of view.
rs.printSlaveReplicationInfo(): returns a replication status report from the secondaries' point of view.
rs.reconfig(): updates the replication set configuration by reapplying a configuration document.
rs.remove(): removes a node from the replication set.
rs.slaveOk(): sets slaveOk for the current connection; not recommended, use readPref() and Mongo.setReadPref() to set a read preference instead.
rs.status(): returns replication set status information.
rs.stepDown(): makes the current primary step down to secondary and triggers an election.
rs.syncFrom(): sets which member the node synchronizes from, overriding the default selection logic.

Replication set database commands

replSetFreeze: prevents the current node from being elected primary for a period of time.
replSetGetStatus: returns a status report of the replication set.
replSetInitiate: initializes a new replication set.
replSetMaintenance: turns maintenance mode on or off; maintenance mode puts a secondary into the RECOVERING state.
replSetReconfig: applies a new configuration to an existing replication set.
replSetStepDown: forces the current primary to step down to secondary and triggers an election.
replSetSyncFrom: overrides the default replication source selection logic.
resync: forces mongod to re-initialize replication from the master; master-slave mode only.
applyOps: internal command that applies oplog entries to the current dataset.
isMaster: returns the node's role information, including whether it is the primary.
getoptime: internal command that returns the optime.
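
These database commands are issued with db.adminCommand() (or db.runCommand() against the admin database), for example:

db.adminCommand({ replSetGetStatus: 1 })
db.adminCommand({ isMaster: 1 })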
