2025-04-30 Update · From: SLTechnology News&Howtos (shulou.com) > Database
A replica set extends master-slave replication with automatic failover and automatic recovery of member nodes. Below is a walkthrough of building a MongoDB replica set (building MongoDB itself involves little that is difficult; the real work is planning for disaster).
1 Create the data storage directories for the replica set nodes:
mkdir -p /opt/mongodata/r1
mkdir -p /opt/mongodata/r2
mkdir -p /opt/mongodata/r3
2 Execute the following commands, one per window, in three windows:
./mongod --dbpath /opt/mongodata/r1 --port 27018 --rest --replSet myset
./mongod --dbpath /opt/mongodata/r2 --port 27019 --rest --replSet myset
./mongod --dbpath /opt/mongodata/r3 --port 27020 --rest --replSet myset
3 Execute the following command in a fourth window:
[mongodb@rac4 bin]$ ./mongo 127.0.0.1:27018 init.js
MongoDB shell version: 2.0.1
connecting to: 127.0.0.1:27018/test
The init.js content is as follows:
[mongodb@rac4 bin]$ cat init.js
rs.initiate({
    _id: "myset",
    members: [
        { _id: 0, host: "10.250.7.220:27018" },
        { _id: 1, host: "10.250.7.220:27019" },
        { _id: 2, host: "10.250.7.220:27020" }
    ]
})
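The same initiation document can also be generated programmatically rather than written by hand. A minimal JavaScript sketch (`buildReplSetConfig` is an illustrative helper, not a MongoDB API; the logic runs in the mongo shell or node alike):

```javascript
// Hypothetical helper: build the document that init.js passes to rs.initiate(),
// from a set name, a host address, and a list of ports.
function buildReplSetConfig(setName, host, ports) {
  return {
    _id: setName,
    members: ports.map(function (port, i) {
      return { _id: i, host: host + ":" + port };
    })
  };
}

var config = buildReplSetConfig("myset", "10.250.7.220", [27018, 27019, 27020]);
// In the mongo shell this document could be passed directly: rs.initiate(config)
```

This keeps the member `_id` values and host strings consistent when the port list changes.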
Once the three service nodes are started, the logs show the three nodes negotiating with each other: the node on port 27018 is elected Primary, and the other two automatically become Secondary nodes.
Mon Oct 31 20:27:53 [conn2] replSet info saving a newer config version to local.system.replset
Mon Oct 31 20:27:53 [conn2] replSet saveConfigLocally done
Mon Oct 31 20:27:53 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
Mon Oct 31 20:27:53 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "myset", members: [ { _id: 0.0, host: "10.250.7.220:27018" }, { _id: 1.0, host: "10.250.7.220:27019" }, { _id: 2.0, host: "10.250.7.220:27020" } ] } } ntoreturn:1 reslen:112 12095ms
Mon Oct 31 20:27:53 [conn2] end connection 127.0.0.1:15252
Mon Oct 31 20:27:53 [rsStart] replSet STARTUP2
Mon Oct 31 20:27:53 [rsHealthPoll] replSet info 10.250.7.220:27019 is down (or slow to respond): still initializing
Mon Oct 31 20:27:53 [rsHealthPoll] replSet member 10.250.7.220:27019 is now in state DOWN
Mon Oct 31 20:27:53 [rsHealthPoll] replSet info 10.250.7.220:27020 is down (or slow to respond): still initializing
Mon Oct 31 20:27:53 [rsHealthPoll] replSet member 10.250.7.220:27020 is now in state DOWN
Mon Oct 31 20:27:53 [rsSync] replSet SECONDARY
Mon Oct 31 20:27:55 [initandlisten] connection accepted from 10.250.7.220:44134 # 3
Mon Oct 31 20:27:59 [rsHealthPoll] replSet info member 10.250.7.220:27019 is up
Mon Oct 31 20:27:59 [rsHealthPoll] replSet member 10.250.7.220:27019 is now in state STARTUP2
Mon Oct 31 20:27:59 [rsMgr] not electing self, 10.250.7.220:27019 would veto
Mon Oct 31 20:28:01 [initandlisten] connection accepted from 10.250.7.220:44137 # 4
Mon Oct 31 20:28:05 [rsMgr] replSet info electSelf 0
Mon Oct 31 20:28:05 [rsMgr] replSet PRIMARY
Mon Oct 31 20:28:07 [rsHealthPoll] replSet member 10.250.7.220:27019 is now in state RECOVERING
Mon Oct 31 20:28:10 [initandlisten] connection accepted from 10.250.7.220:44141 # 5
Mon Oct 31 20:28:10 [initandlisten] connection accepted from 10.250.7.220:44142 # 6
Mon Oct 31 20:28:11 [slaveTracking] build index local.slaves { _id: 1 }
Mon Oct 31 20:28:11 [slaveTracking] build index done 0 records 0.001 secs
Mon Oct 31 20:28:13 [rsHealthPoll] replSet info member 10.250.7.220:27020 is up
Mon Oct 31 20:28:13 [rsHealthPoll] replSet member 10.250.7.220:27020 is now in state STARTUP2
Mon Oct 31 20:28:14 [conn6] end connection 10.250.7.220:44142
Mon Oct 31 20:28:14 [conn5] end connection 10.250.7.220:44141
Mon Oct 31 20:28:15 [initandlisten] connection accepted from 10.250.7.220:44144 # 7
Mon Oct 31 20:28:15 [rsHealthPoll] replSet member 10.250.7.220:27019 is now in state SECONDARY
Mon Oct 31 20:28:15 [rsHealthPoll] replSet member 10.250.7.220:27020 is now in state RECOVERING
Mon Oct 31 20:28:28 [initandlisten] connection accepted from 127.0.0.1:59232 # 8
Connect to the primary from a client:
[mongodb@rac4 bin]$ ./mongo 127.0.0.1:27018
MongoDB shell version: 2.0.1
connecting to: 127.0.0.1:27018/test
PRIMARY> rs.status()   -- view the status of the replica set
{
    "set" : "myset",
    "date" : ISODate("2011-10-31T12:29:17Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "10.250.7.220:27018",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "optime" : {
                "t" : 1320064073000,
                "i" : 1
            },
            "optimeDate" : ISODate("2011-10-31T12:27:53Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "10.250.7.220:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 78,
            "optime" : {
                "t" : 1320064073000,
                "i" : 1
            },
            "optimeDate" : ISODate("2011-10-31T12:27:53Z"),
            "lastHeartbeat" : ISODate("2011-10-31T12:29:16Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "10.250.7.220:27020",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 64,
            "optime" : {
                "t" : 1320064073000,
                "i" : 1
            },
            "optimeDate" : ISODate("2011-10-31T12:27:53Z"),
            "lastHeartbeat" : ISODate("2011-10-31T12:29:16Z"),
            "pingMs" : 1
        }
    ],
    "ok" : 1
}
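A monitoring script or client can pick the current primary out of an rs.status()-style document by looking for the member with state 1. A minimal sketch (`findPrimary` and the trimmed-down sample document are illustrative, not part of MongoDB's API):

```javascript
// Return the name of the member in state 1 (PRIMARY), or null if none.
function findPrimary(status) {
  for (var i = 0; i < status.members.length; i++) {
    if (status.members[i].state === 1) {
      return status.members[i].name;
    }
  }
  return null;
}

// Trimmed-down version of the rs.status() output shown above.
var status = {
  set: "myset",
  members: [
    { _id: 0, name: "10.250.7.220:27018", health: 1, state: 1 },
    { _id: 1, name: "10.250.7.220:27019", health: 1, state: 2 },
    { _id: 2, name: "10.250.7.220:27020", health: 1, state: 2 }
  ]
};
var primary = findPrimary(status); // "10.250.7.220:27018"
```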
Key fields in the status output:
state: 1 means the node is the primary and can serve reads and writes; 2 means it is a secondary and cannot accept writes (and, by default, cannot serve reads either).
health: 1 means the node is currently healthy; 0 means it is down or unreachable.
A read attempted on a secondary with default settings fails with: Error: { "$err" : "not master and slaveok=false", "code" : 13435 }
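The numeric state codes map directly to the state strings that appear in the startup logs. A small lookup sketch covering the states this walkthrough encounters (`describeMember` is an illustrative helper, not a MongoDB API):

```javascript
// Replica set member state codes as reported by rs.status().
var RS_STATES = {
  0: "STARTUP",
  1: "PRIMARY",    // serves reads and writes
  2: "SECONDARY",  // refuses writes; reads fail unless slaveOk is set
  3: "RECOVERING",
  5: "STARTUP2",
  8: "DOWN"
};

// Render one member entry from rs.status() as a human-readable line.
function describeMember(member) {
  var state = RS_STATES[member.state] || "UNKNOWN";
  var health = member.health === 1 ? "healthy" : "unhealthy";
  return member.name + ": " + state + " (" + health + ")";
}
```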
PRIMARY> rs.conf()   -- view the replica set configuration
{
    "_id" : "myset",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "10.250.7.220:27018"
        },
        {
            "_id" : 1,
            "host" : "10.250.7.220:27019"
        },
        {
            "_id" : 2,
            "host" : "10.250.7.220:27020"
        }
    ]
}
PRIMARY> db.isMaster()   -- check whether this node is the primary (of course, the PRIMARY prompt already shows it)
{
    "setName" : "myset",
    "ismaster" : true,    ## this node is the primary
    "secondary" : false,
    "hosts" : [
        "10.250.7.220:27018",
        "10.250.7.220:27020",
        "10.250.7.220:27019"
    ],
    "primary" : "10.250.7.220:27018",
    "me" : "10.250.7.220:27018",
    "maxBsonObjectSize" : 16777216,
    "ok" : 1
}
Log in to the other two mongod instances:
[mongodb@rac4 bin]$ ./mongo 127.0.0.1:27019
MongoDB shell version: 2.0.1
connecting to: 127.0.0.1:27019/test
SECONDARY>
SECONDARY> db.isMaster()
{
    "setName" : "myset",
    "ismaster" : false,
    "secondary" : true,
    "hosts" : [
        "10.250.7.220:27019",
        "10.250.7.220:27020",
        "10.250.7.220:27018"
    ],
    "primary" : "10.250.7.220:27018",
    "me" : "10.250.7.220:27019",
    "maxBsonObjectSize" : 16777216,
    "ok" : 1
}
SECONDARY>
[mongodb@rac4 bin]$ ./mongo 127.0.0.1:27020
MongoDB shell version: 2.0.1
connecting to: 127.0.0.1:27020/test
SECONDARY> db.isMaster()
{
    "setName" : "myset",
    "ismaster" : false,
    "secondary" : true,
    "hosts" : [
        "10.250.7.220:27020",
        "10.250.7.220:27019",
        "10.250.7.220:27018"
    ],
    "primary" : "10.250.7.220:27018",
    "me" : "10.250.7.220:27020",
    "maxBsonObjectSize" : 16777216,
    "ok" : 1
}
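A driver routes operations using exactly these isMaster fields: writes go only to the node reporting ismaster: true, and by default reads do too. A sketch of that routing decision (`canAccept` is an illustrative helper, not actual driver code):

```javascript
// Decide whether a node, described by its isMaster document, can accept
// an operation. Reads reach secondaries only when slaveOk is set.
function canAccept(isMasterDoc, operation, slaveOk) {
  if (operation === "write") {
    return isMasterDoc.ismaster === true;
  }
  // read
  return isMasterDoc.ismaster === true ||
         (isMasterDoc.secondary === true && slaveOk === true);
}
```

Applied to the documents above: the 27018 node accepts both operations, while 27019 and 27020 accept reads only if the client opts in.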
At this point the replica set is successfully built.
Write to the primary and read from a secondary (note that by this point a failover has evidently occurred: the prompt and the r3 data files below show the node on port 27020 is now the primary):
[mongodb@rac4 bin]$ ./mongo 127.0.0.1:27020
MongoDB shell version: 2.0.1
connecting to: 127.0.0.1:27020/test
PRIMARY> use test
switched to db test
PRIMARY>
PRIMARY> db.yql.insert({ val: "this is a message on 27020 primary!" })
PRIMARY>
The primary's log:
Mon Oct 31 21:03:46 [FileAllocator] allocating new datafile / opt/mongodata/r3/test.ns, filling with zeroes...
Mon Oct 31 21:03:46 [FileAllocator] done allocating datafile / opt/mongodata/r3/test.ns, size: 16MB, took 0.256 secs
Mon Oct 31 21:03:46 [FileAllocator] allocating new datafile / opt/mongodata/r3/test.0, filling with zeroes...
Mon Oct 31 21:03:48 [clientcursormon] mem (MB) res:35 virt:2726 mapped:1248
Mon Oct 31 21:03:50 [FileAllocator] done allocating datafile / opt/mongodata/r3/test.0, size: 64MB, took 4.488 secs
Mon Oct 31 21:03:50 [conn6] build index test.yql { _id: 1 }
Mon Oct 31 21:03:50 [conn6] build index done 0 records 0 secs
Mon Oct 31 21:03:50 [conn6] insert test.yql 4759ms
Mon Oct 31 21:03:50 [FileAllocator] allocating new datafile / opt/mongodata/r3/test.1, filling with zeroes...
Mon Oct 31 21:03:51 [conn8] getmore local.oplog.rs query: {ts: {$gte: new Date (5669632022159556609)}} cursorid:6257712144272734285 nreturned:1 reslen:146 5031ms
Mon Oct 31 21:03:51 [conn5] getmore local.oplog.rs query: {ts: {$gte: new Date (5669632022159556609)}} cursorid:423878080662643430 nreturned:1 reslen:146 5631ms
Mon Oct 31 21:03:54 [FileAllocator] done allocating datafile / opt/mongodata/r3/test.1, size: 128MB, took 3.818 secs
In the secondary's log we can see it applying the oplog pulled from the primary and allocating its data files:
Mon Oct 31 20:49:27 [clientcursormon] mem (MB) res:19 virt:2693 mapped:1232
Mon Oct 31 20:54:27 [clientcursormon] mem (MB) res:19 virt:2693 mapped:1232
Mon Oct 31 20:59:27 [clientcursormon] mem (MB) res:19 virt:2693 mapped:1232
Mon Oct 31 21:03:51 [FileAllocator] allocating new datafile / opt/mongodata/r2/test.ns, filling with zeroes...
Mon Oct 31 21:03:54 [FileAllocator] done allocating datafile / opt/mongodata/r2/test.ns, size: 16MB, took 3.396 secs
Mon Oct 31 21:03:54 [FileAllocator] allocating new datafile / opt/mongodata/r2/test.0, filling with zeroes...
Mon Oct 31 21:04:00 [FileAllocator] done allocating datafile / opt/mongodata/r2/test.0, size: 64MB, took 5.79 secs
Mon Oct 31 21:04:00 [rsSync] build index test.yql { _id: 1 }
Mon Oct 31 21:04:00 [rsSync] build index done 0 records 0 secs
Mon Oct 31 21:04:00 [FileAllocator] allocating new datafile / opt/mongodata/r2/test.1, filling with zeroes...
Mon Oct 31 21:04:03 [FileAllocator] done allocating datafile / opt/mongodata/r2/test.1, size: 128MB, took 2.965 secs
Mon Oct 31 21:04:37 [clientcursormon] mem (MB) res:17 virt:2853 mapped:1312
Mon Oct 31 21:04:41 [conn6] end connection 127.0.0.1:44672
As noted in the earlier discussion of rs.status(), state: 1 means the node can serve reads and writes, while 2 means it cannot (writes are refused, and reads fail by default):
{
    "_id" : 1,
    "name" : "10.250.7.220:27019",
    "health" : 1,
    "state" : 2,    -- the secondary's state is 2: not readable or writable at this point
    "stateStr" : "SECONDARY",
    "uptime" : 78,
    "optime" : {
        "t" : 1320064073000,
        "i" : 1
    }
}
Reading from the secondary raises an error:
[mongodb@rac4 bin]$ ./mongo 127.0.0.1:27019
MongoDB shell version: 2.0.1
connecting to: 127.0.0.1:27019/test
SECONDARY> use test
switched to db test
SECONDARY> db.yql.find()
error: { "$err" : "not master and slaveok=false", "code" : 13435 }
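This is expected: error code 13435 means the node is a secondary and the connection has not opted in to secondary reads (in the mongo shell of this era, running rs.slaveOk() on the secondary before the query allows the read). A client can also recognize the error and react; an illustrative sketch (`isNotMasterRead` is a hypothetical helper):

```javascript
// Recognize the "not master and slaveok=false" rejection (code 13435)
// so the caller can either enable slaveOk or retry the read on the primary.
function isNotMasterRead(err) {
  return !!err && err.code === 13435;
}

var err = { "$err": "not master and slaveok=false", "code": 13435 };
// isNotMasterRead(err) is true for the error shown above
```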