Implementing a MongoDB replica set (replSet)
Prepare three nodes:
RS1:192.168.1.155:27017
RS2:192.168.1.11:27017
Node1:192.168.1.112:27017
1. Time synchronization
ntpdate 192.168.1.11
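A one-off ntpdate will drift again over time, so a cron entry can repeat the sync. A minimal sketch, assuming 192.168.1.11 remains the time source and ntpdate lives at /usr/sbin/ntpdate:
# root's crontab on every node: re-sync against 192.168.1.11 every 10 minutes
*/10 * * * * /usr/sbin/ntpdate 192.168.1.11 > /dev/null 2>&1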
2. Install the mongodb service
RPM package address: https://repo.mongodb.org/yum/redhat
yum -y localinstall *.rpm
mkdir -p /mongodb/data
chown -R mongod:mongod /mongodb/data
usermod -d /mongodb/data mongod
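Alternatively, a yum repo file pointing at the same address lets yum pull the packages directly instead of local RPMs; a sketch assuming the 3.2 series (the article does not state the exact version):
# /etc/yum.repos.d/mongodb-org-3.2.repo, then: yum -y install mongodb-org
[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.2.asc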
3. Modify the configuration file
vim /etc/mongod.conf
# dbPath: /var/lib/mongo
dbPath: /mongodb/data
replication:            # enable this section
  replSetName: testrs   # define a name for the replica set
Then synchronize the file to the other two nodes.
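A minimal way to synchronize it, assuming root ssh access to the other two nodes:
scp /etc/mongod.conf 192.168.1.11:/etc/mongod.conf
scp /etc/mongod.conf 192.168.1.112:/etc/mongod.conf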
4. Start the mongod service
service mongod start
Pick 192.168.1.155:27017 as the node to log in to:
mongo
help → rs.help() → rs.initiate()    (initialize the replica set with this command)
An error appeared: the hostname could not be mapped to the node. A Google search turned up the cause: bindIp in the configuration file is bound to 127.0.0.1, but /etc/hosts did not associate 127.0.0.1 with the hostname RS1. The fix is to append the hostname to that line:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 RS1
Then re-run rs.initiate(), and the problem is solved.
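As an aside, the hostname mapping issue can also be sidestepped by handing rs.initiate() an explicit configuration that names the first member by IP; a sketch, not what was done here:
rs.initiate({
    _id: "testrs",
    members: [
        { _id: 0, host: "192.168.1.155:27017" }
    ]
})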
5. Since the service is already running, continue on 192.168.1.155 and add the other hosts.
rs.add('192.168.1.11:27017') → this reported an error: the other node could not be connected to. So try logging in to it from the 155 node:
mongo --host 192.168.1.11 --port 27017
Still no luck. Many articles describe this only vaguely; a group member finally cleared up the doubt: bindIp in the configuration file had not been modified, so mongod was listening only on 127.0.0.1. When configuring three mongodb instances on a single system this does not need changing, but here there are three different systems with different IPs, so it must be modified:
net:
  port: 27017
  bindIp: 192.168.1.155
Modify the other two nodes the same way, binding the machine's own IP instead of 127.0.0.1 (bindIp can also bind multiple IPs, e.g. 192.168.1.155,127.0.0.1). Then add the mappings to /etc/hosts:
192.168.1.155 RS1
192.168.1.11 RS2
192.168.1.112 node1
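Note that a bindIp change only takes effect after mongod is restarted on each node; the new listener can then be confirmed (assuming netstat from net-tools is installed):
service mongod restart
netstat -tnlp | grep 27017    # should now show 192.168.1.155:27017, not 127.0.0.1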
After completing the above steps, add the node again:
testrs:PRIMARY> rs.add('192.168.1.11:27017')
{ "ok" : 1 }
No error this time, so the add succeeded. Then add the next node:
rs.add('192.168.1.112:27017')
rs.status()    # check the status of the nodes
testrs:PRIMARY> rs.status()
{
    "set" : "testrs",
    "date" : ISODate("2016-07-26T04:21:32.729Z"),
    "myState" : 1,
    "term" : NumberLong(3),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "RS1:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",        <-- the primary (master) node
            "uptime" : 346,
            "optime" : {
                "ts" : Timestamp(1469506890, 1),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("2016-07-26T04:21:30Z"),
            "electionTime" : Timestamp(1469506554, 1),
            "electionDate" : ISODate("2016-07-26T04:15:54Z"),
            "configVersion" : 3,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.1.11:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",      <-- a secondary (slave) node
            "uptime" : 156,
            "optime" : {
                "ts" : Timestamp(1469506735, 1),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("2016-07-26T04:18:55Z"),
            "lastHeartbeat" : ISODate("2016-07-26T04:21:30.870Z"),
            "lastHeartbeatRecv" : ISODate("2016-07-26T04:21:30.974Z"),
            "pingMs" : NumberLong(1),
            "configVersion" : 2
        },
        {
            "_id" : 2,
            "name" : "192.168.1.112:27017",
            "health" : 1,
            "state" : 0,
            "stateStr" : "STARTUP",        <-- just added, still starting up
            "uptime" : 1,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2016-07-26T04:21:31.165Z"),
            "lastHeartbeatRecv" : ISODate("2016-07-26T04:21:32.498Z"),
            "pingMs" : NumberLong(193),
            "configVersion" : -2
        }
    ],
    "ok" : 1
}
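Rather than rereading the whole document while the new node catches up, the member states can be pulled out with a one-liner in the mongo shell; it should show the third node move from STARTUP to SECONDARY:
rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr) })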
The replica set is now complete. Next, verify that the primary and the secondaries actually synchronize.
6. Perform the following operations on the 192.168.1.155:27017 primary node:
testrs:PRIMARY> use huangdb        <-- switch to the database
switched to db huangdb
testrs:PRIMARY> show collections
testcoll
testrs:PRIMARY> db.testcoll.find()
{ "_id" : ObjectId("5792d2a3a4769176f5babaaa"), "Name" : "huang", "Age" : 24, "Gender" : "F" }
testrs:PRIMARY> db.testcoll.insert({Name: "xiaoming", Age: 23, url: "www.baidu.com"})
WriteResult({ "nInserted" : 1 })   <-- one document record added
testrs:PRIMARY> db.testcoll.find()    <-- view the document records
{ "_id" : ObjectId("5792d2a3a4769176f5babaaa"), "Name" : "huang", "Age" : 24, "Gender" : "F" }
{ "_id" : ObjectId("5796e7333f3249e9b0b44ded"), "Name" : "xiaoming", "Age" : 23, "url" : "www.baidu.com" }
Then log in to 192.168.1.11:27017:
mongo --host 192.168.1.11 --port 27017
use huangdb
show collections
This reported an error, because secondaries refuse reads by default. Proceed according to the error prompt:
rs.slaveOk()
Then query again:
db.testcoll.find()
The collection's records are now visible, so master-slave synchronization is OK.
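As a counterpart to rs.slaveOk(), a secondary should still reject writes; attempting one is a quick sanity check. A sketch (the document content below is made up for illustration, and the exact error text varies by version, but the insert should fail with a "not master" write error):
testrs:SECONDARY> db.testcoll.insert({Name: "should_fail"})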
7. Next, observe whether failover can take place between the nodes.
Stop the mongod service on 192.168.1.155:
[root@RS1 ~]# service mongod stop
Stopping mongod:                                           [  OK  ]
Then check the replica set status on 192.168.1.11:
testrs:SECONDARY> rs.status()
{
    "set" : "testrs",
    "date" : ISODate("2016-07-26T05:06:39.686Z"),
    "myState" : 2,
    "term" : NumberLong(4),
    "syncingTo" : "192.168.1.112:27017",
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "RS1:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2016-07-26T05:06:39.641Z"),
            "lastHeartbeatRecv" : ISODate("2016-07-26T05:05:58.844Z"),
            "pingMs" : NumberLong(2),
            "lastHeartbeatMessage" : "Connection refused",
            "configVersion" : -1
        },
        {
            "_id" : 1,
            "name" : "192.168.1.11:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 3019,
            "optime" : {
                "ts" : Timestamp(1469509568, 2),
                "t" : NumberLong(4)
            },
            "optimeDate" : ISODate("2016-07-26T05:06:08Z"),
            "syncingTo" : "192.168.1.112:27017",
            "configVersion" : 3,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "192.168.1.112:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 2708,
            "optime" : {
                "ts" : Timestamp(1469509568, 2),
                "t" : NumberLong(4)
            },
            "optimeDate" : ISODate("2016-07-26T05:06:08Z"),
            "lastHeartbeat" : ISODate("2016-07-26T05:06:38.433Z"),
            "lastHeartbeatRecv" : ISODate("2016-07-26T05:06:38.907Z"),
            "pingMs" : NumberLong(1),
            "electionTime" : Timestamp(1469509568, 1),
            "electionDate" : ISODate("2016-07-26T05:06:08Z"),
            "configVersion" : 3
        }
    ],
    "ok" : 1
}
From the states we can see that 192.168.1.155 is no longer healthy, while 192.168.1.112 has been promoted from secondary to primary: automatic failover succeeded.
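To complete the picture: if mongod is started again on 192.168.1.155, the node should rejoin the set as a secondary rather than reclaiming the primary role. A sketch:
[root@RS1 ~]# service mongod start
[root@RS1 ~]# mongo
testrs:SECONDARY> rs.status()    # RS1:27017 should now report stateStr "SECONDARY"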
Attachment: http://down.51cto.com/data/2367981