The previous article described the process of building a replica set in detail. This article covers automatic failover and adding, removing, and modifying nodes.
Building a MongoDB replica set: http://1413570.blog.51cto.com/1403570/1337619
Listing 1: simulating automatic failover
res1:PRIMARY> rs.conf()
{
    "_id" : "res1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.1.248:27017",
            "priority" : 2
        },
        {
            "_id" : 1,
            "host" : "192.168.1.247:27018",
            "priority" : 0
        },
        {
            "_id" : 2,
            "host" : "192.168.1.250:27019"
        }
    ]
}
We can see that 192.168.1.248 is the primary because its priority is the highest, followed by 192.168.1.250. When 192.168.1.248 goes down, 192.168.1.250 takes over as the primary.
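For reference, this configuration corresponds to the kind of document that would have been passed to rs.initiate() when the set was created; a minimal sketch only, mirroring the rs.conf() output above:
cfg = {
    _id: "res1",
    members: [
        { _id: 0, host: "192.168.1.248:27017", priority: 2 },
        { _id: 1, host: "192.168.1.247:27018", priority: 0 },
        { _id: 2, host: "192.168.1.250:27019" }
    ]
}
rs.initiate(cfg)
Note that a member with priority 0 (192.168.1.247 here) can never be elected primary, which is why 192.168.1.250 is the next candidate.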
Suppose the MongoDB process on host 192.168.1.248 is stopped:
ps -ef | grep mongodb
kill 8665
Try not to use kill -9, which may corrupt the mongo data files.
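A cleaner alternative to killing the process (not used in this demo) is to shut the mongod down from its own shell, for example:
use admin
db.shutdownServer()    // cleanly stops the mongod instance this shell is connected to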
Now the logs on the other two servers show:
Fri Dec 6 16:36:10.522 [rsHealthPoll] couldn't connect to 192.168.1.248:27017: couldn't connect to server 192.168.1.248:27017
Then 192.168.1.250 became the primary:
Fri Dec 6 16:36:40.707 [conn248] end connection 192.168.1.250
Fri Dec 6 16:36:40.708 [initandlisten] connection accepted from 192.168.1.250:46592 #249
Fri Dec 6 16:36:40.710 [conn249] authenticate db: local { authenticate: 1, nonce: "f70f5a8aea558178", user: "__system", key: "19fb73382ae940816c685b2561b0a76e" }
Now log in through the mongodb shell:
[root@anenjoy]# /usr/local/mongodb/bin/mongo --port 27019
MongoDB shell version: 2.4.8
connecting to: 127.0.0.1:27019/test
res1:PRIMARY>
The prompt shows PRIMARY.
Then run rs.status():
res1:PRIMARY> rs.status()
{
    "set" : "res1",
    "date" : ISODate("2013-12-06T08:44:01Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.1.248:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : Timestamp(1386118280, 1),
            "optimeDate" : ISODate("2013-12-04T00:51:20Z"),
            "lastHeartbeat" : ISODate("2013-12-06T08:44:00Z"),
            "lastHeartbeatRecv" : ISODate("2013-12-06T08:41:32Z"),
            "pingMs" : 0
        },
        {
            "_id" : 1,
            "name" : "192.168.1.247:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 3790,
            "optime" : Timestamp(1386118280, 1),
            "optimeDate" : ISODate("2013-12-04T00:51:20Z"),
            "lastHeartbeat" : ISODate("2013-12-06T08:44:00Z"),
            "lastHeartbeatRecv" : ISODate("2013-12-06T08:44:01Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.1.250:27019"
        },
        {
            "_id" : 2,
            "name" : "192.168.1.250:27019",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 4958,
            "optime" : Timestamp(1386118280, 1),
            "optimeDate" : ISODate("2013-12-04T00:51:20Z"),
            "self" : true
        }
    ],
    "ok" : 1
}
res1:PRIMARY>
You can see that the member 192.168.1.248 is unhealthy, and the logs of the other two servers keep reporting that they cannot connect to 192.168.1.248:27017.
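Besides rs.status(), any member can be asked which node it currently considers the primary, for example:
db.isMaster().ismaster    // true when connected to the current primary
db.isMaster().primary     // "host:port" of the current primary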
When the mongod process on host 192.168.1.248 is started again, it automatically switches back to being the primary:
Fri Dec 6 16:48:35.325 [conn246] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.1.247:27047]
Fri Dec 6 16:48:35.388 [rsHealthPoll] replSet member 192.168.1.248:27017 is now in state PRIMARY
[root@test02 bin]# /usr/local/mongodb/bin/mongo --port 27017
MongoDB shell version: 2.4.8
connecting to: 127.0.0.1:27017/test
res1:PRIMARY>
While host 192.168.1.248 was down, host 192.168.1.250 acted as the primary and data was written to it:
res1:PRIMARY> db.appstore.save({'e_name':'xiaowang','e_id':1103,'class_id':2})
res1:PRIMARY> db.appstore.find()
{ "_id" : ObjectId("529e7c88d4d317e4bd3eece9"), "e_name" : "frank", "e_id" : 1101, "class_id" : 1 }
{ "_id" : ObjectId("52a18f3bd36b29b9c78be267"), "e_name" : "xiaowang", "e_id" : 1103, "class_id" : 2 }
Afterwards, when host 192.168.1.248 becomes the primary again, the newly added data is synchronized to it as well, similar to MySQL master-slave replication.
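If you want to see how far behind each secondary is, one option is to run the following on the primary:
rs.printSlaveReplicationInfo()    // prints each secondary's lag relative to the primary's oplog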
Listing 2: adding, removing and modifying replica set nodes
Now suppose my primary, host 192.168.1.248, is down and I want to remove this node.
First run ps -aux | grep mongodb, then kill the process.
Host 192.168.1.250 has now been elected as the primary.
[root@anenjoy]# /usr/local/mongodb/bin/mongo --port 27019
MongoDB shell version: 2.4.8
connecting to: 127.0.0.1:27019/test
res1:PRIMARY>
View the node configuration with rs.conf():
res1:PRIMARY> rs.conf()
{
    "_id" : "res1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.1.248:27017",
            "priority" : 2
        },
        {
            "_id" : 1,
            "host" : "192.168.1.247:27018",
            "priority" : 0
        },
        {
            "_id" : 2,
            "host" : "192.168.1.250:27019"
        }
    ]
}
res1:PRIMARY> rs.remove('192.168.1.248:27017')
Fri Dec 6 16:59:01.480 DBClientCursor::init call() failed
Fri Dec 6 16:59:01.482 Error: error doing query: failed at src/mongo/shell/query.js:78
Fri Dec 6 16:59:01.482 trying reconnect to 127.0.0.1:27019
Fri Dec 6 16:59:01.482 reconnect 127.0.0.1:27019 ok
Check again; the node has been removed:
res1:PRIMARY> rs.conf()
{
    "_id" : "res1",
    "version" : 2,
    "members" : [
        {
            "_id" : 1,
            "host" : "192.168.1.247:27018",
            "priority" : 0
        },
        {
            "_id" : 2,
            "host" : "192.168.1.250:27019"
        }
    ]
}
The log also stops printing [rsHealthPoll] couldn't connect to 192.168.1.248:27017: couldn't connect to server 192.168.1.248:27017.
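For reference, the same removal could also be done with the cfg/reconfig pattern used later in this article; a minimal sketch:
cfg = rs.conf()
cfg.members = cfg.members.filter(function (m) { return m.host != "192.168.1.248:27017" })    // drop the dead member
rs.reconfig(cfg)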
Adding nodes:
Adding a node purely through the oplog is simple and needs little manual work, but the oplog is a capped collection and gets overwritten, so a node added this way may end up inconsistent, because the records it needs may already have been rotated out of the log.
Instead, you can add nodes by combining a database snapshot (--fastsync) with the oplog. The general procedure is:
Use the physical files of an existing replica set member as the initial data, and then apply the remaining oplog entries on top of them to reach a consistent state.
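Whether a snapshot is still "recent enough" depends on how far back the oplog reaches; this can be checked on the primary, for example:
db.printReplicationInfo()    // prints the configured oplog size and the time window it currently covers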
The preparation steps are the same as before:
Create the DB storage directory and the key file, and set the key file permissions to 600.
Step 1: configure the storage path (the --dbpath parameter).
Everything is placed under /data/mon_db, and the directory is owned by the mongodb user:
mkdir -p /data/mon_db
chown -R mongodb:mongodb /data/mon_db/
Create the log file (the --logpath parameter); the location is up to you.
Here it is placed under /usr/local/mongodb/log:
mkdir -p /usr/local/mongodb/log
touch /usr/local/mongodb/log/mongodb.log
chown -R mongodb:mongodb /usr/local/mongodb/
Step 2: create the key file used for replica set authentication; --keyFile takes the full path of the cluster's private key. If the key file contents are not identical on every instance, the replica set will not work properly.
[root@test02 ~]# mkdir -p /data/mon_db/key
[root@test02 ~]# echo "this is res key" > /data/mon_db/key/res1
chmod -R 600 /data/mon_db/key/res1
The permissions must be 600, otherwise an error is reported:
Wed Dec 4 06:22:36.413 permissions on /data/mon_db/key/res1 are too open
The file can simply be given a different name on the new node, as long as the content is the same.
Suppose we copy the physical data files from host 192.168.1.247:
scp -r /data/mongodb/res2/ root@ip:/data/mon_db/res4
After that, you can insert some new data on the primary (used later to verify synchronization).
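For example, a test document could be inserted on the primary so synchronization can be checked later (collection and values are only illustrative):
db.appstore.save({ e_name: "test_sync", e_id: 9999, class_id: 3 })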
Start mongodb:
/usr/local/mongodb/bin/mongod --port 27020 --replSet res1 --keyFile /data/mon_db/key/res4 --oplogSize 100 --dbpath=/data/mon_db/res4/ --logpath=/usr/local/mongodb/log/mongodb.log --logappend --fastsync --fork
After that, add the node on the primary:
rs.add('192.168.1.x:27020')
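As a side note, rs.add() also accepts a full member document if you want to set attributes such as priority at add time; a sketch (the _id here is illustrative and must not clash with existing members):
rs.add({ _id: 3, host: '192.168.1.x:27020', priority: 0 })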
Then, on the newly added node, log in to mongodb, enable reads on the secondary, and check whether the data has been synchronized successfully.
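A minimal check on the new node could look like this (run in that node's shell; reads on a secondary have to be enabled first):
rs.slaveOk()          // allow reads on this secondary for the current connection
db.appstore.find()    // the documents written on the primary should show up here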
Modifying nodes:
A node change is really nothing more than changing a node's host, port or priority. Here is a brief description of how to do it.
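For example, changing a member's address (host or port) follows the same cfg/reconfig pattern as the priority change shown below; a sketch with a purely hypothetical new port:
cfg = rs.conf()
cfg.members[1].host = "192.168.1.247:27028"    // hypothetical new port for this member
rs.reconfig(cfg)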
Currently my replica set has three nodes:
/usr/local/mongodb/bin/mongo --port 27019
rs.status()
{
    "set" : "res1",
    "date" : ISODate("2013-12-06T11:56:42Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.1.247:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 10661,
            "optime" : Timestamp(1386330980, 1),
            "optimeDate" : ISODate("2013-12-06T11:56:20Z"),
            "lastHeartbeat" : ISODate("2013-12-06T11:56:42Z"),
            "lastHeartbeatRecv" : ISODate("2013-12-06T11:56:40Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.1.250:27019"
        },
        {
            "_id" : 2,
            "name" : "192.168.1.250:27019",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 16519,
            "optime" : Timestamp(1386330980, 1),
            "optimeDate" : ISODate("2013-12-06T11:56:20Z"),
            "self" : true
        },
        {
            "_id" : 3,
            "name" : "192.168.1.248:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 22,
            "optime" : Timestamp(1386330980, 1),
            "optimeDate" : ISODate("2013-12-06T11:56:20Z"),
            "lastHeartbeat" : ISODate("2013-12-06T11:56:42Z"),
            "lastHeartbeatRecv" : ISODate("2013-12-06T11:56:41Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "syncing to: 192.168.1.250:27019",
            "syncingTo" : "192.168.1.250:27019"
        }
    ],
    "ok" : 1
}
I want to change a node's priority. Right now host 192.168.1.250 is the primary; I want host 192.168.1.248 to become the primary, so I only need to give it a priority higher than the current primary's, for example 3.
res1:PRIMARY> cfg=rs.conf()
{
    "_id" : "res1",
    "version" : 3,
    "members" : [
        {
            "_id" : 1,
            "host" : "192.168.1.247:27018",
            "priority" : 0
        },
        {
            "_id" : 2,
            "host" : "192.168.1.250:27019"
        },
        {
            "_id" : 3,
            "host" : "192.168.1.248:27017"
        }
    ]
}
res1:PRIMARY> cfg.members[2].priority = 3
res1:PRIMARY> rs.reconfig(cfg)
rs.reconfig() is similar to re-initializing the configuration.
Fri Dec 6 20:00:29.788 DBClientCursor::init call() failed
Fri Dec 6 20:00:29.792 trying reconnect to 127.0.0.1:27019
Fri Dec 6 20:00:29.793 reconnect 127.0.0.1:27019 ok
The shell reconnects to the server after the reconfig command (this is normal).
Press Enter a couple more times and you will see that the node that was previously the primary has become a secondary, and host 192.168.1.248 is now the primary.
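If the goal is only a temporary primary switch rather than a permanent priority change, rs.stepDown() on the current primary is another option, for example:
rs.stepDown(60)    // the current primary steps down and will not seek re-election for 60 seconds; the shell connection may drop briefly, which is normal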