This article is divided into two parts: the first covers the installation and configuration of a MongoDB 3.2.1 sharded deployment, and the second covers failure simulation and verification.
Part I: Installation and configuration
I. Experimental environment
Two replica sets serve as the shards; MongoDB version 3.2.1.
Replica set 1 (hnrtest1): 192.168.115.11:27017, 192.168.115.12:27017, 192.168.115.11:47017 (arbiter)
Replica set 2 (hnrtest2): 192.168.115.11:37017, 192.168.115.12:37017, 192.168.115.12:47017 (arbiter)
Config server: 192.168.115.11:10000, 192.168.115.11:10001, 192.168.115.12:10000
Mongos: 192.168.115.11:20000
II. Introduction to sharding
1. Logical diagram
Shard: each shard is a replica set.
Config server: stores the cluster's configuration information. Version 3.2 and above supports deploying it as a replica set.
Routing process (mongos): routes all requests to the shards and aggregates the results. It stores no data or configuration information itself; the configuration is loaded into memory from the config servers.
Deploying the config server in replica set mode
I. Deployment requirements
1. The replica set cannot contain an arbiter.
2. The replica set cannot contain delayed members.
3. Every member must be able to build indexes.
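These constraints can be checked from the mongo shell once the replica set is running; a minimal sketch, using standard rs.conf() member fields:
> var cfg = rs.conf()
> cfg.members.forEach(function (m) {
...     if (m.arbiterOnly) print(m.host + " is an arbiter")
...     if (m.slaveDelay > 0) print(m.host + " is delayed")
...     if (m.buildIndexes === false) print(m.host + " does not build indexes")
... })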
II. Config server installation and configuration
1. Modify the configuration file (the configuration files for the other two nodes are similar; mainly change the listening port and the data path, and note that when multiple instances run on one machine the file names must differ; see the sketch after the example below):
cat config.conf
fork = true
quiet = true
port = 10000
dbpath = /data/config
logpath = /usr/local/mongodb/logs/config.log
logappend = true
directoryperdb = true
configsvr = true
replSet = hnrconfig/192.168.115.11:10000,192.168.115.11:10001,192.168.115.12:10000
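For reference, a plausible configuration for the second instance on the same machine (port 10001; the /data/config1 path is taken from the shutdown commands in Part II, while the log file name is an assumption):
cat config1.conf
fork = true
quiet = true
port = 10001
dbpath = /data/config1
logpath = /usr/local/mongodb/logs/config1.log   # assumed log file name
logappend = true
directoryperdb = true
configsvr = true
replSet = hnrconfig/192.168.115.11:10000,192.168.115.11:10001,192.168.115.12:10000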
2. Starting and stopping the service
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/etc/config.conf
/usr/local/mongodb/bin/mongod --shutdown --port 10000 --dbpath=/data/config
3. Configure the replica set
Connect to any node to perform the configuration:
> show dbs
2016-11-17T09:06:08.088+0800 E QUERY [thread1] Error: listDatabases failed: {"ok": 0, "errmsg": "not master and slaveOk=false", "code": 13435}:
_getErrorWithCode@src/mongo/shell/utils.js:23:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:53:1
shellHelper.show@src/mongo/shell/utils.js:700:19
shellHelper@src/mongo/shell/utils.js:594:15
@(shellhelp2):1:1
If the above error occurs, run the following first:
> rs.slaveOk()
> use admin
> db.runCommand({"replSetInitiate": {"_id": "hnrconfig", "members": [{"_id": 1, "host": "192.168.115.11:10000"}, {"_id": 2, "host": "192.168.115.12:10000"}, {"_id": 3, "host": "192.168.115.11:10001"}]}})
{"ok": 1}
III. Mongos configuration
1. Configuration file
cat mongos.conf
fork = true
quiet = true
port = 20000
logpath = /usr/local/mongodb/logs/mongos.log
logappend = true
configdb = 192.168.115.11:10000,192.168.115.11:10001,192.168.115.12:10000
2. Start the mongos service
/usr/local/mongodb/bin/mongos -f /usr/local/mongodb/etc/mongos.conf
IV. Add shards to the cluster
Connect to mongos:
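The connect command itself is not shown in the original; presumably something like the following, using the mongos address from the experimental environment:
/usr/local/mongodb/bin/mongo 192.168.115.11:20000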
mongos> sh.addShard("hnrtest1/192.168.115.11:27017")
{"shardAdded": "hnrtest1", "ok": 1}
mongos> sh.addShard("hnrtest2/192.168.115.12:37017")
{"shardAdded": "hnrtest2", "ok": 1}
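To confirm that both shards were registered, the shard list can be read back from mongos with the standard listShards admin command; its output should include hnrtest1 and hnrtest2:
mongos> use admin
mongos> db.runCommand({ listShards: 1 })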
V. Enable sharding
1. First enable sharding on the database:
mongos> sh.enableSharding("shardtest")
{"ok": 1}
2. Shard the collection (chunks are created and split automatically):
mongos> sh.shardCollection("shardtest.student", {"cre_id": 1})
{"collectionsharded": "shardtest.student", "ok": 1}
3. Modify the default chunk size (64 MB by default). With the default size the sharding behavior is hard to observe unless a very large amount of data is inserted, so change it to 1 MB:
mongos> use config
mongos> db.settings.save({"_id": "chunksize", "value": NumberLong(1)})
After the change, shard the student2 collection:
mongos> sh.shardCollection("shardtest.student2", {"cre_id": 1})
Insert 50,000 documents (a sketch of the insert loop follows).
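The insert script is not given in the original; a minimal sketch, assuming documents carry an integer cre_id field (the exact document shape is an assumption):
mongos> use shardtest
mongos> for (var i = 1; i <= 50000; i++) { db.student2.insert({ "cre_id": i, "name": "student" + i }) }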
Then query directly on the back-end shard replica sets:
hnrtest2:PRIMARY> db.student2.find().count()
27081
hnrtest1:PRIMARY> db.student2.find().count()
22918
4. Hashed sharding
Restore the chunk size to the default 64 MB:
mongos> db.settings.save({"_id": "chunksize", "value": NumberLong(64)})
Shard the student3 collection with a hashed key on the cre_id field:
mongos> sh.shardCollection("shardtest.student3", {"cre_id": "hashed"})
{"collectionsharded": "shardtest.student3", "ok": 1}
mongos> sh.status()
shardtest.student3
    shard key: {"cre_id": "hashed"}
    unique: false
    balancing: true
    chunks:
        hnrtest1  2
        hnrtest2  2
    {"cre_id": {"$minKey": 1}} -->> {"cre_id": NumberLong("-4611686018427387902")} on: hnrtest1 Timestamp(2, 2)
    {"cre_id": NumberLong("-4611686018427387902")} -->> {"cre_id": NumberLong(0)} on: hnrtest1 Timestamp(2, 3)
    {"cre_id": NumberLong(0)} -->> {"cre_id": NumberLong("4611686018427387902")} on: hnrtest2 Timestamp(2, 4)
    {"cre_id": NumberLong("4611686018427387902")} -->> {"cre_id": {"$maxKey": 1}} on: hnrtest2 Timestamp(2, 5)
Insert 10,000 documents into student3 (using the same kind of insert loop as before) and query each shard:
hnrtest1:PRIMARY> db.student3.find().count()
4952
hnrtest2:PRIMARY> db.student3.find().count()
5047
Part II: Failure simulation and verification
I. Simulate failure of the primary node in the config server replica set
1. Shut down the service:
/usr/local/mongodb/bin/mongod --shutdown --port 10000 --dbpath=/data/config
2. The replica set elects a new primary.
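A quick way to verify the new primary from either surviving config node, using standard rs.status() fields:
> rs.status().members.forEach(function (m) { print(m.name + " : " + m.stateStr) })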
3. Read the data; all queries return normally:
mongos> use shardtest
switched to db shardtest
mongos> db.student.find().count()
99999
mongos> db.student2.find().count()
49999
mongos> db.student3.find().count()
9999
4. Shard a new collection and insert 5,000 documents (again with the same kind of insert loop):
mongos> sh.shardCollection("shardtest.student4", {"cre_id": "hashed"})
{"collectionsharded": "shardtest.student4", "ok": 1}
Query the data on each shard:
hnrtest2:PRIMARY> db.student4.find().count()
2525
hnrtest1:PRIMARY> db.student4.find().count()
2474
II. Backup and recovery of config server data
1. Back up the data:
/usr/local/mongodb/bin/mongodump -h 192.168.115.11:10001 -o configdata
2. Shut down all config server nodes:
/usr/local/mongodb/bin/mongod --shutdown --port 10000 --dbpath=/data/config
/usr/local/mongodb/bin/mongod --shutdown --port 10001 --dbpath=/data/config1
3. Reading data
Because mongos has loaded all of the config servers' metadata into memory, queries through mongos still succeed at this point, but new collections can no longer be sharded:
mongos> db.student.find().count()
99999
mongos> db.student2.find().count()
49999
mongos> db.student3.find().count()
9999
mongos> db.student4.find().count()
4999
4. Sharding a new collection fails:
mongos> sh.shardCollection("shardtest.student5", {"cre_id": "hashed"})
{
    "ok": 0,
    "errmsg": "None of the hosts for replica set hnrconfig could be contacted.",
    "code": 71
}
5. Stop the mongos service and delete all data on the config server nodes.
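The exact commands are not given in the original; a hypothetical cleanup, assuming the data paths used above:
kill $(pidof mongos)                     # assumption: a single mongos on this host
rm -rf /data/config/* /data/config1/*    # on 192.168.115.11
rm -rf /data/config/*                    # on 192.168.115.12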
6. Restart the three config server instances.
7. Reinitialize the replica set:
> rs.slaveOk()
> use admin
> db.runCommand({"replSetInitiate": {"_id": "hnrconfig", "members": [{"_id": 1, "host": "192.168.115.11:10000"}, {"_id": 2, "host": "192.168.115.12:10000"}, {"_id": 3, "host": "192.168.115.11:10001"}]}})
8. Start the mongos service; at this point the config servers hold no data.
9. Restore the backed-up config data:
/usr/local/mongodb/bin/mongorestore -h 192.168.115.11:10000 -d config configdata/config/
Queries through mongos now time out; the data cannot be retrieved.
10. Execute the following command on mongos:
mongos> sh.enableSharding("shardtest")
{"ok": 0, "errmsg": "Operation timed out", "code": 50}
The mongos log shows:
2016-11-17T14:46:21.197+0800 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "node1.hnr.com-2016-11-17T14:46:21.197+0800-582d523ded1c4b679a84877b", server: "node1.hnr.com", clientAddr: "", time: new Date(1479365181197), what: "balancer.round", ns: "", details: { executionTimeMillis: 30007, errorOccured: true, errmsg: "could not get updated shard list from config server due to ExceededTimeLimit Operation timed out" } }
This turned out to be a known MongoDB bug, and the recovery failed:
https://jira.mongodb.org/browse/SERVER-22392