There are 2 servers in total; each runs one router (mongos), one config server, and one shard. The MongoDB version is 3.4.
Server 1: 10.2.4.214
Server 2: 10.2.4.215
1. First, write the configuration files on both servers and create the directory paths referenced in them.
Router configuration file mongodb_rout.conf:
net:
  port: 5000
  ipv6: true
systemLog:
  destination: file
  path: "/data/mongodb/formal_5000/log/route.log"
  logAppend: true
processManagement:
  fork: true
sharding:
  configDB: rsConf/10.2.4.214:5100,10.2.4.215:5100
...
Config server configuration file mongodb_cfg.conf for 214:
net:
  port: 5100
  ipv6: true
storage:
  dbPath: "/data/mongodb/formal_5000/data/config"
  directoryPerDB: true
  journal:
    enabled: true
systemLog:
  destination: file
  path: "/data/mongodb/formal_5000/log/config.log"
  logAppend: true
processManagement:
  fork: true
sharding:
  clusterRole: configsvr
replication:
  replSetName: rsConf
...
Shard configuration file mongodb_s0.conf for 214:
storage:
  dbPath: "/data/mongodb/formal_5000/data/s0"
  directoryPerDB: true
  journal:
    enabled: true
systemLog:
  destination: file
  path: "/data/mongodb/formal_5000/log/s0.log"
  logAppend: true
net:
  port: 5010
  ipv6: true
processManagement:
  fork: true
replication:
  replSetName: rsShardA
sharding:
  clusterRole: shardsvr
...
Router configuration file mongodb_rout.conf for 215:
net:
  port: 5000
  ipv6: true
systemLog:
  destination: file
  path: "/data/mongodb/formal_5000/log/route.log"
  logAppend: true
processManagement:
  fork: true
sharding:
  configDB: rsConf/10.2.4.214:5100,10.2.4.215:5100
...
Config server configuration file mongodb_cfg.conf for 215:
net:
  port: 5100
  ipv6: true
storage:
  dbPath: "/data/mongodb/formal_5000/data/config"
  directoryPerDB: true
  journal:
    enabled: true
systemLog:
  destination: file
  path: "/data/mongodb/formal_5000/log/config.log"
  logAppend: true
processManagement:
  fork: true
sharding:
  clusterRole: configsvr
replication:
  replSetName: rsConf
...
Shard configuration file mongodb_s0.conf for 215:
storage:
  dbPath: "/data/mongodb/formal_5000/data/s0"
  directoryPerDB: true
  journal:
    enabled: true
systemLog:
  destination: file
  path: "/data/mongodb/formal_5000/log/s0.log"
  logAppend: true
net:
  port: 5010
  ipv6: true
processManagement:
  fork: true
replication:
  replSetName: rsShardB
sharding:
  clusterRole: shardsvr
...
Run the following commands on both servers to create the directories; otherwise mongod will not start:
mkdir -p /data/mongodb/formal_5000/log/
mkdir -p /data/mongodb/formal_5000/data/config/
mkdir -p /data/mongodb/formal_5000/data/s0
2. Start the two config servers and configure them as a replica set
# Start the config server on both servers
numactl --interleave=all mongod -f /etc/mongodb/formal_5000/mongodb_cfg.conf
# Connect to one of the config servers and configure the replica set
mongo --port 5100
config = {_id: "rsConf", members: [{_id: 0, host: "10.2.4.214:5100"}, {_id: 1, host: "10.2.4.215:5100"}]}
rs.initiate(config)
# Check whether the replica set was configured successfully
rs.status()
3. Start the shard and router services on both servers and configure them
# Start the shard service on both servers
numactl --interleave=all mongod -f /etc/mongodb/formal_5000/mongodb_s0.conf
# Start the routing service on both servers
numactl --interleave=all mongos -f /etc/mongodb/formal_5000/mongodb_rout.conf
# Connect to the shard on 214 and initiate it as a replica set (it becomes the primary)
mongo --port 5010
config = {_id: "rsShardA", members: [{_id: 0, host: "10.2.4.214:5010"}]}
rs.initiate(config)
# Connect to the shard on 215 and initiate it as a replica set (it becomes the primary)
mongo --port 5010
config = {_id: "rsShardB", members: [{_id: 0, host: "10.2.4.215:5010"}]}
rs.initiate(config)
# Connect to a router and add the two shards
mongo --port 5000
use admin
db.runCommand({addshard: "rsShardA/10.2.4.214:5010", name: "shard_0", maxSize: 0})
db.runCommand({addshard: "rsShardB/10.2.4.215:5010", name: "shard_1", maxSize: 0})
# View the sharding configuration
sh.status()
4. Enable sharding for the monitor_center database
use monitor_center
use admin
db.runCommand({enablesharding: "monitor_center"})
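To double-check that monitor_center is now flagged for sharding, one option (not part of the original steps) is to look at the config database from the router; in MongoDB 3.4 a sharding-enabled database shows partitioned: true there:
use config
db.databases.find()    # the monitor_center document should show "partitioned" : true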
5. Use Studio 3T to connect to the router, config servers, and shards, and create users (omitted from this tutorial)
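Since the original skips the user-creation step, the following is only a rough sketch of creating an administrative user from the mongo shell through the router; the user name, password, and role are placeholders, not values from the article:
mongo --port 5000
use admin
db.createUser({
  user: "admin",                           // placeholder user name
  pwd: "change_me",                        // placeholder password
  roles: [{role: "root", db: "admin"}]     // full administrative role
})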
6. Shut down the cluster
# Shut down the routers first: connect to the router on each of the two servers
mongo --port 5000
use admin
db.shutdownServer()
# Then shut down the shards: connect to the shard on each of the two servers
mongo --port 5010
use admin
db.shutdownServer({force: true})
# Finally shut down the config servers: connect to the config server on each of the two servers
mongo --port 5100
use admin
db.shutdownServer()
7. Add a keyFile setting at the end of the router, config server, and shard configuration files on both servers, as follows:
Router configuration file mongodb_rout.conf:
net:
  port: 5000
  ipv6: true
systemLog:
  destination: file
  path: "/data/mongodb/formal_5000/log/route.log"
  logAppend: true
processManagement:
  fork: true
sharding:
  configDB: rsConf/10.2.4.214:5100,10.2.4.215:5100
security:
  keyFile: "/data/mongodb/formal_5000/key/mongodb_key"
The other configuration files are similar; each needs the keyFile setting added.
# Generate the keyFile
mkdir -p /data/mongodb/formal_5000/key/
cd /data/mongodb/formal_5000/key
echo -e "formal mongodb keyFile" > mongodb_key
chmod 600 /data/mongodb/formal_5000/key/mongodb_key
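The echo above writes a fixed string as the key; a common alternative (not from the original article) is to fill the keyFile with random base64 content instead, for example:
openssl rand -base64 756 > /data/mongodb/formal_5000/key/mongodb_key
chmod 600 /data/mongodb/formal_5000/key/mongodb_key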
8. Finally, start the config servers, shards, and routers, in that order.
numactl --interleave=all mongod -f /etc/mongodb/formal_5000/mongodb_cfg.conf
numactl --interleave=all mongod -f /etc/mongodb/formal_5000/mongodb_s0.conf
numactl --interleave=all mongos -f /etc/mongodb/formal_5000/mongodb_rout.conf
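Once the keyFile is in place, authentication is enforced, so later shell connections to the router need credentials; a minimal sketch, assuming an admin user such as the placeholder one from step 5:
mongo --port 5000 -u admin -p --authenticationDatabase admin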
PS:
1. Balancer
sh.setBalancerState(true)     # start the balancer
sh.setBalancerState(false)    # stop the balancer
sh.getBalancerState()         # view the balancer state
sh.isBalancerRunning()        # check whether the balancer is currently running
# Set the balancer's active time window; first make sure the balancer is enabled
use config                    # the balancer settings live in the config database's settings collection
db.settings.update({_id: "balancer"}, {$set: {activeWindow: {start: "21:00", stop: "08:00"}}}, {upsert: true})
# Cancel the balancer's active time window
db.settings.update({_id: "balancer"}, {$unset: {activeWindow: true}})
2. Create a collection (table)
use monitor_center
db.createCollection("origdata_20171024")    # create the collection
db.origdata_20171024.createIndex({mac: 1, time: 1}, {background: true})    # create the index
use admin
db.runCommand({shardcollection: "monitor_center.origdata_20171024", key: {mac: 1, time: 1}})    # shard the collection on the mac + time key
db.runCommand({moveChunk: "monitor_center.origdata_20171024", bounds: [{mac: MinKey, time: MinKey}, {mac: MaxKey, time: MaxKey}], to: "shard_1"})    # manually move the collection's chunk to shard_1
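To see where the collection's data actually resides after the moveChunk, one option (not shown in the original) is the shard distribution helper, run from the router:
use monitor_center
db.origdata_20171024.getShardDistribution()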
3. Migration (balancing) of collections
sh.enableBalancing("monitor_center.origdata_20171024")     # enable balancing for this collection
sh.disableBalancing("monitor_center.origdata_20171024")    # disable balancing for this collection
db.getSiblingDB("config").collections.findOne({_id: "monitor_center.origdata_20171024"}).noBalance    # true means balancing is disabled for this collection