MongoDB fragmentation (Cluster)

2025-01-17 Update From: SLTechnology News&Howtos > Database


Basic environment:

Due to limited resources there are only three virtual machines, so only two replica sets were built. The roles on each machine are as follows:

10.10.1.55: Primary1, ConfigServer1, Arbiter1

10.10.1.56: Primary2, ConfigServer2, Arbiter2

10.10.1.57: Secondary1, Secondary2, ConfigServer3, mongos

The configuration files on the 10.10.1.55 machine are as follows:

Configuration file for Primary1:

dbpath=/data/mongodb/rs0_0
logpath=/data/mongodb/log/rs0_0.log
logappend=true
port=40000
bind_ip=192.168.11.55,10.10.1.55
oplogSize=10000
fork=true
journal=true
#noprealloc=true
replSet=rs0
directoryperdb=true

Configuration file for Arbiter1:

dbpath=/data/mongodb/rs0_arbiter
logpath=/data/mongodb/log/rs0_arbiter.log
logappend=true
port=40002
bind_ip=192.168.11.55,10.10.1.55
oplogSize=10000
fork=true
journal=true
#noprealloc=true
replSet=rs0
directoryperdb=true

Configuration file for ConfigServer1:

dbpath=/data/mongodb/rs0_conf
logpath=/data/mongodb/log/rs0_conf.log
logappend=true
port=40006
bind_ip=192.168.11.55,10.10.1.55
fork=true
journal=true
#noprealloc=true
configsvr=true
directoryperdb=true

Each mongod process is started with mongod --config <filename>. After a successful start, netstat on the 10.10.1.55 machine shows the assigned ports: Primary1 on 40000, Arbiter1 on 40002, ConfigServer1 on 40006.

The configuration files on the 10.10.1.56 machine are as follows:

Configuration file for Primary2:

dbpath=/data/mongodb/rs1_primary
logpath=/data/mongodb/log/rs1_p.log
logappend=true
bind_ip=192.168.11.56,10.10.1.56
directoryperdb=true
port=40003
oplogSize=10000
fork=true
journal=true
noprealloc=true
replSet=rs1

Configuration file for Arbiter2:

dbpath=/data/mongodb/rs1_arbiter
logpath=/data/mongodb/log/rs1_a.log
logappend=true
bind_ip=192.168.11.56,10.10.1.56
directoryperdb=true
port=40005
oplogSize=10000
fork=true
journal=true
noprealloc=true
replSet=rs1

Configuration file for ConfigServer2:

dbpath=/data/mongodb/rs1_conf
logpath=/data/mongodb/log/rs1_conf.log
logappend=true
bind_ip=192.168.11.56,10.10.1.56
directoryperdb=true
port=40007
oplogSize=10000
fork=true
journal=true
noprealloc=true
configsvr=true

Again, each mongod process is started with mongod --config <filename>. After a successful start, netstat on the 10.10.1.56 machine shows the assigned ports: Primary2 on 40003, Arbiter2 on 40005, ConfigServer2 on 40007.

The configuration files on the 10.10.1.57 machine are as follows:

Configuration file for rs0_Secondary1:

dbpath=/data/mongodb/rs0_secondary1
logpath=/data/mongodb/log/rs0_secondary1.log
logappend=true
port=40001
bind_ip=192.168.11.57,10.10.1.57
oplogSize=10000
fork=true
journal=true
#noprealloc=true
replSet=rs0
directoryperdb=true

Configuration file for rs1_Secondary1:

dbpath=/data/mongodb/rs1_secondary1
logpath=/data/mongodb/log/rs1_secondary1.log
logappend=true
bind_ip=192.168.11.57,10.10.1.57
directoryperdb=true
port=40004
oplogSize=10000
fork=true
journal=true
noprealloc=true
replSet=rs1

Configuration file for ConfigServer3:

dbpath=/data/mongodb/confSvr3
logpath=/data/mongodb/log/conf3.log
logappend=true
bind_ip=192.168.11.57,10.10.1.57
directoryperdb=true
port=40008
oplogSize=10000
fork=true
journal=true
configsvr=true

Configuration file for mongos (when starting a mongos router, note that the clocks of all servers must be synchronized, otherwise an error will occur):

logpath=/data/mongodb/log/mongos.log
port=40009
configdb=10.10.1.55:40006,10.10.1.56:40007,10.10.1.57:40008
fork=true
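The configdb option is a comma-separated list of all three config servers as host:port pairs. A small sketch of how that string decomposes:

```python
# The mongos "configdb" value from the configuration above, split into
# (host, port) pairs the way the router will address the config servers.
configdb = "10.10.1.55:40006,10.10.1.56:40007,10.10.1.57:40008"

config_servers = []
for entry in configdb.split(","):
    host, port = entry.rsplit(":", 1)  # rsplit keeps the host intact
    config_servers.append((host, int(port)))

print(config_servers)
# [('10.10.1.55', 40006), ('10.10.1.56', 40007), ('10.10.1.57', 40008)]
```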

Each mongod process is started with mongod --config <filename>, and the router with mongos --config <filename>. After a successful start, netstat on the 10.10.1.57 machine shows the assigned ports: rs0_Secondary1 on 40001, rs1_Secondary1 on 40004, ConfigServer3 on 40008, and the mongos router on 40009.

Now log in to Primary1 with the mongo shell to configure replica set rs0:

cfg = { "_id": "rs0", "members": [ { "_id": 0, "host": "10.10.1.55:40000" }, { "_id": 1, "host": "10.10.1.57:40001" } ] }

rs.initiate(cfg)

rs.status()

rs.addArb("10.10.1.55:40002")

Now log in to Primary2 with the mongo shell to configure replica set rs1:

cfg = { "_id": "rs1", "members": [ { "_id": 0, "host": "10.10.1.56:40003" }, { "_id": 1, "host": "10.10.1.57:40004" } ] }

rs.initiate(cfg)

rs.status()

rs.addArb("10.10.1.56:40005")

Use the mongo shell to log in to the mongos router and add the shard information:

mongo --host 10.10.1.57 --port 40009

mongos> sh.addShard("rs0/10.10.1.55:40000,10.10.1.57:40001")
{ "shardAdded": "rs0", "ok": 1 }
mongos> sh.addShard("rs1/10.10.1.56:40003,10.10.1.57:40004")
{ "shardAdded": "rs1", "ok": 1 }
mongos> sh.status()
--- Sharding Status ---
sharding version: { "_id": 1, "version": 4, "minCompatibleVersion": 4, "currentVersion": 5, "clusterId": ObjectId("561c7bdd4315b18f9862adb4") }
shards:
    { "_id": "rs0", "host": "rs0/10.10.1.55:40000,10.10.1.57:40001" }
    { "_id": "rs1", "host": "rs1/10.10.1.56:40003,10.10.1.57:40004" }
databases:
    { "_id": "admin", "partitioned": false, "primary": "config" }
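The argument to sh.addShard() is a seed string of the form "setName/host:port,host:port". A small illustrative parser (not part of MongoDB, just a sketch of the format):

```python
# Decompose an sh.addShard() seed string into the replica-set name and the
# (host, port) members it lists.
def parse_shard_seed(seed):
    """Split "rsName/host:port,host:port" into its name and member list."""
    set_name, host_list = seed.split("/", 1)
    members = []
    for h in host_list.split(","):
        host, port = h.rsplit(":", 1)
        members.append((host, int(port)))
    return set_name, members

name, members = parse_shard_seed("rs0/10.10.1.55:40000,10.10.1.57:40001")
print(name, members)
# rs0 [('10.10.1.55', 40000), ('10.10.1.57', 40001)]
```

Only data-bearing members go in the seed list; the arbiters on ports 40002 and 40005 are deliberately absent from the addShard calls above.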

Now create a new database to test sharding:

mongos> use people

Switched to db people

First insert a batch of test documents into the customers collection with a for loop, then enable sharding on the database, build an index on the shard key, and shard the collection:

mongos> sh.enableSharding("people")
mongos> db.customers.ensureIndex({ country: 1, _id: 1 })
{
    "raw": {
        "rs0/10.10.1.55:40000,10.10.1.57:40001": {
            "createdCollectionAutomatically": false,
            "numIndexesBefore": 1,
            "numIndexesAfter": 2,
            "ok": 1
        }
    },
    "ok": 1
}
mongos> sh.shardCollection("people.customers", { country: 1, _id: 1 })
{ "collectionsharded": "people.customers", "ok": 1 }
mongos> sh.status()
--- Sharding Status ---
sharding version: { "_id": 1, "version": 4, "minCompatibleVersion": 4, "currentVersion": 5, "clusterId": ObjectId("561c7bdd4315b18f9862adb4") }
shards:
    { "_id": "rs0", "host": "rs0/10.10.1.55:40000,10.10.1.57:40001" }
    { "_id": "rs1", "host": "rs1/10.10.1.56:40003,10.10.1.57:40004" }
databases:
    { "_id": "admin", "partitioned": false, "primary": "config" }
    { "_id": "test", "partitioned": false, "primary": "rs0" }
    { "_id": "people", "partitioned": true, "primary": "rs0" }
        people.customers
            shard key: { "country": 1, "_id": 1 }
            chunks:
                rs0  1
                { "country": { "$minKey": 1 }, "_id": { "$minKey": 1 } } -->> { "country": { "$maxKey": 1 }, "_id": { "$maxKey": 1 } } on: rs0 Timestamp(1, 0)

At this point people.customers has only a single chunk, on rs0. Increase the amount of data so the chunk is split and the balancer distributes the pieces:

After inserting more documents with another for loop, sh.status() shows the chunk ranges:

chunks:
    rs0  1
    rs1  2
    { "country": { "$minKey": 1 }, "_id": { "$minKey": 1 } } -->> { "country": "American", "_id": ObjectId("561c7da73af7c7865defefb1") } on: rs1 Timestamp(2, 0)
    { "country": "American", "_id": ObjectId("561c7da73af7c7865defefb1") } -->> { "country": "UK", "_id": ObjectId("561c7db63af7c7865defefd4") } on: rs0 Timestamp(3, 1)
    { "country": "UK", "_id": ObjectId("561c7db63af7c7865defefd4") } -->> { "country": { "$maxKey": 1 }, "_id": { "$maxKey": 1 } } on: rs1 Timestamp(3, 0)

There is now one chunk on rs0 and two chunks on rs1.
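Range-based sharding routes each document by comparing its shard-key tuple against chunk boundaries like the ones above. A minimal Python simulation of that lookup (not MongoDB internals; ObjectIds are shortened to plain hex strings, and MINKEY/MAXKEY are crude stand-ins for BSON $minKey/$maxKey):

```python
# Sentinel tuples that sort below / above any real (country, _id) key.
MINKEY = (chr(0), chr(0))
MAXKEY = (chr(0x10FFFF), chr(0x10FFFF))

# Chunk table from the sh.status() output above: [lo, hi) -> owning shard.
chunks = [
    (MINKEY, ("American", "561c7da73af7c7865defefb1"), "rs1"),
    (("American", "561c7da73af7c7865defefb1"),
     ("UK", "561c7db63af7c7865defefd4"), "rs0"),
    (("UK", "561c7db63af7c7865defefd4"), MAXKEY, "rs1"),
]

def route(country, _id):
    """Return the shard whose chunk range contains the (country, _id) key."""
    key = (country, _id)
    for lo, hi, shard in chunks:
        if lo <= key < hi:  # lower bound inclusive, upper bound exclusive
            return shard
    raise ValueError("no chunk covers key %r" % (key,))

print(route("China", "561c7db63af7c7865defefff"))  # prints "rs0"
```

Because the split points fall inside the "American" and "UK" key ranges, documents for a single country can straddle two shards; the compound key (country, _id) is what makes such mid-country splits possible.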
