

A Case of Building a High-Availability Cluster with MongoDB

2025-02-22 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article shares a worked example of building a high-availability sharded cluster with MongoDB. Most readers may not be very familiar with the topic, so it is offered here for reference; I hope you learn a lot from it. Let's get started!

First, plan the ports and IP addresses

The architecture is as follows: taking any one shard member (excluding arbiter nodes) from each replica set yields a complete copy of the data.

1. First replica set rs1

share1 10.0.0.7:30011:/data/share_rs/share_rs1/share1/data/
share2 10.0.0.7:40011:/data/share_rs/share_rs1/share2/data/
share3 10.0.0.7:50011:/data/share_rs/share_rs1/share3/data/

2. Second replica set rs2

share1 10.0.0.7:30012:/data/share_rs/share_rs2/share1/data/
share2 10.0.0.7:40012:/data/share_rs/share_rs2/share2/data/
share3 10.0.0.7:50012:/data/share_rs/share_rs2/share3/data/

3. The third replica set rs3

share1 10.0.0.7:30013:/data/share_rs/share_rs3/share1/data/
share2 10.0.0.7:40013:/data/share_rs/share_rs3/share2/data/
share3 10.0.0.7:50013:/data/share_rs/share_rs3/share3/data/

4. Config server

config1 10.0.0.7:30002:/data/share_rs/config/config1/data/
config2 10.0.0.7:40002:/data/share_rs/config/config2/data/
config3 10.0.0.7:50002:/data/share_rs/config/config3/data/

(The original listed port 30002 three times; the ports 30002/40002/50002 follow from the configdb setting used below.)

5. Mongos

mongos1 10.0.0.7:30001:/data/share_rs/mongos/mongos1/data/
mongos2 10.0.0.7:40001:/data/share_rs/mongos/mongos2/data/
mongos3 10.0.0.7:50001:/data/share_rs/mongos/mongos3/data/

(The original listed port 30001 three times, which is impossible on one host; 40001 and 50001 follow the port pattern used for the other services.)
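The plan above follows a regular port pattern. A small helper script (hypothetical, not part of the original setup) can print every planned endpoint, which makes it easy to spot a typo before writing any config file:

```shell
#!/bin/sh
# Hypothetical helper: print every planned endpoint from the tables above.
# Port pattern: member N of replica set rsM listens on N0000 + 10 + M
# (e.g. member 2 of rs3 -> 40013); config servers use N0002, mongos N0001.
for rs in 1 2 3; do
  for member in 1 2 3; do
    port=$(( (member + 2) * 10000 + 10 + rs ))
    echo "rs${rs} share${member} 10.0.0.7:${port}:/data/share_rs/share_rs${rs}/share${member}/data/"
  done
done
for i in 1 2 3; do
  echo "config${i} 10.0.0.7:$(( (i + 2) * 10000 + 2 )):/data/share_rs/config/config${i}/data/"
  echo "mongos${i} 10.0.0.7:$(( (i + 2) * 10000 + 1 )):/data/share_rs/mongos/mongos${i}/data/"
done
```

Running it reproduces the five tables above in one listing, so the plan and the generated configs stay in sync.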

Second, create the corresponding directories

mkdir -p /data/share_rs/{share_rs1,share_rs2,share_rs3}/{share1,share2,share3}/{data,log}
mkdir -p /data/share_rs/mongos/{mongos1,mongos2,mongos3}/{data,log}
mkdir -p /data/share_rs/config/{config1,config2,config3}/{data,log}
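The three mkdir commands above rely on shell brace expansion (bash/zsh). As a sketch, here is a POSIX-portable equivalent that builds the same layout under a scratch directory (an assumption, so it can be tried without touching /data) and counts the resulting data directories:

```shell
#!/bin/sh
# Build the same tree under a temp root instead of /data (assumption:
# any writable temp dir works), using plain loops instead of braces.
ROOT=$(mktemp -d)
for rs in share_rs1 share_rs2 share_rs3; do
  for s in share1 share2 share3; do
    mkdir -p "$ROOT/data/share_rs/$rs/$s/data" "$ROOT/data/share_rs/$rs/$s/log"
  done
done
for i in 1 2 3; do
  mkdir -p "$ROOT/data/share_rs/mongos/mongos$i/data" "$ROOT/data/share_rs/mongos/mongos$i/log"
  mkdir -p "$ROOT/data/share_rs/config/config$i/data" "$ROOT/data/share_rs/config/config$i/log"
done
# 9 shard + 3 mongos + 3 config = 15 data directories
ndirs=$(find "$ROOT/data/share_rs" -type d -name data | wc -l)
```

Checking the count (15 data directories, with matching log directories) confirms nothing was mistyped before pointing real daemons at the paths.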

3. Configure the configuration files for the config servers and mongos (for the other replicas, adjust the port and IP accordingly)

[mongo@mongo config1]$ cat mongo.conf
dbpath=/data/share_rs/config/config1/data/
logpath=/data/share_rs/config/config1/log/mongo.log
logappend=true
port=30002
fork=true
rest=true
httpinterface=true
configsvr=true

[mongo@mongo mongos1]$ cat mongo.conf
logpath=/data/share_rs/mongos/mongos1/log/mongo.log
logappend=true
port=30001
fork=true
configdb=10.0.0.7:30002,10.0.0.7:40002,10.0.0.7:50002
chunkSize=1
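Since the three config-server files differ only in the instance number and port, they can be generated in a loop rather than edited by hand. A minimal sketch (writing to a temp directory for review, an assumption; the real files would go under /data/share_rs/config/configN/):

```shell
#!/bin/sh
# Sketch: generate the three config-server files from the template above.
# Only the instance number and port change between them.
OUT=$(mktemp -d)
for i in 1 2 3; do
  port=$(( (i + 2) * 10000 + 2 ))        # 30002 / 40002 / 50002
  cat > "$OUT/config$i.conf" <<EOF
dbpath=/data/share_rs/config/config$i/data/
logpath=/data/share_rs/config/config$i/log/mongo.log
logappend=true
port=$port
fork=true
rest=true
httpinterface=true
configsvr=true
EOF
done
```

Note that rest=true and httpinterface=true are kept for fidelity with the original; these options were removed in later MongoDB releases.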

4. Start the config servers and mongos servers on the three replicas in turn

mongod -f /data/share_rs/config/config1/mongo.conf
mongod -f /data/share_rs/config/config2/mongo.conf
mongod -f /data/share_rs/config/config3/mongo.conf
mongos -f /data/share_rs/mongos/mongos1/mongo.conf
mongos -f /data/share_rs/mongos/mongos2/mongo.conf
mongos -f /data/share_rs/mongos/mongos3/mongo.conf
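A start-all wrapper can encode the required order (config servers before mongos, since mongos reads the config servers at startup). This is a sketch, not part of the original article; the DRY_RUN flag is an assumption that lets the command list be checked without a mongod binary installed:

```shell
#!/bin/sh
# Sketch of a start-all wrapper. With DRY_RUN=1 the commands are only
# echoed, so the startup order can be verified without running mongod.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }
# Config servers must be up before mongos, which connects to them on start.
for i in 1 2 3; do
  run mongod -f /data/share_rs/config/config$i/mongo.conf
done
for i in 1 2 3; do
  run mongos -f /data/share_rs/mongos/mongos$i/mongo.conf
done
```

Setting DRY_RUN=0 would execute the same six commands shown in the listing above.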

5. Configure the configuration files for the shard mongod instances (for the other replicas, adjust the port and IP accordingly). Members of the same shard's replica set share the same name, i.e. the replSet value.

One shard of the first replica set:

[mongo@mongo share_rs1]$ cat share1/mongo.conf
dbpath=/data/share_rs/share_rs1/share1/data
logpath=/data/share_rs/share_rs1/share1/log/mongo.log
logappend=true
port=30011
fork=true
rest=true
httpinterface=true
replSet=rs1
shardsvr=true

One shard of the second replica set:

[mongo@mongo share_rs2]$ cat share1/mongo.conf
dbpath=/data/share_rs/share_rs2/share1/data
logpath=/data/share_rs/share_rs2/share1/log/mongo.log
logappend=true
port=30012
fork=true
rest=true
httpinterface=true
replSet=rs2
shardsvr=true

One shard of the third replica set:

[mongo@mongo share_rs3]$ cat share1/mongo.conf
dbpath=/data/share_rs/share_rs3/share1/data
logpath=/data/share_rs/share_rs3/share1/log/mongo.log
logappend=true
port=30013
fork=true
rest=true
httpinterface=true
replSet=rs3
shardsvr=true
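The nine shard config files differ only in replica-set name, path, and port, so they too can be generated in one nested loop. A hedged sketch (again writing to a temp directory for review, an assumption):

```shell
#!/bin/sh
# Sketch: generate all nine shard mongod config files.
# share1/2/3 map to the 30xxx/40xxx/50xxx port ranges; the last digit is
# the replica-set number (e.g. share2 of rs3 -> 40013).
OUT=$(mktemp -d)
for rs in 1 2 3; do
  for s in 1 2 3; do
    port=$(( (s + 2) * 10000 + 10 + rs ))
    mkdir -p "$OUT/share_rs$rs/share$s"
    cat > "$OUT/share_rs$rs/share$s/mongo.conf" <<EOF
dbpath=/data/share_rs/share_rs$rs/share$s/data
logpath=/data/share_rs/share_rs$rs/share$s/log/mongo.log
logappend=true
port=$port
fork=true
replSet=rs$rs
shardsvr=true
EOF
  done
done
```

Every member of the same replica set ends up with the same replSet value, which is the invariant the step above calls out.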

6. Start each shard and its replicas

mongod -f /data/share_rs/share_rs1/share1/mongo.conf
mongod -f /data/share_rs/share_rs1/share2/mongo.conf
mongod -f /data/share_rs/share_rs1/share3/mongo.conf
mongod -f /data/share_rs/share_rs2/share1/mongo.conf
mongod -f /data/share_rs/share_rs2/share2/mongo.conf
mongod -f /data/share_rs/share_rs2/share3/mongo.conf
mongod -f /data/share_rs/share_rs3/share1/mongo.conf
mongod -f /data/share_rs/share_rs3/share2/mongo.conf
mongod -f /data/share_rs/share_rs3/share3/mongo.conf

[mongo@mongo share_rs]$ ps -ef | grep mongo | grep share | grep -v grep
mongo 2480 1 0 12:50 ? 00:00:03 mongod -f /data/share_rs/share_rs1/share1/mongo.conf
mongo 2506 1 0 12:50 ? 00:00:03 mongod -f /data/share_rs/share_rs1/share2/mongo.conf
mongo 2532 1 0 12:50 ? 00:00:02 mongod -f /data/share_rs/share_rs1/share3/mongo.conf
mongo 2558 1 0 12:50 ? 00:00:03 mongod -f /data/share_rs/share_rs2/share1/mongo.conf
mongo 2584 1 0 12:50 ? 00:00:03 mongod -f /data/share_rs/share_rs2/share2/mongo.conf
mongo 2610 1 0 12:50 ? 00:00:02 mongod -f /data/share_rs/share_rs2/share3/mongo.conf
mongo 2636 1 0 12:50 ? 00:00:01 mongod -f /data/share_rs/share_rs3/share1/mongo.conf
mongo 2662 1 0 12:50 ? 00:00:01 mongod -f /data/share_rs/share_rs3/share2/mongo.conf
mongo 2688 1 0 12:50 ? 00:00:01 mongod -f /data/share_rs/share_rs3/share3/mongo.conf
mongo 3469 1 0 13:17 ? 00:00:00 mongod -f /data/share_rs/config/config1/mongo.conf
mongo 3485 1 0 13:17 ? 00:00:00 mongod -f /data/share_rs/config/config2/mongo.conf
mongo 3513 1 0 13:17 ? 00:00:00 mongod -f /data/share_rs/config/config3/mongo.conf
mongo 3535 1 0 13:18 ? 00:00:00 mongos -f /data/share_rs/mongos/mongos1/mongo.conf
mongo 3629 1 0 13:22 ? 00:00:00 mongos -f /data/share_rs/mongos/mongos2/mongo.conf
mongo 3678 1 0 13:22 ? 00:00:00 mongos -f /data/share_rs/mongos/mongos3/mongo.conf

7. Set up the replica sets

1. Log in to a shard of the first replica set and configure the replica set:

mongo 127.0.0.1:30011/admin
config = { _id: "rs1", members: [
  { _id: 0, host: "10.0.0.7:30011" },
  { _id: 1, host: "10.0.0.7:40011" },
  { _id: 2, host: "10.0.0.7:50011", arbiterOnly: true }
]}
--> Note: the _id "rs1" here must match the replica set's name, i.e. the replSet value.
rs.initiate(config)
{ "ok" : 1 }
--> indicates that initialization succeeded.

2. Log in to a shard of the second replica set and configure it:

mongo 127.0.0.1:30012/admin
config = { _id: "rs2", members: [
  { _id: 0, host: "10.0.0.7:30012" },
  { _id: 1, host: "10.0.0.7:40012" },
  { _id: 2, host: "10.0.0.7:50012", arbiterOnly: true }
]}
rs.initiate(config)
{ "ok" : 1 }
--> indicates that initialization succeeded.

3. Log in to a shard of the third replica set and configure it:

mongo 127.0.0.1:30013/admin
config = { _id: "rs3", members: [
  { _id: 0, host: "10.0.0.7:30013" },
  { _id: 1, host: "10.0.0.7:40013" },
  { _id: 2, host: "10.0.0.7:50013", arbiterOnly: true }
]}
rs.initiate(config)
{ "ok" : 1 }
--> indicates that initialization succeeded.
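The three initiation steps differ only in the replica-set number, so the init scripts can be emitted by a loop and then fed to the shell non-interactively (a sketch; the file names and temp directory are illustrative, and each script would be run as e.g. `mongo 127.0.0.1:30011/admin init_rs1.js`):

```shell
#!/bin/sh
# Sketch: emit one rs.initiate() script per replica set. Each replica set
# rsN spans ports 3001N/4001N/5001N, with the 5001N member as arbiter.
OUT=$(mktemp -d)
for rs in 1 2 3; do
  cat > "$OUT/init_rs$rs.js" <<EOF
config = { _id: "rs$rs", members: [
  { _id: 0, host: "10.0.0.7:3001$rs" },
  { _id: 1, host: "10.0.0.7:4001$rs" },
  { _id: 2, host: "10.0.0.7:5001$rs", arbiterOnly: true }
]};
rs.initiate(config);
EOF
done
```

Generating the scripts this way guarantees the _id matches the replSet value for every set, which the note above flags as the common mistake.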

8. At this point the MongoDB config servers, routing servers (mongos), and shard servers are all running. However, sharding does not yet take effect when an application connects to a mongos router: the shards must first be registered through mongos for sharding to become active.

Connect to the first mongos:

mongo 10.0.0.7:30001/admin
db.runCommand({ addshard: "rs1/10.0.0.7:30011,10.0.0.7:40011,10.0.0.7:50011", allowLocal: true });
db.runCommand({ addshard: "rs2/10.0.0.7:30012,10.0.0.7:40012,10.0.0.7:50012" });
db.runCommand({ addshard: "rs3/10.0.0.7:30013,10.0.0.7:40013,10.0.0.7:50013" });

--> each command adds all members of one shard's replica set
--> if a shard is a single server, add it with db.runCommand({ addshard: "<host>[:<port>]" })
--> if a shard is a replica set, use the format db.runCommand({ addshard: "replicaSetName/<host>[:<port>][,<host2>[:<port>],...]" });

mongos> sh.status()
--- Sharding Status ---
sharding version: {
  "_id": 1,
  "minCompatibleVersion": 5,
  "currentVersion": 6,
  "clusterId": ObjectId("57f33f4d35d9c494714adfa7")
}
shards:
  { "_id": "rs1", "host": "rs1/10.0.0.7:30011,10.0.0.7:40011" }
  { "_id": "rs2", "host": "rs2/10.0.0.7:30012,10.0.0.7:40012" }
  { "_id": "rs3", "host": "rs3/10.0.0.7:30013,10.0.0.7:40013" }
active mongoses:
  "3.2.7": 3
balancer:
  Currently enabled: yes
  Currently running: no
  Failed balancer rounds in last 5 attempts: 0
  Migration Results for the last 24 hours:
    No recent migrations
databases:

9. Shard the collection

db.runCommand({ enablesharding: "testcol" });
--> enable sharding for the database testcol
db.runCommand({ shardcollection: "testcol.testdoc", key: { id: 1 } })
--> specify the collection to shard and its shard key
--> insert test data (the for loop "for (var i = 1; i ..." is truncated in the source)
--> view the status of the collection: db.testdoc.stats()

The above is the whole content of "A Case of Building a High-Availability Cluster with MongoDB". Thank you for reading! I hope this share has been helpful; if you would like to learn more, welcome to follow the industry information channel!
