MongoDB 2.6 Deployment: Replica Sets + Sharding


Deployment planning

Operating system: Red Hat Enterprise Linux 6.4, 64-bit

Node            Config             Route            Shard 1                     Shard 2                     Shard 3
Port            28000              27017            27018                       27019                       27020
192.168.1.30    /etc/config.conf   /etc/route.conf  /etc/sd1.conf (Primary)     /etc/sd2.conf (Arbiter)     /etc/sd3.conf (Secondary)
192.168.1.52    /etc/config.conf   /etc/route.conf  /etc/sd1.conf (Secondary)   /etc/sd2.conf (Primary)     /etc/sd3.conf (Arbiter)
192.168.1.108   /etc/config.conf   /etc/route.conf  /etc/sd1.conf (Arbiter)     /etc/sd2.conf (Secondary)   /etc/sd3.conf (Primary)

1. Create the following directories on all three nodes. For a test deployment, make sure roughly 15 GB of free space is left in the / filesystem.

[root@orcl ~]# mkdir -p /var/config
[root@orcl ~]# mkdir -p /var/sd1
[root@orcl ~]# mkdir -p /var/sd2
[root@orcl ~]# mkdir -p /var/sd3
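
A quick way to confirm the free space mentioned above (plain df, available on any RHEL 6 install):

[root@orcl ~]# df -h /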

2. View the configuration files

[root@orcl ~]# cat /etc/config.conf
port=28000
dbpath=/var/config
logpath=/var/config/config.log
logappend=true
fork=true
configsvr=true

[root@orcl ~]# cat /etc/route.conf
port=27017
configdb=192.168.1.30:28000,192.168.1.52:28000,192.168.1.108:28000
logpath=/var/log/mongos.log
logappend=true
fork=true

[root@orcl ~]# cat /etc/sd1.conf
port=27018
dbpath=/var/sd1
logpath=/var/sd1/shard1.log
logappend=true
shardsvr=true
replSet=set1
fork=true

[root@orcl ~]# cat /etc/sd2.conf
port=27019
dbpath=/var/sd2
logpath=/var/sd2/shard2.log
logappend=true
shardsvr=true
replSet=set2
fork=true

[root@orcl ~]# cat /etc/sd3.conf
port=27020
dbpath=/var/sd3
logpath=/var/sd3/shard3.log
logappend=true
shardsvr=true
replSet=set3
fork=true

3. Synchronize time on the three nodes

(Omitted.)
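
The original skips the actual commands; a minimal sketch on RHEL 6 is a one-shot ntpdate against a reachable NTP server (pool.ntp.org is only an example source) run on each of the three nodes, then writing the time back to the hardware clock. For a long-lived cluster the ntpd service would be the better choice.

[root@orcl ~]# ntpdate pool.ntp.org
[root@orcl ~]# hwclock -w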

4. Start the config server on the three nodes

Node 1

[root@orcl ~]# mongod -f /etc/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3472
child process started successfully, parent exiting
[root@orcl ~]# ps -ef | grep mongo
root      3472     1  1 19:15 ?        00:00:01 mongod -f /etc/config.conf
root      3499  2858  0 19:17 pts/0    00:00:00 grep mongo
[root@orcl ~]# netstat -anltp | grep 28000
tcp        0      0 0.0.0.0:28000        0.0.0.0:*        LISTEN      3472/mongod

Node 2

[root@localhost ~]# mongod -f /etc/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 2998
child process started successfully, parent exiting
[root@localhost ~]# ps -ef | grep mongo
root      2998     1  8 19:15 ?        00:00:08 mongod -f /etc/config.conf
root      3014  2546  0 19:17 pts/0    00:00:00 grep mongo
[root@localhost ~]# netstat -anltp | grep 28000
tcp        0      0 0.0.0.0:28000        0.0.0.0:*        LISTEN      2998/mongod

Node 3

[root@db10g ~]# mongod -f /etc/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 4086
child process started successfully, parent exiting
[root@db10g ~]# ps -ef | grep mongo
root      4086     1  2 19:25 ?        00:00:00 mongod -f /etc/config.conf
root      4100  3786  0 19:25 pts/0    00:00:00 grep mongo
[root@db10g ~]# netstat -anltp | grep 28000
tcp        0      0 0.0.0.0:28000        0.0.0.0:*        LISTEN      4086/mongod
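
Beyond checking the process list and the listening port, each config server can be asked to answer a trivial command; if it is healthy, the ping below should report ok: 1. A minimal check using nothing beyond the stock mongo shell:

[root@orcl ~]# mongo --port 28000 --eval "printjson(db.adminCommand({ping: 1}))"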

5. Start the routing server (mongos) on the three nodes

Node 1

[root@orcl ~]# mongos -f /etc/route.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3575
child process started successfully, parent exiting
[root@orcl ~]# netstat -anltp | grep 2701
tcp        0      0 0.0.0.0:27017        0.0.0.0:*        LISTEN      3575/mongos

Node 2

[root@localhost ~]# mongos -f /etc/route.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3057
child process started successfully, parent exiting
[root@localhost ~]# netstat -anltp | grep 2701
tcp        0      0 0.0.0.0:27017        0.0.0.0:*        LISTEN      3057/mongos

Node 3

[root@db10g ~]# mongos -f /etc/route.conf
about to fork child process, waiting until server is ready for connections.
forked process: 4108
child process started successfully, parent exiting
[root@db10g ~]# netstat -anltp | grep 27017
tcp        0      0 0.0.0.0:27017        0.0.0.0:*        LISTEN      4108/mongos
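
The routers are now up but still empty (no shards registered yet). Connecting to any mongos and running sh.status() is a quick sanity check that it can reach the three config servers; at this stage it should simply report an empty shard list:

[root@orcl ~]# mongo --port 27017
mongos> sh.status()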

6. Start the shard servers on the three nodes

mongod -f /etc/sd1.conf
mongod -f /etc/sd2.conf
mongod -f /etc/sd3.conf

Node 1

[root@orcl ~]# ps -ef | grep mongo
root      3472     1  2 19:15 ?        00:02:18 mongod -f /etc/config.conf
root      3575     1  0 19:28 ?        00:00:48 mongos -f /etc/route.conf
root      4135     1  0 20:52 ?        00:00:07 mongod -f /etc/sd1.conf
root      4205     1  0 20:55 ?        00:00:05 mongod -f /etc/sd2.conf
root      4265     1  0 20:58 ?        00:00:04 mongod -f /etc/sd3.conf

Node 2

[root@localhost ~]# ps -ef | grep mongo
root      2998     1  1 19:15 ?        00:02:02 mongod -f /etc/config.conf
root      3057     1  1 19:28 ?        00:01:02 mongos -f /etc/route.conf
root      3277     1  1 20:52 ?        00:00:20 mongod -f /etc/sd1.conf
root      3334     1  6 20:56 ?        00:00:52 mongod -f /etc/sd2.conf
root      3470     1  1 21:01 ?        00:00:07 mongod -f /etc/sd3.conf

Node 3

[root@db10g data]# ps -ef | grep mongo
root      4086     1  1 19:25 ?        00:01:58 mongod -f /etc/config.conf
root      4108     1  0 19:27 ?        00:00:55 mongos -f /etc/route.conf
root      4592     1  0 20:54 ?        00:00:07 mongod -f /etc/sd1.conf
root      4646     1  3 20:56 ?        00:00:30 mongod -f /etc/sd2.conf
root      4763     1  4 21:04 ?        00:00:12 mongod -f /etc/sd3.conf
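
As with the config servers and the routers, the three shard ports can be checked on every node; each should show a listening mongod:

[root@orcl ~]# netstat -anltp | grep -E '27018|27019|27020'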

7. Configure the replica sets

192.168.1.30

[root@orcl ~]# mongo --port 27018
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27018/test
> use admin
switched to db admin
> rs1={_id: "set1", members: [{_id: 0, host: "192.168.1.30:27018", priority: 2}, {_id: 1, host: "192.168.1.52:27018"}, {_id: 2, host: "192.168.1.108:27018", arbiterOnly: true}]}

{
        "_id" : "set1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.1.30:27018",
                        "priority" : 2
                },
                {
                        "_id" : 1,
                        "host" : "192.168.1.52:27018"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.1.108:27018",
                        "arbiterOnly" : true
                }
        ]
}

> rs.initiate(rs1)
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}
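
It can take a short while for the members to settle into their roles. Running rs.status() in the same shell should eventually show 192.168.1.30:27018 as PRIMARY, 192.168.1.52:27018 as SECONDARY and 192.168.1.108:27018 as ARBITER, and rs.conf() prints the saved configuration; the same check applies to set2 and set3 below:

> rs.status()
> rs.conf()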

192.168.1.52

[root@orcl ~]# mongo --port 27019
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27019/test
> use admin
switched to db admin
> rs2={_id: "set2", members: [{_id: 0, host: "192.168.1.52:27019", priority: 2}, {_id: 1, host: "192.168.1.108:27019"}, {_id: 2, host: "192.168.1.30:27019", arbiterOnly: true}]}

{
        "_id" : "set2",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.1.52:27019",
                        "priority" : 2
                },
                {
                        "_id" : 1,
                        "host" : "192.168.1.108:27019"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.1.30:27019",
                        "arbiterOnly" : true
                }
        ]
}

> rs.initiate(rs2)
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}

192.168.1.108

[root@localhost sd3]# mongo --port 27020
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27020/test
> use admin
switched to db admin
> rs3={_id: "set3", members: [{_id: 0, host: "192.168.1.108:27020", priority: 2}, {_id: 1, host: "192.168.1.30:27020"}, {_id: 2, host: "192.168.1.52:27020", arbiterOnly: true}]}

{
        "_id" : "set3",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.1.108:27020",
                        "priority" : 2
                },
                {
                        "_id" : 1,
                        "host" : "192.168.1.30:27020"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.1.52:27020",
                        "arbiterOnly" : true
                }
        ]
}

> rs.initiate(rs3)
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}

8. Add the shards

This can be done from any one of the three nodes.

192.168.1.30

[root@orcl sd3]# mongo --port 27017
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27017/test
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard: "set1/192.168.1.30:27018,192.168.1.52:27018,192.168.1.108:27018"})
{ "shardAdded" : "set1", "ok" : 1 }
mongos> db.runCommand({addshard: "set2/192.168.1.30:27019,192.168.1.52:27019,192.168.1.108:27019"})
{ "shardAdded" : "set2", "ok" : 1 }
mongos> db.runCommand({addshard: "set3/192.168.1.30:27020,192.168.1.52:27020,192.168.1.108:27020"})
{ "shardAdded" : "set3", "ok" : 1 }

9. View sharding information

mongos> db.runCommand({listshards: 1})
{
        "shards" : [
                {
                        "_id" : "set1",
                        "host" : "set1/192.168.1.30:27018,192.168.1.52:27018"
                },
                {
                        "_id" : "set2",
                        "host" : "set2/192.168.1.108:27019,192.168.1.52:27019"
                },
                {
                        "_id" : "set3",
                        "host" : "set3/192.168.1.108:27020,192.168.1.30:27020"
                }
        ],
        "ok" : 1
}

10. Remove a shard

mongos> db.runCommand({removeshard: "set3"})
{
        "msg" : "draining started successfully",
        "state" : "started",
        "shard" : "set3",
        "ok" : 1
}
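
removeshard only starts the draining process; the balancer still has to migrate the chunks off set3, so the shard keeps appearing in listings until that finishes. Re-running the same command reports progress and eventually a "completed" state, and the shard can be added back later with the addshard command from step 8:

mongos> db.runCommand({removeshard: "set3"})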

11. Manage sharding

mongos> use config
switched to db config
mongos> db.shards.find()
{ "_id" : "set1", "host" : "set1/192.168.1.30:27018,192.168.1.52:27018" }
{ "_id" : "set2", "host" : "set2/192.168.1.108:27019,192.168.1.52:27019" }
{ "_id" : "set3", "host" : "set3/192.168.1.108:27020,192.168.1.30:27020" }

12. Declare the database and collection to be sharded

Switch to the admin database:

mongos> use admin

Enable sharding on the test database:

mongos> db.runCommand({enablesharding: "test"})
{ "ok" : 1 }

Shard the lineqi collection on a hashed id key:

mongos> db.runCommand({shardcollection: "test.lineqi", key: {id: "hashed"}})
{ "collectionsharded" : "test.lineqi", "ok" : 1 }

13. Test script

Switch to the test database and load 100,000 test documents:

mongos> use test
mongos> for (var i = 1; i <= 100000; i++) db.lineqi.insert({id: i})

14. View chunk information

mongos> use config
switched to db config
mongos> db.chunks.find()

{"_ id": "test.users-id_MinKey", "lastmod": Timestamp (2,0), "lastmodEpoch": ObjectId ("55ddb3a70f613da70e8ce303"), "ns": "test.users", "min": {"id": {"$minKey": 1}}, "max": {"id": 1}, "shard": "set1"}

{"_ id": "test.users-id_1.0", "lastmod": Timestamp (3,1), "lastmodEpoch": ObjectId ("55ddb3a70f613da70e8ce303"), "ns": "test.users", "min": {"id": 1}, "max": {"id": 4752}, "shard": "set2"}

{"_ id": "test.users-id_4752.0", "lastmod": Timestamp (3,0), "lastmodEpoch": ObjectId ("55ddb3a70f613da70e8ce303"), "ns": "test.users", "min": {"id": 4752}, "max": {"id": {"$maxKey": 1}}, "shard": "set3"}

{"_ id": "test.lineqi-id_MinKey", "lastmod": Timestamp (3,2), "lastmodEpoch": ObjectId ("55ddb7460f613da70e8ce380"), "ns": "test.lineqi", "min": {"id": {"$minKey": 1}}, "max": {"id": NumberLong ("- 6148914691236517204")}, "shard": "set2"}

{"_ id": "test.lineqi-id_-3074457345618258602", "lastmod": Timestamp (3,4), "lastmodEpoch": ObjectId ("55ddb7460f613da70e8ce380"), "ns": "test.lineqi", "min": {"id": NumberLong ("- 3074457345618258602")}, "max": {"id": NumberLong (0)}, "shard": "set3"}

{"_ id": "test.lineqi-id_3074457345618258602", "lastmod": Timestamp (3,6), "lastmodEpoch": ObjectId ("55ddb7460f613da70e8ce380"), "ns": "test.lineqi", "min": {"id": NumberLong ("3074457345618258602")}, "max": {"id": NumberLong ("6148914691236517204")}, "shard": "set1"}

{"_ id": "test.lineqi-id_-6148914691236517204", "lastmod": Timestamp (3,3), "lastmodEpoch": ObjectId ("55ddb7460f613da70e8ce380"), "ns": "test.lineqi", "min": {"id": NumberLong ("- 6148914691236517204")}, "max": {"id": NumberLong ("- 307445745618258602")}, "shard": "set2"}

{"_ id": "test.lineqi-id_0", "lastmod": Timestamp (3,5), "lastmodEpoch": ObjectId ("55ddb7460f613da70e8ce380"), "ns": "test.lineqi", "min": {"id": NumberLong (0)}, "max": {"id": NumberLong ("3074457345618258602")}, "shard": "set3"}

{"_ id": "test.lineqi-id_6148914691236517204", "lastmod": Timestamp (3,7), "lastmodEpoch": ObjectId ("55ddb7460f613da70e8ce380"), "ns": "test.lineqi", "min": {"id": NumberLong ("6148914691236517204")}, "max": {"id": {"$maxKey": 1}, "shard": "set1"}

View storage information for the lineqi collection

mongos> use test
mongos> db.lineqi.stats()

{
        "sharded" : true,
        "systemFlags" : 1,
        "userFlags" : 1,
        "ns" : "test.lineqi",
        "count" : 100000,
        "numExtents" : 18,
        "size" : 11200000,
        "storageSize" : 33546240,
        "totalIndexSize" : 8086064,
        "indexSizes" : {
                "_id_" : 3262224,
                "id_hashed" : 4823840
        },
        "avgObjSize" : 112,
        "nindexes" : 2,
        "nchunks" : 6,
        "shards" : {
                "set1" : {
                        "ns" : "test.lineqi",
                        "count" : 33102,
                        "size" : 3707424,
                        "avgObjSize" : 112,
                        "storageSize" : 11182080,
                        "numExtents" : 6,
                        "nindexes" : 2,
                        "lastExtentSize" : 8388608,
                        "paddingFactor" : 1,
                        "systemFlags" : 1,
                        "userFlags" : 1,
                        "totalIndexSize" : 2649024,
                        "indexSizes" : {
                                "_id_" : 1079232,
                                "id_hashed" : 1569792
                        },
                        "ok" : 1
                },
                "set2" : {
                        "ns" : "test.lineqi",
                        "count" : 33755,
                        "size" : 3780560,
                        "avgObjSize" : 112,
                        "storageSize" : 11182080,
                        "numExtents" : 6,
                        "nindexes" : 2,
                        "lastExtentSize" : 8388608,
                        "paddingFactor" : 1,
                        "systemFlags" : 1,
                        "userFlags" : 1,
                        "totalIndexSize" : 2755312,
                        "indexSizes" : {
                                "_id_" : 1103760,
                                "id_hashed" : 1651552
                        },
                        "ok" : 1
                },
                "set3" : {
                        "ns" : "test.lineqi",
                        "count" : 33143,
                        "size" : 3712016,
                        "avgObjSize" : 112,
                        "storageSize" : 11182080,
                        "numExtents" : 6,
                        "nindexes" : 2,
                        "lastExtentSize" : 8388608,
                        "paddingFactor" : 1,
                        "systemFlags" : 1,
                        "userFlags" : 1,
                        "totalIndexSize" : 2681728,
                        "indexSizes" : {
                                "_id_" : 1079232,
                                "id_hashed" : 1602496
                        },
                        "ok" : 1
                }
        },
        "ok" : 1
}
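
As a final check that the hashed shard key behaves as expected, explain() through mongos reports which shards served a query: an exact match on id should be routed to a single shard, while a range query on id has to be scattered to all three. A minimal sketch:

mongos> db.lineqi.find({id: 100}).explain()
mongos> db.lineqi.find({id: {$gt: 100, $lt: 200}}).explain()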

15. Reference documents

http://blog.sina.com.cn/s/blog_8d3bcbdb01015vne.html

http://jingyan.baidu.com/article/495ba841f1ee2b38b30ede01.html

http://server.ctocio.com.cn/150/13087650.shtml
