
MongoDB Sharding


1. Environment

Operating system information:

IP              Operating system    MongoDB
10.163.97.15    RHEL 6.5 x64        mongodb-linux-x86_64-rhel62-3.4.7.tgz
10.163.97.16    RHEL 6.5 x64        mongodb-linux-x86_64-rhel62-3.4.7.tgz
10.163.97.17    RHEL 6.5 x64        mongodb-linux-x86_64-rhel62-3.4.7.tgz

Server Planning:

Port     10.163.97.15        10.163.97.16        10.163.97.17
20000    mongos              mongos              mongos
21000    config server       config server       config server
27001    shard1 primary      shard1 secondary    shard1 arbiter
27002    shard2 arbiter      shard2 primary      shard2 secondary
27003    shard3 secondary    shard3 arbiter      shard3 primary

You can see from the above table that there are four components: mongos, config server, shard, and replica set.

Mongos: the entry point for requests to the database cluster. All requests are coordinated through mongos, so the application does not need any routing logic of its own; mongos is a request dispatcher that forwards each data request to the appropriate shard server. A production environment usually runs several mongos instances as entry points, so that the failure of any single one does not bring down all MongoDB requests.
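As a hedged illustration (the mongos ports match the plan above; "testdb" is just the example database used later in this article), an application's driver can list every mongos in its connection string so that requests can fail over when one mongos goes down:

mongodb://10.163.97.15:20000,10.163.97.16:20000,10.163.97.17:20000/testdb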

Config server: as the name implies, the configuration server stores all of the cluster's metadata (routing and sharding configuration). Mongos itself does not persist the shard and routing information; it only caches it in memory, while the config server actually stores it. Mongos loads the configuration from the config server at first startup and after every restart, and whenever the configuration changes, the config server notifies all mongos instances to update their state so that routing stays accurate. A production environment runs multiple config servers (either one or three, and in 3.4 they must form a replica set), because losing this sharded-routing metadata would mean losing track of the data!
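As a hedged sketch of where that metadata lives, once the cluster is up you can connect to any mongos and inspect the config database read-only (config.shards, config.chunks and config.databases are the standard metadata collections):

[root@D2-POMS15 ~]# mongo --port 20000
mongos> use config
mongos> db.shards.find()            // one document per shard replica set
mongos> db.chunks.find().limit(5)   // chunk ranges and the shard that owns each
mongos> db.databases.find()         // which databases have sharding enabled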

Shard: sharding is the process of splitting a database and distributing it across different machines. By spreading the data over several machines, you no longer need one powerful server to store all the data and handle the whole load. The basic idea is to cut a collection into small chunks and scatter those chunks across several shards, so that each shard is responsible for only part of the total data; a balancer then keeps the shards even by migrating chunks between them.
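A hedged illustration of interacting with that balancer from any mongos (these sh.* helpers are standard in the mongo shell; pausing the balancer is normally something you would only do during a maintenance window):

mongos> sh.getBalancerState()    // true when the balancer is enabled
mongos> sh.isBalancerRunning()   // true while a migration round is in progress
mongos> sh.stopBalancer()        // pause chunk migrations
mongos> sh.startBalancer()       // resume chunk migrations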

Replica set: literally a "copy set", it is in effect a backup of a shard so that data is not lost when a shard node goes down. Replication keeps redundant copies of the data on multiple servers, which improves availability and protects the data.

Arbiter: a MongoDB instance in a replica set that holds no data. The arbiter uses minimal resources and should not be deployed on the same node as a data-bearing member of its set; it can run on an application server, a monitoring server, or a separate virtual machine. To keep an odd number of voting members (including the primary) in the replica set, add an arbiter as a voter; otherwise, when the primary fails, a new primary will not be elected automatically.
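In this article the arbiters are declared with arbiterOnly: true when each replica set is initialized (section 4); as a hedged alternative, an arbiter can also be added afterwards from the set's primary with the standard rs.addArb() helper (the host shown is shard1's planned arbiter):

shard1:PRIMARY> rs.addArb("10.163.97.17:27001")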

After this brief overview we can sum up: the application sends its inserts, deletes, updates and queries to mongos; the config servers store the cluster metadata and keep it synchronized with mongos; and the data itself ends up on the shards. To guard against data loss, each shard keeps a copy of its data in a replica set, and the arbiter takes part in elections to decide which node of the shard serves as primary.

The plan calls for 3 mongos and 3 config servers, with the data split across 3 shards. Each shard is a replica set consisting of a primary, a secondary and an arbiter, i.e. 3 x 3 = 9 shard instances, so 3 + 3 + 9 = 15 instances need to be deployed in all. These could be spread over separate machines or packed onto a single one; test resources are limited here, so only 3 machines are used, and instances sharing a machine simply listen on different ports.

2. Install mongodb

Install mongodb on three machines

[root@D2-POMS15 ~]# tar -xvzf mongodb-linux-x86_64-rhel62-3.4.7.tgz -C /usr/local/
[root@D2-POMS15 ~]# mv /usr/local/mongodb-linux-x86_64-rhel62-3.4.7/ /usr/local/mongodb

Configure environment variables

[root@D2-POMS15 ~]# vim .bash_profile
export PATH=$PATH:/usr/local/mongodb/bin/
[root@D2-POMS15 ~]# source .bash_profile

On each machine create the directories conf, mongos, config, shard1, shard2 and shard3; since mongos stores no data, it only needs a log directory.

[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/conf
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/mongos/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/config/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/config/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard1/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard1/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard2/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard2/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard3/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard3/log

3. Config server (configuration server)

Starting with MongoDB 3.4, the config servers must also be deployed as a replica set, otherwise the cluster cannot be built successfully.

Add a configuration file on each of the three servers:

[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/config.conf

#content of configuration file
pidfilepath = /usr/local/mongodb/config/log/configsrv.pid
dbpath = /usr/local/mongodb/config/data
logpath = /usr/local/mongodb/config/log/configsrv.log
logappend = true

bind_ip = 0.0.0.0
port = 21000
fork = true

#declare this is a config db of a cluster
configsvr = true

#replica set name
replSet = configs

#set the maximum number of connections
maxConns = 20000

Start the config server on each of the three servers:

[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15368
child process started successfully, parent exiting

Log in to any one of the config servers and initialize the config replica set:

[root@D2-POMS15 ~]# mongo --port 21000
> config = {
...    _id: "configs",
...    members: [
...        {_id: 0, host: "10.163.97.15:21000"},
...        {_id: 1, host: "10.163.97.16:21000"},
...        {_id: 2, host: "10.163.97.17:21000"}
...    ]
... }
{
        "_id" : "configs",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:21000"
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:21000"
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:21000"
                }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }

Where "_ id": "configs" should be consistent with the replSet configured in the configuration file, and "host" in "members" is the ip and port of the three nodes.

4. Configure the shard replica sets (on all three machines)

Set up the first shard replica set

Add a configuration file:

[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard1.conf

#content of configuration file
pidfilepath = /usr/local/mongodb/shard1/log/shard1.pid
dbpath = /usr/local/mongodb/shard1/data
logpath = /usr/local/mongodb/shard1/log/shard1.log
logappend = true

bind_ip = 0.0.0.0
port = 27001
fork = true

#enable web monitoring
httpinterface = true
rest = true

#replica set name
replSet = shard1

#declare this is a shard db of a cluster
shardsvr = true

#set the maximum number of connections
maxConns = 20000

Start the shard1 server on each of the three servers:

[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15497
child process started successfully, parent exiting

Log in to one of the servers (not the arbiter node) and initialize the replica set:

[root@D2-POMS15 ~]# mongo --port 27001
#use the admin database
> use admin
switched to db admin
#define the replica set configuration; "arbiterOnly": true marks the third node as the arbiter
> config = {
...    _id: "shard1",
...    members: [
...        {_id: 0, host: "10.163.97.15:27001"},
...        {_id: 1, host: "10.163.97.16:27001"},
...        {_id: 2, host: "10.163.97.17:27001", arbiterOnly: true}
...    ]
... }
{
        "_id" : "shard1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:27001"
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:27001"
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:27001",
                        "arbiterOnly" : true
                }
        ]
}
#initialize the replica set configuration
> rs.initiate(config)
{ "ok" : 1 }

Set up the second shard replica set

Add a configuration file:

[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard2.conf

#content of configuration file
pidfilepath = /usr/local/mongodb/shard2/log/shard2.pid
dbpath = /usr/local/mongodb/shard2/data
logpath = /usr/local/mongodb/shard2/log/shard2.log
logappend = true

bind_ip = 0.0.0.0
port = 27002
fork = true

#enable web monitoring
httpinterface = true
rest = true

#replica set name
replSet = shard2

#declare this is a shard db of a cluster
shardsvr = true

#set the maximum number of connections
maxConns = 20000

Start the shard2 server on each of the three servers:

[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15622
child process started successfully, parent exiting

Log in to one of the servers (not the arbiter node) and initialize the replica set:

[root@D2-POMS15 ~]# mongo --port 27002
> use admin
switched to db admin
> config = {
...    _id: "shard2",
...    members: [
...        {_id: 0, host: "10.163.97.15:27002", arbiterOnly: true},
...        {_id: 1, host: "10.163.97.16:27002"},
...        {_id: 2, host: "10.163.97.17:27002"}
...    ]
... }
{
        "_id" : "shard2",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:27002",
                        "arbiterOnly" : true
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:27002"
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:27002"
                }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }

Set up the third shard replica set

Add a configuration file:

[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard3.conf

#content of configuration file
pidfilepath = /usr/local/mongodb/shard3/log/shard3.pid
dbpath = /usr/local/mongodb/shard3/data
logpath = /usr/local/mongodb/shard3/log/shard3.log
logappend = true

bind_ip = 0.0.0.0
port = 27003
fork = true

#enable web monitoring
httpinterface = true
rest = true

#replica set name
replSet = shard3

#declare this is a shard db of a cluster
shardsvr = true

#set the maximum number of connections
maxConns = 20000

Start the shard3 server on each of the three servers:

[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15742
child process started successfully, parent exiting

Log in to one of the servers (not the arbiter node) and initialize the replica set:

[root@D2-POMS15 ~]# mongo --port 27003
> use admin
switched to db admin
> config = {
...    _id: "shard3",
...    members: [
...        {_id: 0, host: "10.163.97.15:27003"},
...        {_id: 1, host: "10.163.97.16:27003", arbiterOnly: true},
...        {_id: 2, host: "10.163.97.17:27003"}
...    ]
... }
{
        "_id" : "shard3",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:27003"
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:27003",
                        "arbiterOnly" : true
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:27003"
                }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }

At this point the config servers and shard servers have all been started:

[root@D2-POMS15 ~]# ps -ef | grep mongo | grep -v grep
root     15368     1  0 15:52 ?        00:00:07 mongod -f /usr/local/mongodb/conf/config.conf
root     15497     1  0 16:00 ?        00:00:04 mongod -f /usr/local/mongodb/conf/shard1.conf
root     15622     1  0 16:06 ?        00:00:02 mongod -f /usr/local/mongodb/conf/shard2.conf
root     15742     1  0 16:21 ?        00:00:00 mongod -f /usr/local/mongodb/conf/shard3.conf

5. Configure the routing server mongos

Add a configuration file on each of the three servers:

[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/mongos.conf

#content of configuration file
pidfilepath = /usr/local/mongodb/mongos/log/mongos.pid
logpath = /usr/local/mongodb/mongos/log/mongos.log
logappend = true

bind_ip = 0.0.0.0
port = 20000
fork = true

#the config servers to watch; only 1 or 3 are allowed, and "configs" is the replica set name of the config servers
configdb = configs/10.163.97.15:21000,10.163.97.16:21000,10.163.97.17:21000

#set the maximum number of connections
maxConns = 20000

Start the mongos server on each of the three servers:

[root@D2-POMS15 ~]# mongos -f /usr/local/mongodb/conf/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 20563
child process started successfully, parent exiting

[root@D2-POMS15 ~]# mongo --port 20000
mongos> db.stats()
{
    "raw" : {
        "shard1/10.163.97.15:27001,10.163.97.16:27001" : {
            "db" : "admin",
            "collections" : 1,
            "views" : 0,
            "objects" : 3,
            "avgObjSize" : 146.66666666666666,
            "dataSize" : 440,
            "storageSize" : 36864,
            "numExtents" : 0,
            "indexes" : 2,
            "indexSize" : 65536,
            "ok" : 1,
            "$gleStats" : {
                "lastOpTime" : Timestamp(0, 0),
                "electionId" : ObjectId("7fffffff0000000000000001")
            }
        },
        "shard2/10.163.97.16:27002,10.163.97.17:27002" : {
            "db" : "admin",
            "collections" : 1,
            "views" : 0,
            "objects" : 2,
            "avgObjSize" : 114,
            "dataSize" : 228,
            "storageSize" : 16384,
            "numExtents" : 0,
            "indexes" : 2,
            "indexSize" : 32768,
            "ok" : 1,
            "$gleStats" : {
                "lastOpTime" : Timestamp(0, 0),
                "electionId" : ObjectId("7fffffff0000000000000001")
            }
        },
        "shard3/10.163.97.15:27003,10.163.97.17:27003" : {
            "db" : "admin",
            "collections" : 1,
            "views" : 0,
            "objects" : 2,
            "avgObjSize" : 114,
            "dataSize" : 228,
            "storageSize" : 16384,
            "numExtents" : 0,
            "indexes" : 2,
            "indexSize" : 32768,
            "ok" : 1,
            "$gleStats" : {
                "lastOpTime" : Timestamp(0, 0),
                "electionId" : ObjectId("7fffffff0000000000000002")
            }
        }
    },
    "objects" : 7,
    "avgObjSize" : 127.71428571428571,
    "dataSize" : 896,
    "storageSize" : 69632,
    "numExtents" : 0,
    "indexes" : 6,
    "indexSize" : 131072,
    "fileSize" : 0,
    "extentFreeList" : {
        "num" : 0,
        "totalSize" : 0
    },
    "ok" : 1
}

6. Enable sharding

So far the MongoDB config servers, routing servers and shard servers have all been built, but an application that connects to the mongos router still cannot use sharding; the sharding configuration has to be set on the router before sharding takes effect.

Log in to any mongos

[root@D2-POMS15 ~]# mongo --port 20000
#use the admin database
mongos> use admin
switched to db admin
#link the routing server to the shard replica sets
mongos> sh.addShard("shard1/10.163.97.15:27001,10.163.97.16:27001,10.163.97.17:27001")
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> sh.addShard("shard2/10.163.97.15:27002,10.163.97.16:27002,10.163.97.17:27002")
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> sh.addShard("shard3/10.163.97.15:27003,10.163.97.16:27003,10.163.97.17:27003")
{ "shardAdded" : "shard3", "ok" : 1 }

#view the cluster status
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:

7. Testing

By now the config service, routing service, shard services and replica sets are all wired together, but the goal is to insert data and have it sharded automatically. Connect to mongos and enable sharding for the chosen database and collection.

[root@D2-POMS15 ~]# mongo --port 20000
mongos> use admin
switched to db admin
#enable sharding for the testdb database
mongos> db.runCommand({enablesharding: "testdb"})
{ "ok" : 1 }
#specify the collection to shard and its shard key
mongos> db.runCommand({shardcollection: "testdb.table1", key: {id: 1}})
{ "collectionsharded" : "testdb.table1", "ok" : 1 }

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                4 : Success
  databases:
        {  "_id" : "testdb",  "primary" : "shard1",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)

The above enables sharding for the table1 collection of testdb: based on the id key, its data is spread automatically across shard1, shard2 and shard3. This is configured explicitly because not every MongoDB database and collection needs to be sharded!
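As a hedged side note, a ranged shard key on a monotonically increasing id tends to direct all new inserts at the chunk holding $maxKey; MongoDB 3.4 also supports hashed shard keys, which spread inserts more evenly. A sketch only ("table2" is a hypothetical collection, not part of this setup):

mongos> sh.shardCollection("testdb.table2", {id: "hashed"})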

Test shards:

#connect to the mongos server
[root@D2-POMS15 ~]# mongo --port 20000
#use testdb
mongos> use testdb
switched to db testdb
#insert test data
mongos> for (var i = 1; i <= 100000; i++) db.table1.save({id: i, "test1": "testval1"})
mongos> sh.status()

--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                6 : Success
  databases:
        {  "_id" : "testdb",  "primary" : "shard1",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                                shard2  1
                                shard3  1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard2 Timestamp(2, 0)
                        { "id" : 2 } -->> { "id" : 20 } on : shard3 Timestamp(3, 0)
                        { "id" : 20 } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(3, 1)

You can see that the sharding here is very uneven, because the default chunkSize is 64 MB and the data inserted falls well short of 64 MB. You can lower the chunkSize to make testing easier:

[root@D2-POMS15 ~]# mongo --port 20000
mongos> use config
switched to db config
mongos> db.settings.save({_id: "chunksize", value: 1})
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })
mongos> db.settings.find()
{ "_id" : "balancer", "stopped" : false, "mode" : "full" }
{ "_id" : "chunksize", "value" : 1 }

Test again after modification:

mongos> use testdb
switched to db testdb
mongos> db.table1.drop()
true
mongos> use admin
switched to db admin
mongos> db.runCommand({shardcollection: "testdb.table1", key: {id: 1}})
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> use testdb
switched to db testdb
mongos> for (var i = 1; i <= 100000; i++) db.table1.save({id: i, "test1": "testval1"})
mongos> sh.status()

--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                14 : Success
  databases:
        {  "_id" : "testdb",  "primary" : "shard1",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  4
                                shard2  4
                                shard3  3
                        { "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard2 Timestamp(5, 1)
                        { "id" : 2 } -->> { "id" : 20 } on : shard3 Timestamp(6, 1)
                        { "id" : 20 } -->> { "id" : 9729 } on : shard1 Timestamp(7, 1)
                        { "id" : 9729 } -->> { "id" : 21643 } on : shard1 Timestamp(3, 3)
                        { "id" : 21643 } -->> { "id" : 31352 } on : shard2 Timestamp(4, 2)
                        { "id" : 31352 } -->> { "id" : 43021 } on : shard2 Timestamp(4, 3)
                        { "id" : 43021 } -->> { "id" : 52730 } on : shard3 Timestamp(5, 2)
                        { "id" : 52730 } -->> { "id" : 64695 } on : shard3 Timestamp(5, 3)
                        { "id" : 64695 } -->> { "id" : 74404 } on : shard1 Timestamp(6, 2)
                        { "id" : 74404 } -->> { "id" : 87088 } on : shard1 Timestamp(6, 3)
                        { "id" : 87088 } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(7, 0)

mongos> db.table1.stats()
{
    "sharded" : true,
    "capped" : false,
    "ns" : "testdb.table1",
    "count" : 100000,
    "size" : 5400000,
    "storageSize" : 1736704,
    "totalIndexSize" : 2191360,
    "indexSizes" : {
        "_id_" : 946176,
        "id_1" : 1245184
    },
    "avgObjSize" : 54,
    "nindexes" : 2,
    "nchunks" : 11,
    "shards" : {
        "shard1" : {
            "ns" : "testdb.table1",
            "size" : 2376864,
            "count" : 44016,
            "avgObjSize" : 54,
            "storageSize" : 753664,
            "capped" : false,
            "nindexes" : 2,
            "totalIndexSize" : 933888,
            "indexSizes" : {
                "_id_" : 405504,
                "id_1" : 528384
            },
            "ok" : 1
        },
        "shard2" : {
            "ns" : "testdb.table1",
            "size" : 1851768,
            "count" : 34292,
            "avgObjSize" : 54,
            "storageSize" : 606208,
            "capped" : false,
            "nindexes" : 2,
            "totalIndexSize" : 774144,
            "indexSizes" : {
                "_id_" : 335872,
                "id_1" : 438272
            },
            "ok" : 1
        },
        "shard3" : {
            "ns" : "testdb.table1",
            "size" : 1171368,
            "count" : 21692,
            "avgObjSize" : 54,
            "storageSize" : 376832,
            "capped" : false,
            "nindexes" : 2,
            "totalIndexSize" : 483328,
            "indexSizes" : {
                "_id_" : 204800,
                "id_1" : 278528
            },
            "ok" : 1
        }
    },
    "ok" : 1
}

You can see that the data is now much more evenly distributed.
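A hedged shortcut for the same check is the shell helper getShardDistribution(), which prints per-shard document counts and data sizes for a sharded collection:

mongos> db.table1.getShardDistribution()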

8. Routine operation and maintenance

Startup and shutdown

The MongoDB startup order is: start the config servers first, then the shards, and finally mongos.

mongod -f /usr/local/mongodb/conf/config.conf
mongod -f /usr/local/mongodb/conf/shard1.conf
mongod -f /usr/local/mongodb/conf/shard2.conf
mongod -f /usr/local/mongodb/conf/shard3.conf
mongos -f /usr/local/mongodb/conf/mongos.conf

To shut down, killall can simply kill all the processes:

killall mongod
killall mongos
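As a hedged, gentler alternative to killall, each instance can also be stopped cleanly from the mongo shell with the standard db.shutdownServer() admin command (connect to each port in turn, mongos included; on a primary, MongoDB may wait briefly for a secondary to catch up):

[root@D2-POMS15 ~]# mongo --port 27001
> use admin
switched to db admin
> db.shutdownServer()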

Reference:

https://docs.mongodb.com/manual/sharding/
http://www.lanceyan.com/tech/arch/mongodb_shard1.html
http://www.cnblogs.com/ityouknow/p/7344005.html
