
The principle of MongoDB sharding and how to build a sharded replica cluster


MongoDB sharding principle

Sharding means splitting the data and distributing it across different machines. Sharding is similar to RAID 0, while replication is similar to RAID 1.

A MongoDB replica set also differs from the common master-slave setup, where all service stops once the master goes down; in a replica set, a surviving member is elected as the new primary.

A sharded cluster is made up of three main components: mongos, config servers, and shards.

1) mongos (the routing process: the application connects to mongos, which then queries the specific shards)

mongos is the entry point for all cluster requests: every request is coordinated through it, so no routing logic needs to be added to the application. mongos itself is a request dispatch center that forwards each data request to the corresponding shard server. In a production environment there are usually multiple mongos instances acting as request entry points, so that the failure of any one of them does not take down all MongoDB requests.
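The repo's config files are not reproduced in this article, but the essentials of a mongos startup can be sketched from the values that appear below (config replica set config-rs on worker2:27018/27019, mongos listening on port 40000). This is only a sketch, not the repo's actual mongdb.conf:

# sketch only: a mongos has no dbPath of its own, it just points at the config server replica set
mongos --configdb config-rs/worker2:27018,worker2:27019 --port 40000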

2) config server (the routing-table service; each one holds the routing information for all chunks)

As the name implies, these are configuration servers that store all of the cluster's metadata (routing and sharding information). mongos does not physically store the shard-server and data-routing information itself; it only caches it in memory, while the config servers actually persist it. The first time mongos starts, or after a restart, it loads the configuration from the config servers, and whenever the configuration changes, the config servers notify all mongos instances to update their state so that they can keep routing accurately. A production environment usually runs multiple config servers, because they hold the sharding routing metadata, which must not be lost. Even if one of them goes down, as long as at least one is still alive, the MongoDB cluster will not fail.
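For orientation, a config server is simply a mongod started with --configsvr and a replica set name. A minimal sketch using the names and ports that appear later in this article (config-rs, port 27018, the configserver/dbPath directory); the authoritative settings live in the repo's config file:

# sketch only: first config server member; the second would use port 27019 and its own dbPath
mongod --configsvr --replSet config-rs --port 27018 --dbpath configserver/dbPath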

3) shard (the sharding nodes that actually store the data; each shard can be a replica set)

These are the shards themselves. Suppose a collection, Collection1, stores 1 TB of data on one machine: that is a lot of pressure for a single host. Split across four machines, each machine holds only 256 GB, which spreads out the load. However, four shards without replica sets is an incomplete architecture: if one shard dies, a quarter of the data is lost. So in a highly available sharded architecture, each shard also needs its own replica set to guarantee its reliability. In production this is usually two data-bearing members plus one arbiter.
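A shard member is likewise an ordinary mongod started with --shardsvr and the shard's replica set name. A minimal sketch using the shard-rs name and ports used later in this article (the dbpath directory is the one the article deletes before startup); again, the real settings are in the repo's config files:

# sketch only: one member of the shard-rs replica set; repeat on ports 27021 and 27022
mongod --shardsvr --replSet shard-rs --port 27020 --dbpath shared/dbpath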

Enough theory; let's get to the build.

1. Pull the configuration files from GitHub

git clone git@github.com:herrywen-nanj/mongodb.git

2. The startup sequence is config server --> mongos --> shard.

3. Delete the contents under dbPath

rm -rf configserver/dbPath

4. Start each mongod process with its corresponding configuration file, then enter the mongo shell to configure the replica set

Config server 1 startup:

mongod -f mongdb.conf

Config server 2 startup:

mongod -f mongdb.conf

Configure the replica set:

# enter the config server
mongo --port 27018
# initialize the replica set
> rs.initiate()
# add the replica node
> rs.add("worker2:27019")
# view the replica set status
> rs.status()

Returned content:

MongoDB Enterprise config-rs:PRIMARY > rs.status()
{
    "set" : "config-rs",    # the replica set has been configured successfully
    "date" : ISODate("2019-11-23T04:56:35.588Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "configsvr" : true,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "majorityVoteCount" : 2,
    "writeMajorityCount" : 2,
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1574484982, 1), "t" : NumberLong(1) },
        "lastCommittedWallTime" : ISODate("2019-11-23T04:56:22.464Z"),
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1574484982, 1), "t" : NumberLong(1) },
        "readConcernMajorityWallTime" : ISODate("2019-11-23T04:56:22.464Z"),
        "appliedOpTime" : { "ts" : Timestamp(1574484982, 1), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1574484982, 1), "t" : NumberLong(1) },
        "lastAppliedWallTime" : ISODate("2019-11-23T04:56:22.464Z"),
        "lastDurableWallTime" : ISODate("2019-11-23T04:56:22.464Z")
    },
    "lastStableRecoveryTimestamp" : Timestamp(1574484952, 30),
    "lastStableCheckpointTimestamp" : Timestamp(1574484952, 30),
    "electionCandidateMetrics" : {
        "lastElectionReason" : "electionTimeout",
        "lastElectionDate" : ISODate("2019-11-23T04:55:51.134Z"),
        "termAtElection" : NumberLong(1),
        "lastCommittedOpTimeAtElection" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
        "lastSeenOpTimeAtElection" : { "ts" : Timestamp(1574484951, 1), "t" : NumberLong(-1) },
        "numVotesNeeded" : 1,
        "priorityAtElection" : 1,
        "electionTimeoutMillis" : NumberLong(10000),
        "newTermStartDate" : ISODate("2019-11-23T04:55:52.141Z"),
        "wMajorityWriteAvailabilityDate" : ISODate("2019-11-23T04:55:52.266Z")
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "worker2:27018",
            "ip" : "192.168.255.134",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 722,
            "optime" : { "ts" : Timestamp(1574484982, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2019-11-23T04:56:22Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1574484951, 2),
            "electionDate" : ISODate("2019-11-23T04:55:51Z"),
            "configVersion" : 2,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "worker2:27019",
            "ip" : "192.168.255.134",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 13,
            "optime" : { "ts" : Timestamp(1574484982, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1574484982, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2019-11-23T04:56:22Z"),
            "optimeDurableDate" : ISODate("2019-11-23T04:56:22Z"),
            "lastHeartbeat" : ISODate("2019-11-23T04:56:34.705Z"),
            "lastHeartbeatRecv" : ISODate("2019-11-23T04:56:35.176Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 2
        }
    ],
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : { "ts" : Timestamp(1574484982, 1), "t" : NumberLong(1) },
        "electionId" : ObjectId("7fffffff0000000000000001")
    },
    "lastCommittedOpTime" : Timestamp(1574484982, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1574484982, 1),
        "signature" : { "hash" : BinData(0, "AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    },
    "operationTime" : Timestamp(1574484982, 1)
}

5. Delete the dbPath directory and start the mongos routing process

cd luyou
mongos -f mongdb.conf
lsof -i:40000

6. Enter the shard node directory, start the processes, and complete the replica set setup.

cd shared && rm -rf dbpath && mongod -f mongdb.conf
lsof -i:27021 && lsof -i:27022
mongo --port 27020
> rs.initiate()
> rs.add("worker2:27021")
> rs.add("worker2:27022")
> rs.status()

7. On the routing node, add the shard replica set to the cluster, then add another shard

sh.addShard("shard-rs/worker2:27020,worker2:27021,worker2:27022")

Add another shard:

cd shard4
sh.addShard("worker2:27023")

Check the return status:

MongoDB Enterprise mongos > sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5dd8bbd8bbb4a8ac81b4b0b6")
  }
  shards:
        { "_id" : "shard-rs", "host" : "shard-rs/worker2:27020,worker2:27021,worker2:27022", "state" : 1 }
        { "_id" : "shard0001", "host" : "worker2:27023", "state" : 1 }
  active mongoses:
        "4.2.1" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }

8. Shard key operations

a. Enable sharding for the herrywen database

sh.enableSharding("herrywen")

Check the return status:

MongoDB Enterprise mongos > sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5dd8bbd8bbb4a8ac81b4b0b6")
  }
  shards:
        { "_id" : "shard-rs", "host" : "shard-rs/worker2:27020,worker2:27021,worker2:27022", "state" : 1 }
  active mongoses:
        "4.2.1" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
        { "_id" : "herrywen", "primary" : "shard-rs", "partitioned" : true, "version" : { "uuid" : UUID("56cf9d23-2f3a-4b53-8b5d-512f1f9e00c6"), "lastMod" : 1 } }

b. Enable sharding on a specific collection, using _id as the shard key

sh.shardCollection("herrywen.collections_1", {"_id": 1})

View the returned result

MongoDB Enterprise mongos > sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5dd8bbd8bbb4a8ac81b4b0b6")
  }
  shards:
        { "_id" : "shard-rs", "host" : "shard-rs/worker2:27020,worker2:27021,worker2:27022", "state" : 1 }
        { "_id" : "shard0001", "host" : "worker2:27023", "state" : 1 }
  active mongoses:
        "4.2.1" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Collections with active migrations:
                herrywen.collections_1 started at Sat Nov 23 2019 14:15:13 GMT+0800 (CST)
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                2 : Success
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard-rs  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard-rs Timestamp(1, 0)
        { "_id" : "herrywen", "primary" : "shard-rs", "partitioned" : true, "version" : { "uuid" : UUID("56cf9d23-2f3a-4b53-8b5d-512f1f9e00c6"), "lastMod" : 1 } }
                herrywen.collections_1
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard-rs   3
                                shard0001  4
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : 2 } on : shard0001 Timestamp(2, 0)
                        { "_id" : 2 } -->> { "_id" : 28340 } on : shard-rs Timestamp(3, 1)
                        { "_id" : 28340 } -->> { "_id" : 42509 } on : shard-rs Timestamp(2, 2)
                        { "_id" : 42509 } -->> { "_id" : 61031 } on : shard-rs Timestamp(2, 3)
                        { "_id" : 61031 } -->> { "_id" : 75200 } on : shard0001 Timestamp(3, 2)
                        { "_id" : 75200 } -->> { "_id" : 94169 } on : shard0001 Timestamp(3, 3)
                        { "_id" : 94169 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 Timestamp(3, 4)

9. Test

a. Change the chunk size to 1 MB (the default is 64 MB)

use config
db.settings.find()
db.settings.save({_id: "chunksize", value: 1})
MongoDB Enterprise mongos > db.settings.find()
{ "_id" : "chunksize", "value" : 1 }

b. View the amount of data in the current collection

MongoDB Enterprise mongos > use herrywen
switched to db herrywen
MongoDB Enterprise mongos > db.collections_1.count()
0

c. To see the sharding effect, write 10000 documents into the herrywen.collections_1 collection

mongo --port 40000
MongoDB Enterprise mongos > for (var i
MongoDB Enterprise mongos > use herrywen
switched to db herrywen
MongoDB Enterprise mongos > db.collections_1.count()
41561
MongoDB Enterprise mongos > db.collections_1.count()
42971
MongoDB Enterprise mongos > db.collections_1.count()
43516
MongoDB Enterprise mongos > db.collections_1.count()
43776
MongoDB Enterprise mongos > db.collections_1.count()
44055
MongoDB Enterprise mongos > db.collections_1.count()
44291
MongoDB Enterprise mongos > db.collections_1.count()
44541
MongoDB Enterprise mongos > db.collections_1.count()
44775
MongoDB Enterprise mongos > db.collections_1.count()
45012
MongoDB Enterprise mongos > db.collections_1.count()
45257
MongoDB Enterprise mongos > db.collections_1.count()
45470
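The insert loop itself is cut off in the transcript above. A minimal sketch of the kind of loop that produces these steadily climbing counts, assuming sequential integer _id values (which is what the chunk ranges above suggest) and an illustrative bound and field name:

// run in the mongos shell on port 40000; the loop bound and the name field are assumptions
for (var i = 1; i <= 100000; i++) {
    db.collections_1.insert({ _id: i, name: "test-" + i })
}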

d. Check and confirm that data has also been written to the other shard.

MongoDB Enterprise mongos > sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5dd8bbd8bbb4a8ac81b4b0b6")
  }
  shards:
        { "_id" : "shard-rs", "host" : "shard-rs/worker2:27020,worker2:27021,worker2:27022", "state" : 1 }
        { "_id" : "shard0001", "host" : "worker2:27023", "state" : 1 }
  active mongoses:
        "4.2.1" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Collections with active migrations:
                herrywen.collections_1 started at Sat Nov 23 2019 14:15:13 GMT+0800 (CST)
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                2 : Success
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard-rs  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard-rs Timestamp(1, 0)
        { "_id" : "herrywen", "primary" : "shard-rs", "partitioned" : true, "version" : { "uuid" : UUID("56cf9d23-2f3a-4b53-8b5d-512f1f9e00c6"), "lastMod" : 1 } }
                herrywen.collections_1
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard-rs   3
                                shard0001  4
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : 2 } on : shard0001 Timestamp(2, 0)
                        { "_id" : 2 } -->> { "_id" : 28340 } on : shard-rs Timestamp(3, 1)
                        { "_id" : 28340 } -->> { "_id" : 42509 } on : shard-rs Timestamp(2, 2)
                        { "_id" : 42509 } -->> { "_id" : 61031 } on : shard-rs Timestamp(2, 3)
                        { "_id" : 61031 } -->> { "_id" : 75200 } on : shard0001 Timestamp(3, 2)
                        { "_id" : 75200 } -->> { "_id" : 94169 } on : shard0001 Timestamp(3, 3)
                        { "_id" : 94169 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 Timestamp(3, 4)

10. The application code barely changes: connect to port 40000 just as you would connect to an ordinary standalone MongoDB.
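For example, a quick check from the shell against the mongos on port 40000 (the query value is purely illustrative):

mongo --port 40000
MongoDB Enterprise mongos > use herrywen
MongoDB Enterprise mongos > db.collections_1.find({ _id: 12345 })   // mongos routes this to whichever shard owns that chunk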
