

Mongodb sharding cluster deployment

2025-04-06 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Introduction to Mongodb sharding Cluster

Sharding is the process of splitting a database's data and distributing it across different machines. By spreading the data out, you no longer need one powerful server to store more data and handle a larger load. The basic idea is to cut a collection into small chunks and scatter them across several shards, each responsible for only part of the total data; a balancer then keeps the shards even by migrating chunks between them. A routing process called mongos knows which data lives on which shard (by consulting the config servers) and forwards requests accordingly. Most deployments use sharding to solve disk-space problems; writes may actually get worse, and queries should avoid crossing shards where possible. When to use sharding:

1. The machine does not have enough disk. Sharding solves the disk-space problem.

2. A single mongod can no longer keep up with the write load. Sharding spreads the write pressure across the shards, using each shard server's own resources.

3. You want to keep more data in memory to improve performance. As above, sharding lets each shard server contribute its own resources.
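The routing idea described above can be sketched as a toy key-to-shard mapping. This is a hypothetical illustration only — in a real cluster, chunk assignment lives in the config servers and mongos does the routing, not client code:

```python
import hashlib

# Toy sketch of hashed-sharding routing (hypothetical illustration;
# real chunk placement is managed by the config servers and mongos).
SHARDS = ["shard1", "shard2", "shard3"]

def route(shard_key: str) -> str:
    # Hash the shard key and map the digest onto one of the shards.
    digest = hashlib.md5(shard_key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

docs = ["user1", "user2", "user3", "user42"]
placement = {doc: route(doc) for doc in docs}
print(placement)  # each document lands on exactly one shard
```

Because the mapping is a pure function of the key, the same key always routes to the same shard, which is the property that lets mongos avoid broadcasting single-key queries.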

Server planning

Download Mongodb

https://www.mongodb.com/download-center/community

wget -c https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.0.7.tgz

Decompress the Mongodb package

tar -zxvf mongodb-linux-x86_64-rhel70-4.0.7.tgz -C /usr/local/mongodb

Add environment variables

export MONGO_HOME=/usr/local/mongodb
export PATH=$PATH:$MONGO_HOME/bin

Cluster preparation

Create a new directory required for Mongodb

Operate on the 192.168.2.177 server; the configuration steps below are all performed on 192.168.2.177.

mkdir -p /wdata/mongodb/{data,logs,config,keys}
mkdir -p /wdata/mongodb/data/{mongosrv,shard1}

Send the newly created directory to the other two servers

for i in 178 180; do scp -r /wdata/mongodb root@192.168.2.$i:/wdata/; done

Generate key files for sharding clusters

openssl rand -base64 90 -out /wdata/mongodb/keys/keyfile

Modify key file properties

chmod 600 /wdata/mongodb/keys/keyfile

Note: the permissions must be changed here, otherwise mongod may report an error on startup. The same keyfile must also be present on the other two servers, since all members of the cluster authenticate with it — copy it over if the directories were transferred before the keyfile was generated.
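For reference, what the openssl command produces can be imitated in Python. This is a sketch only (the deployment uses the openssl command above); it shows why mode 600 matters — mongod refuses keyfiles readable by group or other:

```python
import base64
import os
import stat
import tempfile

# Sketch: generate 90 random bytes and base64-encode them, which is what
# `openssl rand -base64 90` does, then restrict the file to mode 600.
key = base64.encodebytes(os.urandom(90))  # base64 with line breaks, like openssl

path = os.path.join(tempfile.mkdtemp(), "keyfile")
with open(path, "wb") as f:
    f.write(key)
os.chmod(path, 0o600)  # owner read/write only; mongod rejects looser modes

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
```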

Edit the mongos.conf file

systemLog:
  destination: file
  # log storage location
  path: /wdata/mongodb/logs/mongos.log
  logAppend: true
processManagement:
  # fork and run in background
  fork: true
  pidFilePath: /wdata/mongodb/data/mongos.pid
# port configuration
net:
  port: 30000
  maxIncomingConnections: 500
  unixDomainSocket:
    enabled: true
    # pathPrefix: /tmp
    filePermissions: 0700
  bindIp: 0.0.0.0
security:
  keyFile: /wdata/mongodb/keys/keyfile
# add config server replica set to the router
sharding:
  configDB: configs/192.168.2.177:21000,192.168.2.178:21000,192.168.2.180:21000
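The sharding.configDB value is the config replica set's name followed by its members, comma-separated. A small sketch of how that string is composed, using the set name and addresses from this article:

```python
# Sketch: composing the sharding.configDB value for mongos.conf --
# the format is "<config replset name>/<member>,<member>,...".
replset = "configs"  # must match replSetName in the config servers' file
members = ["192.168.2.177:21000", "192.168.2.178:21000", "192.168.2.180:21000"]

config_db = replset + "/" + ",".join(members)
print(config_db)
```

The replica set name before the slash has to match the replSetName the config servers were started with, or mongos will fail to reach them.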

Edit the mongosrv1.conf file

systemLog:
  destination: file
  logAppend: true
  path: /wdata/mongodb/logs/mongosrv.log
storage:
  dbPath: /wdata/mongodb/data/mongosrv
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
processManagement:
  # fork and run in background
  fork: true
  # location of pidfile
  pidFilePath: /wdata/mongodb/data/mongosrv/mongosrv.pid
net:
  port: 21000
  # bindIp: 0.0.0.0
  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  bindIpAll: true
  maxIncomingConnections: 65535
  unixDomainSocket:
    enabled: true
    filePermissions: 0700
security:
  keyFile: /wdata/mongodb/keys/keyfile
  authorization: enabled
replication:
  replSetName: configs
sharding:
  clusterRole: configsvr

Description: this environment needs three copies of this file, one on each of the 3 servers; only one is shown here.

Edit the shard1.conf file

systemLog:
  destination: file
  logAppend: true
  path: /wdata/mongodb/logs/shard1.log
storage:
  dbPath: /wdata/mongodb/data/shard1
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
processManagement:
  # fork and run in background
  fork: true
  # location of pidfile
  pidFilePath: /wdata/mongodb/data/shard1/shard1.pid
  # timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27001
  # bindIp: 0.0.0.0
  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  bindIpAll: true
  maxIncomingConnections: 65535
  unixDomainSocket:
    enabled: true
    # pathPrefix: /tmp/mongod1
    filePermissions: 0700
security:
  keyFile: /wdata/mongodb/keys/keyfile
  authorization: enabled
replication:
  replSetName: shard1
sharding:
  clusterRole: shardsvr

Description: the shards could also all be configured on a single server, with different directories acting as the shard servers, depending on requirements.

Transfer the configuration file to the other two servers

for i in 178 180; do scp -r /wdata/mongodb/config root@192.168.2.$i:/wdata/mongodb/; done

Start the cluster

On each of 192.168.2.177, 192.168.2.178, and 192.168.2.180, start the three daemons in turn:

mongod -f /wdata/mongodb/config/mongosrv1.conf
mongod -f /wdata/mongodb/config/shard1.conf
mongos -f /wdata/mongodb/config/mongos.conf
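After startup, each server is running all three daemons. A quick sketch listing the ports from the config files above and sanity-checking that they do not collide on one host:

```python
# Sketch: the three daemons each server runs and the ports assigned to them
# in the config files above (21000/27001/30000).
daemons = {
    "mongosrv1.conf (config server, mongod)": 21000,
    "shard1.conf (shard, mongod)": 27001,
    "mongos.conf (router, mongos)": 30000,
}

# All three ports must differ, since all three daemons share one host.
assert len(set(daemons.values())) == len(daemons), "port collision on one host"

for name, port in sorted(daemons.items(), key=lambda kv: kv[1]):
    print(f"{port}  {name}")
```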

Create the shard replica set

Operate on 192.168.2.177

[root@localhost mongodb]# mongo --port 27001
MongoDB shell version v4.0.7
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("b56157d2-fbc7-4226-aeb4-4de0b79dfcda") }
MongoDB server version: 4.0.7
> use admin
switched to db admin
> config = {_id: 'shard1', members: [{_id: 0, host: '192.168.2.177:27001'}, {_id: 1, host: '192.168.2.178:27001'}, {_id: 2, host: '192.168.2.180:27001'}]}
{
        "_id" : "shard1",
        "members" : [
                { "_id" : 0, "host" : "192.168.2.177:27001" },
                { "_id" : 1, "host" : "192.168.2.178:27001" },
                { "_id" : 2, "host" : "192.168.2.180:27001" }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }
shard1:PRIMARY> rs.status()
{
        "set" : "shard1",
        "date" : ISODate("2019-04-03T10:08:16.477Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        ...
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.2.177:27001",
                        "health" : 1,
                        "stateStr" : "PRIMARY",
                        "electionDate" : ISODate("2019-04-03T10:07:20Z"),
                        ...
                },
                {
                        "_id" : 1,
                        "name" : "192.168.2.178:27001",
                        "health" : 1,
                        "stateStr" : "SECONDARY",
                        "syncingTo" : "192.168.2.177:27001",
                        ...
                },
                {
                        "_id" : 2,
                        "name" : "192.168.2.180:27001",
                        "health" : 1,
                        "stateStr" : "SECONDARY",
                        "syncingTo" : "192.168.2.177:27001",
                        ...
                }
        ],
        "ok" : 1
}
shard1:PRIMARY> exit
bye

Create a sharded cluster database and users

[root@localhost mongodb]# mongo --port 27001
MongoDB shell version v4.0.7
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("a4c12af4-85e2-49e3-9a02-dd481307bcda") }
MongoDB server version: 4.0.7
shard1:PRIMARY> use admin
switched to db admin
shard1:PRIMARY> db.createUser({user: "admin", pwd: "123456", roles: [{role: "userAdminAnyDatabase", db: "admin"}]})
Successfully added user: {
        "user" : "admin",
        "roles" : [ { "role" : "userAdminAnyDatabase", "db" : "admin" } ]
}
shard1:PRIMARY> db.auth("admin", "123456")
1
shard1:PRIMARY> use test
switched to db test
shard1:PRIMARY> db.createUser({user: "root", pwd: "123456", roles: [{role: "dbOwner", db: "test"}]})
Successfully added user: {
        "user" : "root",
        "roles" : [ { "role" : "dbOwner", "db" : "test" } ]
}
shard1:PRIMARY> exit
bye

Configure mongosrv replica set

[root@localhost config]# mongo --port 21000
MongoDB shell version v4.0.7
connecting to: mongodb://127.0.0.1:21000/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("f9896034-d90f-4d52-bc55-51c3dc85aae9") }
MongoDB server version: 4.0.7
> config = {_id: 'configs', members: [{_id: 0, host: '192.168.2.177:21000'}, {_id: 1, host: '192.168.2.178:21000'}, {_id: 2, host: '192.168.2.180:21000'}]}
{
        "_id" : "configs",
        "members" : [
                { "_id" : 0, "host" : "192.168.2.177:21000" },
                { "_id" : 1, "host" : "192.168.2.178:21000" },
                { "_id" : 2, "host" : "192.168.2.180:21000" }
        ]
}
> rs.initiate(config)
{
        "ok" : 1,
        "$gleStats" : { ... },
        "lastCommittedOpTime" : Timestamp(0, 0)
}
configs:SECONDARY> rs.status()
{
        "set" : "configs",
        "date" : ISODate("2019-04-03T10:54:43.283Z"),
        "myState" : 1,
        "configsvr" : true,
        ...
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.2.177:21000",
                        "health" : 1,
                        "stateStr" : "PRIMARY",
                        "electionDate" : ISODate("2019-04-03T10:54:23Z"),
                        ...
                },
                {
                        "_id" : 1,
                        "name" : "192.168.2.178:21000",
                        "health" : 1,
                        "stateStr" : "SECONDARY",
                        "syncingTo" : "192.168.2.177:21000",
                        ...
                },
                {
                        "_id" : 2,
                        "name" : "192.168.2.180:21000",
                        "health" : 1,
                        "stateStr" : "SECONDARY",
                        "syncingTo" : "192.168.2.177:21000",
                        ...
                }
        ],
        "ok" : 1,
        ...
}
configs:PRIMARY>

With the deployment of the mongodb sharding cluster complete, common Mongodb operation commands are attached below.

show dbs                  # view all databases
db                        # view the current database
sh.status()               # view the cluster/sharding information
sh.enableSharding("dba")  # enable sharding for the "dba" database
db.help()                 # view help
db.account.stats()        # view stats for the account collection
db.createUser()           # create a new user
use databasename          # switch to (or create) a database
db.shutdownServer()       # shut down the server
db.dropDatabase()         # drop the current database (switch to it first)
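For applications, clients connect to the mongos routers rather than to any shard directly. A hypothetical sketch of the connection URI a client might use, built from this article's addresses and the "root" user created earlier (the URI itself is an illustration, not part of the original deployment):

```python
# Hypothetical sketch: a client connection URI pointing at all three mongos
# routers (port 30000), so the driver can fail over between them.
routers = ["192.168.2.177:30000", "192.168.2.178:30000", "192.168.2.180:30000"]
uri = "mongodb://root:123456@" + ",".join(routers) + "/test"
print(uri)
```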
