2025-01-19 Update — SLTechnology News&Howtos > Database
MongoDB Auto-Sharding solves the problems of mass storage and dynamic scaling, but on its own it still falls short of the high availability a real production environment requires. Hence the "Replica Sets + Sharding" architecture:
Shard:
Use Replica Sets so that every data node has a backup, automatic failover, and automatic recovery.
Config:
Use 3 config servers to ensure metadata integrity.
Route:
Use 3 routing processes (mongos) for load balancing and better client access performance.
The basic structure is as follows:

Host     IP               Services and ports
ServerA  192.168.10.150   mongod shard1_1:27017, mongod shard2_1:27018, mongod config1:20000, mongos1:30000
ServerB  192.168.10.151   mongod shard1_2:27017, mongod shard2_2:27018, mongod config2:20000, mongos2:30000
ServerC  192.168.10.154   mongod shard1_3:27017, mongod shard2_3:27018, mongod config3:20000, mongos3:30000
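For scripting deployment checks, the topology above can be captured as plain data. A minimal Python sketch (the host IPs and ports come from the table; the helper function is illustrative, not part of the deployment):

```python
# Cluster topology from the table above: each host runs two shard members,
# one config server, and one mongos router.
TOPOLOGY = {
    "ServerA": {"ip": "192.168.10.150",
                "services": {"shard1_1": 27017, "shard2_1": 27018,
                             "config1": 20000, "mongos1": 30000}},
    "ServerB": {"ip": "192.168.10.151",
                "services": {"shard1_2": 27017, "shard2_2": 27018,
                             "config2": 20000, "mongos2": 30000}},
    "ServerC": {"ip": "192.168.10.154",
                "services": {"shard1_3": 27017, "shard2_3": 27018,
                             "config3": 20000, "mongos3": 30000}},
}

def endpoints(port):
    """Return every ip:port endpoint in the cluster listening on the given port."""
    return [f'{host["ip"]}:{p}'
            for host in TOPOLOGY.values()
            for p in host["services"].values() if p == port]

print(endpoints(27017))  # the three members of replica set shard1
```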
Create the data directories
ServerA:
mkdir -p /data/shardset/{shard1_1,shard2_1,config}/
ServerB:
mkdir -p /data/shardset/{shard1_2,shard2_2,config}/
ServerC:
mkdir -p /data/shardset/{shard1_3,shard2_3,config}/
Configure the replica sets
Configure the Replica Set used by shard1
ServerA:
mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/shardset/shard1_1 --logpath /data/shardset/shard1_1/shard1_1.log --logappend --fork
ServerB:
mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/shardset/shard1_2 --logpath /data/shardset/shard1_2/shard1_2.log --logappend --fork
ServerC:
mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/shardset/shard1_3 --logpath /data/shardset/shard1_3/shard1_3.log --logappend --fork
Use mongo to connect to the mongod on port 27017 of one of the machines and initialize the replica set "shard1":
[root@template]# mongo --port 27017
MongoDB shell version: 2.6.0
connecting to: 127.0.0.1:27017/test
> config = {_id: 'shard1', members: [{_id: 0, host: '192.168.10.150:27017'}, {_id: 1, host: '192.168.10.151:27017'}, {_id: 2, host: '192.168.10.154:27017'}]}
{
	"_id" : "shard1",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.10.150:27017"
		},
		{
			"_id" : 1,
			"host" : "192.168.10.151:27017"
		},
		{
			"_id" : 2,
			"host" : "192.168.10.154:27017"
		}
	]
}
> rs.initiate(config)
{
	"info" : "Config now saved locally.  Should come online in about a minute.",
	"ok" : 1
}
Configure the Replica Set used by shard2
ServerA:
mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/shardset/shard2_1 --logpath /data/shardset/shard2_1/shard2_1.log --logappend --fork
ServerB:
mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/shardset/shard2_2 --logpath /data/shardset/shard2_2/shard2_2.log --logappend --fork
ServerC:
mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/shardset/shard2_3 --logpath /data/shardset/shard2_3/shard2_3.log --logappend --fork
Use mongo to connect to the mongod on port 27018 of one of the machines and initialize the replica set "shard2":
[root@template]# mongo --port 27018
MongoDB shell version: 2.6.0
connecting to: 127.0.0.1:27018/test
> config = {_id: 'shard2', members: [{_id: 0, host: '192.168.10.150:27018'}, {_id: 1, host: '192.168.10.151:27018'}, {_id: 2, host: '192.168.10.154:27018'}]}
{
	"_id" : "shard2",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.10.150:27018"
		},
		{
			"_id" : 1,
			"host" : "192.168.10.151:27018"
		},
		{
			"_id" : 2,
			"host" : "192.168.10.154:27018"
		}
	]
}
> rs.initiate(config)
{
	"info" : "Config now saved locally.  Should come online in about a minute.",
	"ok" : 1
}
Configure 3 config servers
Execute on Server A, B, and C:
mongod --configsvr --dbpath /data/shardset/config --port 20000 --logpath /data/shardset/config/config.log --logappend --fork
Configure 3 route processes
Execute on Server A, B, and C:
mongos --configdb 192.168.10.150:20000,192.168.10.151:20000,192.168.10.154:20000 --port 30000 --chunkSize 1 --logpath /data/shardset/mongos.log --logappend --fork
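Note that --configdb takes all three config servers as a single comma-separated list; assembling the full mongos command from the host list (a Python sketch, purely illustrative) avoids typos in that string:

```python
# Config servers from the topology table; all listen on port 20000.
CONFIG_SERVERS = ["192.168.10.150", "192.168.10.151", "192.168.10.154"]
CONFIG_PORT = 20000

# --configdb expects one comma-separated "ip:port" list, no spaces.
configdb = ",".join(f"{ip}:{CONFIG_PORT}" for ip in CONFIG_SERVERS)

cmd = (f"mongos --configdb {configdb} --port 30000 --chunkSize 1 "
       "--logpath /data/shardset/mongos.log --logappend --fork")
print(cmd)
```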
Configure the shard cluster
Connect to the mongos process on port 30000 of one of the machines, switch to the admin database, and run the following:
[root@template]# mongo --port 30000
MongoDB shell version: 2.6.0
connecting to: 127.0.0.1:30000/test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
	http://docs.mongodb.org/
Questions? Try the support group
	http://groups.google.com/group/mongodb-user
mongos >
mongos > use admin
switched to db admin
mongos > db.runCommand({addshard: "shard1/192.168.10.150:27017,192.168.10.151:27017,192.168.10.154:27017"})
{ "shardAdded" : "shard1", "ok" : 1 }
mongos > db.runCommand({addshard: "shard2/192.168.10.150:27018,192.168.10.151:27018,192.168.10.154:27018"})
{ "shardAdded" : "shard2", "ok" : 1 }
mongos >
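The host string passed to addshard has the form <set name>/<member>,<member>,...; a tiny helper (illustrative, not part of the tutorial) to compose it from a member list:

```python
def shard_addr(rs_name, members):
    """Compose the 'setname/host1,host2,...' string used by the addshard command."""
    return f"{rs_name}/{','.join(members)}"

addr = shard_addr("shard2", ["192.168.10.150:27018",
                             "192.168.10.151:27018",
                             "192.168.10.154:27018"])
print(addr)
```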
Enable sharding for the database and collection
mongos > db.runCommand({enablesharding: "test"})
{ "ok" : 1 }
mongos > db.runCommand({shardcollection: "test.users", key: {_id: 1}})
{ "collectionsharded" : "test.users", "ok" : 1 }
mongos >
View the configuration
mongos > use admin
switched to db admin
mongos > db.runCommand({listshards: 1})
{
	"shards" : [
		{
			"_id" : "shard1",
			"host" : "shard1/192.168.10.150:27017,192.168.10.151:27017,192.168.10.154:27017"
		},
		{
			"_id" : "shard2",
			"host" : "shard2/192.168.10.150:27018,192.168.10.151:27018,192.168.10.154:27018"
		}
	],
	"ok" : 1
}
Verify that sharding is working properly
mongos > use test
switched to db test
mongos > for (var i = 1; i <= 200000; i++) db.users.insert({id: i, addr_1: "Beijing", addr_2: "Shanghai"})
mongos > db.users.stats()
{
	"sharded" : true,
	"systemFlags" : 1,
	"userFlags" : 1,
	"ns" : "test.users",
	"count" : 200000,
	"numExtents" : 13,
	"size" : 22400000,
	"storageSize" : 33689600,
	"totalIndexSize" : 6908720,
	"indexSizes" : {
		"_id_" : 6908720
	},
	"avgObjSize" : 112,
	"nindexes" : 1,
	"nchunks" : 13,
	"shards" : {
		"shard1" : {
			"ns" : "test.users",
			"count" : 147600,
			"size" : 16531200,
			"avgObjSize" : 112,
			"storageSize" : 22507520,
			"numExtents" : 7,
			"nindexes" : 1,
			"lastExtentSize" : 11325440,
			"paddingFactor" : 1,
			"systemFlags" : 1,
			"userFlags" : 1,
			"totalIndexSize" : 4807488,
			"indexSizes" : {
				"_id_" : 4807488
			},
			"ok" : 1
		},
		"shard2" : {
			"ns" : "test.users",
			"count" : 52400,
			"size" : 5868800,
			"avgObjSize" : 112,
			"storageSize" : 11182080,
			"numExtents" : 6,
			"nindexes" : 1,
			"lastExtentSize" : 8388608,
			"paddingFactor" : 1,
			"systemFlags" : 1,
			"userFlags" : 1,
			"totalIndexSize" : 2101232,
			"indexSizes" : {
				"_id_" : 2101232
			},
			"ok" : 1
		}
	},
	"ok" : 1
}
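The stats output confirms the 200,000 documents are split across the two shards (147,600 on shard1, 52,400 on shard2). A quick sanity check of that split, using the counts from the output above:

```python
# Per-shard document counts taken from db.users.stats() above.
counts = {"shard1": 147600, "shard2": 52400}

total = sum(counts.values())
# Percentage of documents held by each shard, to one decimal place.
share = {s: round(100 * c / total, 1) for s, c in counts.items()}

print(total, share)  # total matches the inserted 200,000 documents
```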
ServerA:
[root@template]# mongo --port 27017
MongoDB shell version: 2.6.0
connecting to: 127.0.0.1:27017/test
shard1:PRIMARY> show dbs
admin  (empty)
local  4.076GB
test   0.078GB
shard1:PRIMARY>
ServerB:
[root@template]# mongo --port 27017
MongoDB shell version: 2.6.0
connecting to: 127.0.0.1:27017/test
shard1:SECONDARY>
shard1:SECONDARY> show dbs
admin  (empty)
local  4.076GB
test   0.078GB
shard1:SECONDARY>
By default a secondary does not serve queries, but reads can be enabled on it by executing:
rs.slaveOk()
or
db.getMongo().setSlaveOk()
These statements are essential for read/write separation.
ServerC:
[root@template]# mongo --port 27017
MongoDB shell version: 2.6.0
connecting to: 127.0.0.1:27017/test
shard1:SECONDARY> show dbs
admin  (empty)
local  4.076GB
test   0.078GB
shard1:SECONDARY>
To switch primary and secondary online, change the member priorities (this must be run on the primary):
[root@template]# mongo --port 27017
conf = rs.conf()
conf.members[0].priority = 2
conf.members[1].priority = 5
conf.members[2].priority = 1
rs.reconfig(conf)
View node status
rs.status()
or
db.isMaster()
Check whether a database has sharding enabled
use config
db.databases.find()
Remove a node
rs.remove("192.168.10.155:27017")
Add a node
rs.add("192.168.10.154:27017")
Add an arbiter node
rs.addArb("192.168.10.155:27017")
rs.isMaster()
	"arbiters" : [
		"192.168.10.155:27017"
	]
Create a user
use admin
db.createUser(
  {
    user: "adminUserName",
    pwd: "userPassword",
    roles: [
      { role: "userAdminAnyDatabase", db: "admin" }
    ]
  }
)