Architecture and Deployment of MongoDB (Replica Sets + Sharding)
I. Environment
To build a MongoDB sharding cluster, you need three roles:
Shard Server: a mongod instance that stores the actual data chunks.
Config Server: a mongod instance that stores the cluster's metadata, including chunk information.
Route Server: a mongos instance that provides front-end routing; clients connect through it, and it makes the whole cluster look like a single database.
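To make the three roles concrete, here is a minimal single-host sketch (ports and paths are illustrative only; the actual cluster layout used in this article follows below):
# Shard Server: stores the data chunks
./mongod --shardsvr --port 27017 --dbpath /data/shard --logpath /data/logs/shard.log --fork
# Config Server: stores the cluster metadata
./mongod --configsvr --port 30000 --dbpath /data/config --logpath /data/logs/config.log --fork
# Route Server: the client entry point
./mongos --configdb localhost:30000 --port 40000 --logpath /data/logs/mongos.log --fork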
Option 1:
Host            Shard1     Shard2     Shard3
192.168.136.14  master     slave      arbiter
192.168.136.15  arbiter    master     slave
192.168.136.16  slave      arbiter    master
192.168.136.26  slave      slave      slave
192.168.136.29  arbiter    arbiter    arbiter
1. Nodes:
S1: 192.168.136.14 (master), 192.168.136.26, 192.168.136.16, 192.168.136.15 (arbiter), 192.168.136.29 (arbiter)
S2: 192.168.136.15 (master), 192.168.136.26, 192.168.136.14, 192.168.136.16 (arbiter), 192.168.136.29 (arbiter)
S3: 192.168.136.16 (master), 192.168.136.26, 192.168.136.15, 192.168.136.14 (arbiter), 192.168.136.29 (arbiter)
Config servers: 192.168.136.14, 192.168.136.15, 192.168.136.16
2. Host and port layout:
Host: 192.168.136.14
mongod shard1: 27017 (master)
mongod shard2: 27018 (slave)
mongod shard3: 27019 (arbiter)
mongod config: 30000
mongos: 40000
Host: 192.168.136.15
mongod shard1: 27017 (arbiter)
mongod shard2: 27018 (master)
mongod shard3: 27019 (slave)
mongod config: 30000
mongos: 40000
Host: 192.168.136.16
mongod shard1: 27017 (slave)
mongod shard2: 27018 (arbiter)
mongod shard3: 27019 (master)
mongod config: 30000
mongos: 40000
Host: 192.168.136.26
mongod shard1: 27017 (slave)
mongod shard2: 27018 (slave)
mongod shard3: 27019 (slave)
Host: 192.168.136.29
mongod shard1: 27017 (arbiter)
mongod shard2: 27018 (arbiter)
mongod shard3: 27019 (arbiter)
II. Installation and deployment: software preparation and directories
1. Download the MongoDB tarball:
curl -O http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.0.0.tgz
2. Decompress mongodb-linux-x86_64-2.0.0.tgz:
tar zxvf mongodb-linux-x86_64-2.0.0.tgz
3. Rename mongodb-linux-x86_64-2.0.0 to mongodb:
mv mongodb-linux-x86_64-2.0.0 mongodb
4. Enter the mongodb directory:
cd mongodb
5. Create the data and logs folders:
mkdir data
mkdir logs
Configure Replica Sets and Config Servers
Configuration files (conf):
# 1. start_mongod shard1.conf
shardsvr=true
port=27017
dbpath=/data/database/shard1/
logpath=/data/logs/shard1.log
logappend=true
fork=true
replSet=s1
rest=true
journal=true
# 2. start_mongod shard2.conf
shardsvr=true
port=27018
dbpath=/data/database/shard2/
logpath=/data/logs/shard2.log
logappend=true
fork=true
replSet=s2
rest=true
journal=true
# 3. start_mongod shard3.conf
shardsvr=true
port=27019
dbpath=/data/database/shard3/
logpath=/data/logs/shard3.log
logappend=true
fork=true
replSet=s3
rest=true
journal=true
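The startup commands below also reference /mongodb/configsvr.conf, which the original does not show. A plausible sketch by analogy with the shard files above (assuming the configsvr option in place of shardsvr):
# 4. start_mongod configsvr.conf (assumed; not shown in the original)
configsvr=true
port=30000
dbpath=/data/database/config/
logpath=/data/logs/config.log
logappend=true
fork=true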
192.168.136.14
1. Create the appropriate folders (note: testadmin here is the client login name):
mkdir -p /data/database/shard1
mkdir -p /data/database/shard2
mkdir -p /data/database/shard3
mkdir -p /data/database/config
mkdir -p /data/logs
2. Start mongod:
./mongod --config /mongodb/shard1.conf
./mongod --config /mongodb/shard2.conf
./mongod --config /mongodb/shard3.conf
./mongod --config /mongodb/configsvr.conf
3. Check whether the mongod processes have started:
ps aux | grep mongodb | grep -v grep
4. Initialize the replica set (the IPs are the corresponding private-network IPs):
/testadmin/mongodb/bin/mongo --port 27017
config = {_id: 's1', members: [
  {_id: 0, host: '192.168.136.14:27017', priority: 5},
  {_id: 1, host: '192.168.136.26:27017', priority: 2},
  {_id: 2, host: '192.168.136.16:27017', priority: 0.5},
  {_id: 3, host: '192.168.136.15:27017', arbiterOnly: true},
  {_id: 4, host: '192.168.136.29:27017', arbiterOnly: true}
]}
rs.initiate(config)
rs.status()
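To confirm that the priorities elected the intended primary, you can ask any member (a sketch; output varies):
> rs.status()      // the member with "stateStr": "PRIMARY" should be 192.168.136.14:27017
> db.isMaster()    // the "primary" field names the current primary directly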
192.168.136.15
1. Create the appropriate folders (note: testadmin here is the client login name):
mkdir -p /data/database/shard1
mkdir -p /data/database/shard2
mkdir -p /data/database/shard3
mkdir -p /data/database/config
mkdir -p /data/logs
2. Start mongod:
./mongod --config /mongodb/shard1.conf
./mongod --config /mongodb/shard2.conf
./mongod --config /mongodb/shard3.conf
./mongod --config /mongodb/configsvr.conf
3. Check whether the mongod processes have started:
ps aux | grep mongodb | grep -v grep
4. Initialize the replica set:
/testadmin/mongodb/bin/mongo --port 27018
config = {_id: 's2', members: [
  {_id: 0, host: '192.168.136.15:27018', priority: 5},
  {_id: 1, host: '192.168.136.26:27018', priority: 2},
  {_id: 2, host: '192.168.136.14:27018', priority: 0.5},
  {_id: 3, host: '192.168.136.16:27018', arbiterOnly: true},
  {_id: 4, host: '192.168.136.29:27018', arbiterOnly: true}
]}
rs.initiate(config)
192.168.136.16
1. Create the appropriate folders (note: testadmin here is the client login name; the paths must match the dbpath values in the conf files):
mkdir -p /data/database/shard1
mkdir -p /data/database/shard2
mkdir -p /data/database/shard3
mkdir -p /data/database/config
mkdir -p /data/logs
2. Start mongod:
./mongod --config /mongodb/shard1.conf
./mongod --config /mongodb/shard2.conf
./mongod --config /mongodb/shard3.conf
./mongod --config /mongodb/configsvr.conf
3. Check whether the mongod processes have started:
ps aux | grep mongodb | grep -v grep
4. Initialize the replica set:
/testadmin/mongodb/bin/mongo --port 27019
config = {_id: 's3', members: [
  {_id: 0, host: '192.168.136.16:27019', priority: 5},
  {_id: 1, host: '192.168.136.26:27019', priority: 2},
  {_id: 2, host: '192.168.136.15:27019', priority: 0.5},
  {_id: 3, host: '192.168.136.14:27019', arbiterOnly: true},
  {_id: 4, host: '192.168.136.29:27019', arbiterOnly: true}
]}
rs.initiate(config)
rs.status()
On 192.168.136.26 and 192.168.136.29, respectively (these two hosts run no config server):
1. Create the appropriate folders (note: testadmin here is the client login name; the paths must match the dbpath values in the conf files):
mkdir -p /data/database/shard1
mkdir -p /data/database/shard2
mkdir -p /data/database/shard3
mkdir -p /data/logs
2. Start mongod:
./mongod --config /mongodb/shard1.conf
./mongod --config /mongodb/shard2.conf
./mongod --config /mongodb/shard3.conf
3. Check whether the mongod processes have started:
ps aux | grep mongodb | grep -v grep
Configure mongos (set up routing on each machine):
cd /mongodb/bin/
./mongos --fork --port 40000 --logpath /data/logs/mongos.log --configdb 192.168.136.14:30000,192.168.136.15:30000,192.168.136.16:30000
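A quick sanity check that each mongos came up (a sketch, assuming the mongo shell sits in the same directory):
./mongo 127.0.0.1:40000/admin --eval "printjson(db.runCommand({ping: 1}))"   # expect { "ok" : 1 }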
Add shards
1. Connect to any one mongos (the others do not need this step):
/home/testadmin/bin/mongo --port 40000
use admin
db.runCommand({addshard: 's1/192.168.136.14:27017,192.168.136.26:27017,192.168.136.16:27017'})
db.runCommand({addshard: 's2/192.168.136.15:27018,192.168.136.26:27018,192.168.136.14:27018'})
db.runCommand({addshard: 's3/192.168.136.16:27019,192.168.136.26:27019,192.168.136.15:27019'})
db.runCommand({listshards: 1})
db.runCommand({enablesharding: 'weibo'})
db.runCommand({shardcollection: 'weibo.test', key: {_id: 1}, unique: true})
printShardingStatus()
db.data.stats()
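To watch the shards actually take data, you can push some test documents through mongos and inspect the distribution (a sketch; weibo.test is the collection sharded above, and the document shape is made up):
> use weibo
> for (var i = 0; i < 100000; i++) { db.test.insert({_id: i, v: "test" + i}) }
> db.test.stats()         // per-shard document counts appear under "shards"
> printShardingStatus()   // chunk ranges assigned to s1/s2/s3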
III. User authentication
1. Note: before 1.9.1, replica sets did not support user authentication; the only mechanism was the keyFile key file. Fortunately, the official 2.0 release came out recently, and many of these problems have been solved.
Note: for user authentication, mongod must be started with --auth.
# Set a username and password
> use test
> db.addUser('mongo', '456123')
> db.auth('mongo', '456123')
> db.system.users.find()                  // check whether the user was added successfully
> db.system.users.remove({user: 'mongo'}) // remove the user
mongo <database> -u mongo -p
Note: --auth only takes effect at startup; after configuring users for the first time, restart mongod with --auth for it to take effect.
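For replica set members, --auth alone is not sufficient: the members also need the shared keyFile mentioned above so they can authenticate to each other. A sketch of how that might look (the key file path is illustrative):
# Generate a key file once and copy it to every member; permissions must be restrictive
openssl rand -base64 64 > /mongodb/keyfile
chmod 600 /mongodb/keyfile
# Start each mongod with authentication enabled
./mongod --config /mongodb/shard1.conf --keyFile /mongodb/keyfile --auth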
IV. What if the load gets heavy? Add servers; but how?
1. If the read load is heavy, add slave nodes to spread the reads.
After starting the new node, add it on the primary.
For example: rs.add("10.168.0.100:27017"); when we see it become SECONDARY, everything is normal.
2. If the write load is heavy, add another set of shard nodes to spread the writes (a hypothetical sketch follows).
For example, start the mongod processes as described above and add the shard.
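For instance, a hypothetical fourth shard s4 would be registered through mongos just like the first three (the hosts and port here are made up for illustration):
> use admin
> db.runCommand({addshard: 's4/192.168.136.30:27020,192.168.136.31:27020'})
> db.runCommand({listshards: 1})   // the new shard should now appear in the list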
V. Backup and recovery strategy
Incremental backup (add a delayed backup node)
1. Use another secondary machine to copy the data over.
2. Add fastsync=true to the configuration file on the new machine. (When starting a node from existing data, you must add fastsync=true, otherwise startup reports an error. If the node syncs all data directly from the primary, this parameter is not needed.)
3. After startup, add the node on the primary.
For example, rs.add("10.168.0.102:27017"); when we see it become SECONDARY, everything is normal and it can serve online traffic.
4. View the current node information with rs.conf() (admin database credentials are required).
5. rs.remove("10.168.0.102:27017") deletes a node.
6. Add an arbiter node: rs.addArb("10.73.24.171:19003")
7. Add delayed backup machines:
rs.add({_id: 5, host: "10.168.0.102:27017", priority: 0, slaveDelay: 300})
rs.add({_id: 5, host: "10.168.0.102:27018", priority: 0, slaveDelay: 300})
rs.add({_id: 5, host: "10.168.0.102:27019", priority: 0, slaveDelay: 300})
Note: slaveDelay is in seconds.
8. When the error "replSet error RS102 too stale to catch up" occurs, we can run db.printReplicationInfo() to check the oplog information on the primary and secondaries.
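A sketch of that check:
> db.printReplicationInfo()        // on the primary: oplog size and the time window it covers
> db.printSlaveReplicationInfo()   // on a secondary: how far it lags behind the primary
If the lagging member has fallen outside the primary's oplog window, it cannot catch up incrementally and must be re-seeded from a recent data copy (with fastsync=true, as in step 2 above).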
Restoring data with the delayed backup node
1. First, dump the delayed backup node's data for each shard's primary. For example:
./mongodump -h 192.168.136.14:27017 -d weibo -o /data/mongoback/
./mongodump -h 192.168.136.15:27018 -d weibo -o /data/mongoback/
./mongodump -h 192.168.136.16:27019 -d weibo -o /data/mongoback/
2. Import the backed-up data into the master of each node. For example:
It is recommended to run a repair first, which compacts the space (reclaims deleted data):
db.repairDatabase()
./mongorestore -h 127.0.0.1:27017 --directoryperdb /data/mongoback --drop --indexesLast
./mongorestore -h 127.0.0.1:27018 --directoryperdb /data/mongoback --drop --indexesLast
./mongorestore -h 127.0.0.1:27019 --directoryperdb /data/mongoback --drop --indexesLast
Full backup (through mongos)
1. Write a script that backs up the data regularly in the early morning (a sketch of such a script follows this list). For example:
./mongodump -h 10.168.0.187:40000 -d weibo -o /data/mongoback/
2. Recover the data:
3. It is recommended to run a repair first, which compacts the space:
4. db.repairDatabase()
./mongorestore -h 10.168.0.187:40000 --directoryperdb /data/mongoback --drop --indexesLast
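A sketch of the nightly backup script and its cron entry (the path /mongodb/backup.sh and the schedule are illustrative):
#!/bin/sh
# /mongodb/backup.sh - nightly dump of the weibo database through mongos
DATE=`date +%Y%m%d`
/mongodb/bin/mongodump -h 10.168.0.187:40000 -d weibo -o /data/mongoback/$DATE
# crontab entry: run every day at 03:00
# 0 3 * * * /mongodb/backup.sh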
VI. Other issues
1. If startup is unsuccessful, try a repair. For example:
./mongod --port 27017 --repair --dbpath /data/database/shard1/
2. If the primary node is killed, then after it comes back up, use rs.stepDown(100) to make it give up the primary position.
3. For any other questions, feel free to contact me (Li Hang), Weibo: http://weibo.com/lidaohang