2025-01-19 Update From: SLTechnology News & Howtos > Database
Shulou (Shulou.com) 06/01 Report
Problem description:
In a production environment, when read pressure on the secondary replica set members is high, it can be relieved by adding a new secondary member.
To avoid downtime and extra load on the primary, run the mongodump backup against an existing secondary member.
To make the restore of the new secondary member fast, mount the backup directory of the secondary doing the backup directly on the new machine via NFS.
To ensure data consistency, use the --oplog option with mongodump and the --oplogReplay option with mongorestore.
To allow for storage expansion later, store each database in its own directory via the --directoryperdb option.
Solution:
Step 1: mount the NFS share on the new machine
See: configuring the NFS network file system on CentOS Linux and client usage
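As a sketch of this step, assuming the backup secondary (192.168.11.1) exports /mnt/mongo and the new machine (192.168.11.2) mounts it at /nfspool (these paths and IPs are taken from the commands later in this article; the export options are hypothetical and depend on your environment):

```shell
# On the backup secondary (NFS server): export the dump directory.
# Hypothetical /etc/exports entry:
#   /mnt/mongo  192.168.11.2(ro,sync,no_subtree_check)
exportfs -ra

# On the new machine (NFS client): mount the export at /nfspool,
# so the dump written to /mnt/mongo/mongodata on the backup node
# appears as /nfspool/mongodata here.
mkdir -p /nfspool
mount -t nfs 192.168.11.1:/mnt/mongo /nfspool
```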
Step 2: back up from a secondary member
The local database is not backed up; all other databases, including admin, are backed up.
mongodump --host=192.168.11.1:27017 --oplog -o /mnt/mongo/mongodata -u xucy -p Passw0rd --authenticationDatabase admin > mongodump.log 2>&1 &
You can watch the backup progress by tailing the log:
tail -f mongodump.log
Step 3: restore the database on the new instance
Run mongorestore without starting mongod; with --dbpath it writes directly to the data files.
mongorestore --oplogReplay --dbpath /data/var/lib/mongodb --directoryperdb /nfspool/mongodata > mongorestore.log 2>&1 &
You can watch the restore progress by tailing the log:
tail -f mongorestore.log
Step 4: rebuild the oplog on the new instance
1. Check the oplog size and replication window on the primary:
rs_main:PRIMARY> db.printReplicationInfo()
configured oplog size:   23862.404296875MB
log length start to end: 39405secs (10.95hrs)
oplog first event time:  Sun Feb 08 2015 10:34:07 GMT-0600 (CST)
oplog last event time:   Sun Feb 08 2015 21:30:52 GMT-0600 (CST)
now:                     Sun Feb 08 2015 21:30:53 GMT-0600 (CST)
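The size argument used when recreating the oplog in the next step is in bytes, while db.printReplicationInfo() reports it in MB. A quick arithmetic check (plain Python, just unit conversion) shows that 23 GB is close to, and slightly below, the ~23862 MB reported above:

```python
# Oplog size reported by db.printReplicationInfo(), in MB
reported_mb = 23862.404296875

# Size passed to createCollection in the next step, in bytes: 23 GB
new_size_bytes = 23 * 1024 * 1024 * 1024

# Convert the reported size to bytes for comparison
reported_bytes = reported_mb * 1024 * 1024

print(new_size_bytes)  # 24696061952
# reported_bytes is roughly 25.0e9, i.e. slightly larger than 23 GB,
# so the recreated oplog is marginally smaller than the original.
print(reported_bytes > new_size_bytes)  # True
```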
2. Rebuild the oplog on the new machine:
Start it as a standalone instance and run the following drop and create commands:
> use local
> db.oplog.rs.drop()
> db.createCollection("oplog.rs", {"capped": true, "size": 23 * 1024 * 1024 * 1024})
or:
> db.runCommand({create: "oplog.rs", capped: true, size: (23 * 1024 * 1024 * 1024)})
Step 5: restore the oplog on the new instance
This oplog is the oplog.bson file exported by mongodump with --oplog.
mongorestore -d local -c oplog.rs /nfspool/mongodata/oplog.bson
Step 6: start the new instance with the replica set configuration
Start the new instance with the same --replSet and --keyFile parameters as the source replica set, so that it starts as a replica set member.
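A sketch of the startup command for this step, reusing the data path and --directoryperdb from the restore above and the replica set name rs_main seen in the shell prompt earlier; the key file path and log path are placeholders, so substitute the values used by the source replica set:

```shell
# Start the new member as part of the replica set (not standalone).
# --replSet must match the source set's name; --keyFile must be a copy
# of the same key file used by the existing members.
mongod --dbpath /data/var/lib/mongodb --directoryperdb \
       --replSet rs_main --keyFile /data/keyfile \
       --port 27017 --fork --logpath /var/log/mongodb/mongod.log
```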
Step 7: add the node to the replica set
> rs.add("192.168.11.2:27017")
{ "ok" : 1 }