
MongoDB replication set configuration steps

2025-02-24 Update From: SLTechnology News&Howtos


1. Create a configuration file for each of the three nodes (e.g. /etc/28001.conf)

28001.conf

# bind_ip=192.168.20.144
port=28001
logpath=/data/db/mongodb_log/28001.log
logappend=true
pidfilepath=/data/db/mongodb_data28001/28001.pid
oplogSize=500
dbpath=/data/db/mongodb_data28001
replSet=imooc
fork=true

28002.conf

# bind_ip=192.168.20.144
port=28002
logpath=/data/db/mongodb_log/28002.log
logappend=true
pidfilepath=/data/db/mongodb_data28002/28002.pid
oplogSize=500
dbpath=/data/db/mongodb_data28002
replSet=imooc
fork=true

28003.conf

# bind_ip=192.168.20.144
port=28003
logpath=/data/db/mongodb_log/28003.log
logappend=true
pidfilepath=/data/db/mongodb_data28003/28003.pid
oplogSize=500
dbpath=/data/db/mongodb_data28003
replSet=imooc
fork=true
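Since the three config files differ only in port number and paths, they can be generated from one template. A minimal sketch; CONF_DIR and DATA_DIR default to directories under /tmp here and are assumptions (the article itself uses /usr/local/mongodb/conf and /data/db), so override them to match your layout.

```shell
#!/bin/sh
# Sketch: generate the three per-node config files shown above.
# CONF_DIR and DATA_DIR are assumptions; the article uses
# /usr/local/mongodb/conf and /data/db.
CONF_DIR="${CONF_DIR:-/tmp/mongodb/conf}"
DATA_DIR="${DATA_DIR:-/tmp/mongodb/data}"
mkdir -p "$CONF_DIR" "$DATA_DIR/mongodb_log"
for port in 28001 28002 28003; do
  mkdir -p "$DATA_DIR/mongodb_data${port}"
  cat > "$CONF_DIR/${port}.conf" <<EOF
# bind_ip=192.168.20.144
port=${port}
logpath=${DATA_DIR}/mongodb_log/${port}.log
logappend=true
pidfilepath=${DATA_DIR}/mongodb_data${port}/${port}.pid
oplogSize=500
dbpath=${DATA_DIR}/mongodb_data${port}
replSet=imooc
fork=true
EOF
done
```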

2. Start the mongod service on each node using its configuration file

/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/28001.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/28002.conf
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/28003.conf
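Because the processes fork into the background, it is worth confirming that each node actually came up. A sketch that checks for the pid file each node writes via pidfilepath; the /data/db layout is an assumption taken from the configs above.

```shell
#!/bin/sh
# Sketch: after "fork=true" startup, each node should have written a pid file
# at its pidfilepath. Report which nodes appear to be up; the /data/db layout
# is an assumption taken from the configs above.
check_nodes() {
  dir="$1"
  for port in 28001 28002 28003; do
    pidfile="$dir/mongodb_data${port}/${port}.pid"
    if [ -s "$pidfile" ]; then
      echo "node ${port}: running with pid $(cat "$pidfile")"
    else
      echo "node ${port}: no pid file, check ${port}.log"
    fi
  done
}
check_nodes "${DATA_DIR:-/data/db}"
```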

3. Log in to one of the running mongod instances

/usr/local/mongodb/bin/mongo 127.0.0.1:28001/admin

4. Build the replication set configuration document

config = {_id: "imooc", members: [{_id: 1, host: "127.0.0.1:28001"}, {_id: 2, host: "127.0.0.1:28002"}, {_id: 3, host: "127.0.0.1:28003"}]}

5. View the node members

config.members

6. Set node 3 as an arbiter node (members is a 0-indexed array, so members[2] is the node with _id 3)

config.members[2] = {"_id": 3, "host": "127.0.0.1:28003", arbiterOnly: true}

Or, on the primary node, run:

rs.addArb("127.0.0.1:28003")

7. Initialize the replica set

rs.initiate(config) (If initialization reports that the set has already been initialized, back up the data and recreate the primary node.)
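Steps 4 and 7 can also be replayed non-interactively by feeding JavaScript to the mongo shell. A sketch that only writes the initiation script to a file; the /tmp/rs_init.js location is an arbitrary choice, and the actual mongo invocation is left commented out.

```shell
#!/bin/sh
# Sketch: write the replica-set initiation JavaScript (steps 4 and 7) to a
# file so it can be replayed with the mongo shell; the invocation itself is
# left commented out. /tmp/rs_init.js is an arbitrary location.
cat > /tmp/rs_init.js <<'EOF'
config = {
  _id: "imooc",
  members: [
    {_id: 1, host: "127.0.0.1:28001"},
    {_id: 2, host: "127.0.0.1:28002"},
    {_id: 3, host: "127.0.0.1:28003"}
  ]
};
rs.initiate(config);
EOF
# /usr/local/mongodb/bin/mongo 127.0.0.1:28001/admin /tmp/rs_init.js
```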

8. Reload the config parameters (run rs.reconfig after every modification of config; reconfig briefly disconnects the replica set, so schedule it in a maintenance window rather than running it casually in production)

rs.reconfig(config, {force: true})

(This command must be run on the primary node. If the shell prompt shows something other than PRIMARY, you need to add the force parameter.)

9. View replication set status

rs.status()

10. MongoDB client applications distinguish primary from secondary nodes through the isMaster command (a hidden node's information is visible in rs.status(), but not in rs.isMaster())

rs.isMaster()

Maintenance:

To add a replica, log in to the primary node and run:

rs.add("ip:port")

To delete a replica:

rs.remove("ip:port")

(The ip:port must be a mongod instance started with the same replSet name.)

After restoring the original data with the import command

mongorestore -h 127.0.0.1:12345 -d syt /mnt/mongo_data/

where /mnt/mongo_data holds the dump files to import, it was found that all the slave nodes had entered the RECOVERING state.

Cause of the problem

The secondary nodes cannot replay the oplog as fast as the primary generates it, so they remain stuck in the RECOVERING state.

Solution:

First stop the mongod process of the slave node, delete all the data under its data directory, and restart the mongod process. Note that if there is an arbiter mongod process, it must be stopped as well. When starting, start the replSet mongod process first and then the arbiter. After startup the node moves automatically from RECOVERING to STARTUP2 and finally to SECONDARY.

The second way is to delete the data directory of the RECOVERING node, copy all of the primary's data files to that node, and restart. Be sure to stop all the mongod processes before doing this!
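The first resync procedure can be captured as an ordered script. This is a dry-run sketch only: the first argument is a command runner, and passing "echo" merely prints the steps, since wiping a dbpath is destructive. The paths are assumptions taken from the 28002.conf above.

```shell
#!/bin/sh
# Sketch of the stuck-in-RECOVERING resync steps described above. Dry-run by
# default: passing "echo" as the runner only prints the commands. Run it for
# real only after a backup; paths are assumptions from 28002.conf.
resync_node() {
  run="$1"; conf="$2"; dbpath="$3"; pidfile="$4"
  # 1. stop the stuck secondary (stop any arbiter process too)
  $run kill "$(cat "$pidfile" 2>/dev/null || echo '<pid>')"
  # 2. clear the node's data directory
  $run rm -rf "$dbpath"
  # 3. restart: the node then moves RECOVERING -> STARTUP2 -> SECONDARY
  $run /usr/local/mongodb/bin/mongod -f "$conf"
}
resync_node echo /usr/local/mongodb/conf/28002.conf \
  /data/db/mongodb_data28002 /data/db/mongodb_data28002/28002.pid
```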

Running show tables on a slave node fails with an error saying that the node is not a master. To read data from a secondary, enable slave reads:

rs.slaveOk(true)

Running show tables again then succeeds.

Simulated downtime:

Execute on the primary node: db.shutdownServer()

Error: "errmsg" : "shutdown must run from localhost when running db without auth"

This error means that when authentication is not enabled, the command must be run from localhost. If bind_ip is specified in the startup configuration file, you can only connect through that bind_ip; here we cannot log in via localhost and can only shut the node down with kill.

Find the corresponding process number with ps -ef | grep mongod

Then run kill -2 <pid>. (Using kill to stop an instance is not recommended except as a last resort; even then, use kill -2, which sends SIGINT. kill -2 lets the instance finish its queued operations before shutting down.)
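Since each config sets pidfilepath, the pid can be read from the pid file instead of ps. A sketch; the pid-file path is an assumption taken from the 28001.conf above.

```shell
#!/bin/sh
# Sketch: read the pid recorded via pidfilepath and send SIGINT (kill -2),
# which mongod handles as a clean-shutdown request. The default pid-file
# path is an assumption from 28001.conf above.
sigint_node() {
  pidfile="$1"
  if [ -s "$pidfile" ]; then
    echo "sending SIGINT to pid $(cat "$pidfile")"
    kill -2 "$(cat "$pidfile")"
  else
    echo "no pid file at $pidfile"
    return 1
  fi
}
sigint_node "${PIDFILE:-/data/db/mongodb_data28001/28001.pid}" || true
```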

Test after shutdown:

Check the original primary node: pressing enter now reports an error.

Check the original slave nodes: one of the two slave nodes has become PRIMARY.

The node switch can be seen with show log rs.

MongoDB Enterprise imooc:PRIMARY > show log rs

2017-10-30T01:53:15.617-0700 I REPL [replExecDBWorker-0] New replica set config in use: {_id: "imooc", version: 1, protocolVersion: 1, members: [{_id: 1, host: "127.0.0.1:28001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1}, {_id: 2, host: "127.0.0.1:28002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1}, {_id: 3, host: "127.0.0.1:28003", arbiterOnly: false, buildIndexes: true, h

2017-10-30T01:53:15.617-0700 I REPL [replExecDBWorker-0] transition to STARTUP2

2017-10-30T01:53:15.618-0700 I REPL [rsSync] transition to RECOVERING

2017-10-30T01:53:15.619-0700 I REPL [rsSync] transition to SECONDARY

2017-10-30T01:53:15.621-0700 I REPL [ReplicationExecutor] Member 127.0.0.1:28001 is now in state SECONDARY

2017-10-30T01:53:20.624-0700 I REPL [ReplicationExecutor] Member 127.0.0.1:28003 is now in state SECONDARY

2017-10-30T01:53:25.625-0700 I REPL [ReplicationExecutor] Member 127.0.0.1:28001 is now in state PRIMARY

2017-10-30T02:13:53.329-0700 I REPL [rsBackgroundSync] could not find member to sync from

2017-10-30T02:14:01.816-0700 I REPL [ReplicationExecutor] transition to PRIMARY

2017-10-30T02:14:03.033-0700 I REPL [rsSync] transition to primary complete; database writes are now permitted
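The same state changes can also be pulled out of a node's on-disk log without a mongo shell. A sketch that greps for the transition lines; the log path is an assumption taken from the configs above.

```shell
#!/bin/sh
# Sketch: extract replica-set state changes from a mongod log file without a
# mongo shell. The default log path is an assumption from the configs above.
show_transitions() {
  grep -E "transition to|is now in state" "$1"
}
show_transitions /data/db/mongodb_log/28001.log 2>/dev/null || true
```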

(To be verified.) If a node cannot find the oplog (as seen in the log), you can copy the primary's oplog to the slave node:

1. Back it up:

./mongodump --port 28011 -d local -c oplog.rs -o /opt/backup/0706local/

2. Restore it to another single-node MongoDB server:

./mongorestore --port 28011 -d temp_local -c shard1_oplog --dir /opt/backup/0706local/local/oplog.rs.bson

If an error occurs during recovery: Failed: error connecting to db server: no reachable servers


If a node has problems and still cannot start even after deleting mongod.lock and starting with --repair, and the node serves only as an arbiter that stores no data, the simplest (if crude) fix is to delete all the data files under the node and restart; it will then start successfully.
