
What to do when the secondary node console of a MongoDB replica set cluster reports error 10061


This article explains what to do when the secondary node console of a MongoDB replica set cluster reports error 10061. It walks through the diagnosis step by step and should be a useful reference for anyone who runs into the same problem.


First, check the console logs of the three nodes in the cluster.

1. Console logs of the three servers in the cluster

192.168.72.33

2018-01-05T09:46:24.281+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:16:28:3e9
2018-01-05T09:46:24.432+0800 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2018-01-05T09:46:24.432+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'd:/mongodata/rs0-2/diagnostic.data'
2018-01-05T09:46:24.443+0800 I NETWORK [initandlisten] waiting for connections on port 27013
2018-01-05T09:46:25.485+0800 W NETWORK [ReplicationExecutor] Failed to connect to 192.168.72.31:27011, reason: errno:10061 unable to connect because the target computer actively refused it.
2018-01-05T09:46:25.533+0800 I REPL [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 8, protocolVersion: 1, members: [ { _id: 0, host: "mongodb-rs0-0:27011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 100.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongodb-rs0-1:27012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongodb-rs0-2:27013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('59365592734d0747ee26e2a6') } }
2018-01-05T09:46:25.534+0800 I REPL [ReplicationExecutor] This node is mongodb-rs0-2:27013 in the config

192.168.72.32

2018-01-05T09:46:17.064+0800 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2018-01-05T09:46:17.064+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'd:/mongodata/rs0-1/diagnostic.data'
2018-01-05T09:46:17.076+0800 I NETWORK [initandlisten] waiting for connections on port 27012
2018-01-05T09:46:18.102+0800 W NETWORK [ReplicationExecutor] Failed to connect to 192.168.72.31:27011, reason: errno:10061 unable to connect because the target computer actively refused it.
2018-01-05T09:46:19.149+0800 W NETWORK [ReplicationExecutor] Failed to connect to 192.168.72.33:27013, reason: errno:10061 unable to connect because the target computer actively refused it.
2018-01-05T09:46:19.150+0800 I REPL [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 8, protocolVersion: 1, members: [ { _id: 0, host: "mongodb-rs0-0:27011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 100.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongodb-rs0-1:27012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongodb-rs0-2:27013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('59365592734d0747ee26e2a6') } }
2018-01-05T09:46:19.150+0800 I REPL [ReplicationExecutor] This node is mongodb-rs0-1:27012 in the config

192.168.72.31

2018-01-05T15:56:42.999+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:12:59:b4a
2018-01-05T15:56:43.000+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:13:08:8df
2018-01-05T15:56:43.000+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:14:05:329
2018-01-05T15:56:43.001+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:15:30:25f
2018-01-05T15:56:43.002+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:15:39:4b1

From the log information above, it can be inferred that a storage-related wait occurred on the cluster's primary node 192.168.72.31, which caused the primary to refuse the TCP connections from the two secondary nodes 192.168.72.32 and 192.168.72.33.
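One way to reproduce the symptom from the secondaries' side is a plain TCP probe against each member. Below is a minimal sketch (the addresses are the ones from the replica set config above); a host whose mongod has stopped accepting connections fails with the same errno 10061 seen in the console logs.

```python
import socket

# Replica set members as listed in the config above.
MEMBERS = [
    ("192.168.72.31", 27011),
    ("192.168.72.32", 27012),
    ("192.168.72.33", 27013),
]

for host, port in MEMBERS:
    try:
        # A refused connection raises ConnectionRefusedError
        # (WinError 10061 on Windows), matching the console log.
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} is accepting connections")
    except OSError as exc:
        print(f"{host}:{port} is unreachable: {exc}")
```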

2. Following the clue from step 1, check the mongod service's log and the operating-system log on 192.168.72.31. The operating system had been raising alerts since 04:59:25 on 2018-01-05 warning that disk D was full.
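On a Windows host those alerts can also be pulled from the System event log. A minimal sketch, assuming the built-in wevtutil tool is on PATH; the entry count and the keyword filter are illustrative choices, not values from the original incident:

```python
import subprocess

# Query the 20 most recent System event log entries as plain text.
result = subprocess.run(
    ["wevtutil", "qe", "System", "/c:20", "/rd:true", "/f:text"],
    capture_output=True, text=True, check=True,
)

# Print only the entries that mention disk space.
for entry in result.stdout.split("Event["):
    if "disk" in entry.lower():
        print("Event[" + entry.strip())
```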

3. Check the storage on 192.168.72.31. As the operating system log indicated, disk D had only 58 MB of free space left.
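The free-space check itself is easy to script so it can be run regularly. A minimal sketch, assuming the data files live on drive D as in the logs above; the 200 MB warning threshold is an arbitrary example value:

```python
import shutil

DRIVE = "D:\\"                   # data drive from the logs above
WARN_BYTES = 200 * 1024 * 1024   # example threshold: warn below 200 MB free

usage = shutil.disk_usage(DRIVE)
free_mb = usage.free / (1024 * 1024)
print(f"{DRIVE} free space: {free_mb:.0f} MB of {usage.total / 1024**3:.1f} GB total")

if usage.free < WARN_BYTES:
    print("WARNING: data drive nearly full; mongod may stop accepting writes")
```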

4. From the above information, it can be concluded that because the storage on the Mongo cluster's primary node 192.168.72.31 was full, the primary's mongod process could not complete write operations and therefore refused the connections from the two secondary nodes, interrupting service for the entire mongo cluster. Through follow-up communication we learned that the local site's technicians had made a backup of the current primary's data on node 192.168.72.31 without paying attention to the remaining space on disk D.

After that, the technicians immediately deleted the redundant data backup on node 192.168.72.31 to free up space on disk D. Because the service was still in a hung state, they decided to restart all three mongo cluster servers, 192.168.72.31/32/33.

5. After the restart the mongo cluster returned to normal, and the console of the primary node 192.168.72.31 showed connections being accepted to the admin database of the mongo cluster.
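To confirm recovery without relying on the console output alone, the replica set status can also be queried with a driver. A minimal PyMongo sketch, assuming the member addresses from the config above and that no authentication is required (add credentials if the deployment uses them):

```python
from pymongo import MongoClient

# Member addresses taken from the replica set config shown earlier.
client = MongoClient(
    "mongodb://192.168.72.31:27011,192.168.72.32:27012,192.168.72.33:27013"
    "/?replicaSet=rs0",
    serverSelectionTimeoutMS=5000,
)

# replSetGetStatus runs against the admin database and reports each member's state.
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["name"], member["stateStr"])
```

A healthy cluster should report one PRIMARY and two SECONDARY members.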

That is the full content of "What to do when the secondary node console of a MongoDB replica set cluster reports error 10061". Thank you for reading! We hope the content shared here is helpful; for more related knowledge, feel free to follow the industry information channel.
