This article looks at what to do when a mongodb data import breaks up the replica set. It goes into some detail and should be a useful reference; interested readers are encouraged to read on.
Recently the company put MongoDB into production to store large volumes of application logs, along with data that does not fit comfortably in a traditional relational database. As the leading NoSQL database, MongoDB's stability and huge write throughput speak for themselves. After several power failures in the test environment, the cluster went straight back to work as soon as the mongod processes were restarted, which shows how much more robust a MongoDB cluster is than most traditional databases. Of course, that is partly by design: non-transactional workloads do not demand strict point-in-time consistency of the data. For log storage, MongoDB also has its own advantages over Elasticsearch: it preserves the continuity of, and the relationships between, log operations in a way Elasticsearch cannot, which is why some enterprises still keep important system logs in MongoDB rather than Elasticsearch.
Back to the story. MongoDB has kept me busy lately: the OPS performance analyzer is online but the monitoring and operations staff are still sorting out its status, and there is also a MySQL MGR build in progress. Today a request came in to import just under 20 GB of data from a traditional relational database, which is nothing for MongoDB; on the (modestly configured) test environment, exporting the data took only about ten minutes. When importing into production, with my head still in MySQL MGR, I forgot that the oplog on the MongoDB side was set to only 20 GB. I also imported into an informal staging table in production first so the developers could verify the data's accuracy. The import was very fast, and in under ten minutes the nearly 20 GB of data was sitting comfortably in MongoDB.
What I had overlooked was the oplog size, and close to 20 GB of data had just been pushed through MongoDB. I was slightly on guard, so when I started importing the data again I throttled the rate, figuring nothing would go wrong. A glance at the oplog window showed 7 days, which still looked like plenty.
The throttled import crawled along. Checking the oplog window now and then, I watched it drop to 3 days, then to 1 day, and I started to realize the oplog window was shrinking fast.
A quick primer on what the oplog is: when the Primary takes writes, it records them in its oplog; the Secondaries then copy the oplog to their own machines and replay those operations, which is how replication works. Because it records the write operations on the Primary, the oplog can also be used for data recovery. You can loosely think of it as MySQL's binlog, although some of the underlying mechanics differ.
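For reference, the oplog window that keeps coming up can be read straight off the capped collection local.oplog.rs. The following is a minimal sketch using pymongo rather than the shell's rs.printReplicationInfo(); the connection string is an assumption.

    # Minimal sketch: compute the oplog window by hand from local.oplog.rs,
    # roughly what rs.printReplicationInfo() reports in the mongo shell.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # assumed connection string
    oplog = client.local["oplog.rs"]                    # capped collection holding the oplog

    first = oplog.find_one(sort=[("$natural", 1)])      # oldest surviving entry
    last = oplog.find_one(sort=[("$natural", -1)])      # newest entry

    # "ts" is a BSON Timestamp; .time is seconds since the epoch.
    window_hours = (last["ts"].time - first["ts"].time) / 3600.0
    print("oplog first event time:", first["ts"].as_datetime())
    print("oplog last event time: ", last["ts"].as_datetime())
    print("oplog window: %.1f hours" % window_hours)

Once the window drops below the time a secondary needs to catch up, that secondary falls off the oplog and has to do a full resync, which is exactly the "disintegration" this article is worried about.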
Had this been MongoDB 3.4, there would probably have been nothing left to do but wait for the replica set to die. Fortunately, when we chose the version we went with MongoDB 3.6, which can resize the oplog online, and that is what saved the day.
I immediately enlarged the oplog, going straight from the original 20 GB to 45 GB, and performed the operation on every node.
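The online resize that 3.6 makes possible is the replSetResizeOplog admin command, which takes the new size in megabytes (45 GB is 46080 MB). Below is a hedged sketch of running it against each member with pymongo; the hostnames are hypothetical, and the usual procedure is to resize the secondaries first and the primary last.

    # Sketch of the online oplog resize (MongoDB 3.6+), run member by member.
    # Hostnames are hypothetical; directConnection needs pymongo 3.12 or later.
    from pymongo import MongoClient

    NEW_SIZE_MB = 45 * 1024   # 45 GB expressed in megabytes

    for host in ["mongo1:27017", "mongo2:27017", "mongo3:27017"]:
        member = MongoClient(host, directConnection=True)
        # replSetResizeOplog must be issued against the admin database.
        member.admin.command({"replSetResizeOplog": 1, "size": float(NEW_SIZE_MB)})
        print(host, "oplog resized to", NEW_SIZE_MB, "MB")
        member.close()

In the mongo shell the equivalent is db.adminCommand({replSetResizeOplog: 1, size: 46080}) while connected to each member in turn.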
By that point the oplog window was showing me less than 40 minutes.
As the resized oplog took effect, the oplog window started to grow back a little, the situation improved, and the alert cleared.
I kept observing it with the relevant commands.
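For anyone reproducing this, a crude watch loop along the following lines shows the same picture; it reuses the window check from the earlier sketch, and the 30-second interval and connection string are assumptions.

    # Hypothetical watch loop: print the first/last oplog event times every 30
    # seconds to confirm the window is growing back after the resize.
    import time
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # assumed connection string
    oplog = client.local["oplog.rs"]

    while True:
        first = oplog.find_one(sort=[("$natural", 1)])
        last = oplog.find_one(sort=[("$natural", -1)])
        hours = (last["ts"].time - first["ts"].time) / 3600.0
        print("first: %s  last: %s  window: %.1f h"
              % (first["ts"].as_datetime(), last["ts"].as_datetime(), hours))
        time.sleep(30)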
With each refresh, the gap between the oplog's first event time and last event time grew wider and wider. Crisis averted, for now. Whew.
A bit of digging shows that Zhang Youdong had actually proposed resizing the oplog on the fly as far back as MongoDB 3.2, but the change was only adopted in 3.6.
That is everything in "what to do when a mongodb data import breaks up the replica set". Thank you for reading, and I hope it helps.