Many readers are not familiar with the topic of this article, "how to migrate mongodb data blocks", so the editor has summarized the following content. The material is detailed, the steps are clear, and it has practical reference value. I hope you get something out of it; let's take a look at "how to migrate mongodb data blocks".
1. Basic Concepts
1.1 Chunk (data blocks)
A chunk is a logical concept: it represents the set of documents whose shard key values fall within a contiguous range and that are stored on a particular shard server.
For example, a data block is recorded as follows:
{"_ id": "chunk-a", / / data block Id "ns": "user.address", / / the database name and table name "min" corresponding to the data block: {/ / the starting value of the shard key value corresponding to the data block (inclusive) Is "Shi Jiazhuang"city": "Shi Jiazhuang"}, "max": {/ / the end value of the fragment key value corresponding to this data block (not included) Is "Nanjjing"city": "Nan Jing"}, "shard": "repa" / / the data block is stored in the repa sharding server} / / that is, the data block record represents the "city" field in the table address in the database user, whose value ranges from "Shi Jiazhuang" (inclusive) to "Nan Jing" (excluding). Are stored on a sharding server named repa. 1.2 Chunk Size (block size)
1.2 Chunk Size (block size)
If the data in a chunk exceeds the chunk size (64 MB by default), the system automatically splits it, so one chunk becomes two.
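Splitting normally happens automatically, but the mongo shell also provides manual split helpers, which can be useful for pre-splitting or for breaking up a hot chunk. A minimal sketch, assuming the user.address collection from the example is sharded on city (the city values are purely illustrative):
sh.splitFind("user.address", { city: "Bao Ding" })   // split the chunk containing this document at its median point
sh.splitAt("user.address", { city: "Nan Jing" })     // split the chunk exactly at this shard key value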
1.3 Migration (Block Migration)
MongoDB runs a background balancer process that monitors the number of chunks on each shard server. If the difference in chunk counts between shards exceeds the migration threshold, the balancer starts migrating chunks, and it keeps doing so until the difference falls back within the threshold.
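You can inspect the balancer from mongos with the standard shell helpers; a quick sketch (the exact return format varies by MongoDB version):
sh.getBalancerState()     // whether the balancer is enabled
sh.isBalancerRunning()    // whether a balancing round is in progress right now
sh.status()               // overall sharding status, including chunk distribution per shard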
1.4 Migration Thresholds
The migration threshold depends on the total number of chunks in the collection, as follows:
Fewer than 20 chunks in total: migration threshold 2
20 to 79 chunks: migration threshold 4
80 or more chunks: migration threshold 8
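To see whether a collection's chunks are actually spread unevenly, the getShardDistribution() helper prints per-shard document and chunk counts. A minimal sketch against the hypothetical user.address collection from section 1.1, run from mongos:
db.getSiblingDB("user").address.getShardDistribution()   // prints data size, document count, and chunk count per shard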
2. Migration Process
Chunk migration is transparent to users and to the application layer, although it can incur some performance cost. The whole migration process has seven steps, as shown below.
The contents of each step are as follows:
1. The balancer sends a migration command to the source node.
2. The source node issues an internal chunk migration command to the target node; while the migration is in progress, requests for the chunk are still routed to the source node.
3. The target node builds any indexes it is missing for the chunk's collection, if necessary.
4. The target node pulls the chunk's data from the source node.
5. The target node asks the source node for any incremental changes (inserts, updates, and deletes) that occurred while step 4 was running; if there are any, it goes back to step 4 until there is no more incremental data.
6. Once all the data has been migrated, the source node asks the config server to update the chunk's metadata, changing its "shard" value to the target node.
7. The source node deletes its local copy of the chunk's data.
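The balancer triggers these migrations automatically, but you can also start one yourself from mongos with the moveChunk helper. A minimal sketch, reusing the hypothetical user.address collection and the repa example; "repb" is an assumed second shard name:
sh.moveChunk("user.address", { city: "Shi Jiazhuang" }, "repb")   // move the chunk containing { city: "Shi Jiazhuang" } to shard repb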
3. Best Practices
The basic concepts of chunks and the chunk migration process were covered above; here are some best practices.
3.1 Choosing the chunk size
The chunk size is 64 MB by default and usually does not need to be changed. However, the value can have different effects depending on the business scenario, so you need to weigh several factors when setting it.
Chunk size too small: a smaller chunk size generally leads to more frequent chunk migrations, and data ends up more evenly distributed across the cluster. However, if the shard key is poorly chosen, many large chunks that cannot be split (jumbo chunks) will appear; chunks that exceed the chunk size cannot be migrated between shards, which leads to uneven data distribution. In that case you need to increase the chunk size.
Chunk size too large: a larger chunk size means fewer chunk migrations, so data distribution across the cluster is more likely to become unbalanced. It also makes read and write hotspots more likely (these can be relieved by splitting chunks manually). In that case you need to decrease the chunk size.
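If you do decide to change it, the chunk size is stored in the settings collection of the config database. A minimal sketch, run from mongos (128 is just an example value in MB; check the documentation for your MongoDB version before changing it in production):
use config
db.settings.updateOne(
    { _id: "chunksize" },
    { $set: { value: 128 } },   // new chunk size in MB
    { upsert: true }
)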
3.2 Impact of chunk migration on cluster performance
In addition to consuming network bandwidth and disk I/O on the source and target nodes, step 6 of the migration process temporarily blocks access to the chunk, which can affect the application. It is therefore recommended to restrict the balancer to an active time window during off-peak business hours. The steps are as follows:
1. Connect to the mongos.
2. Switch to the config database:
use config
3. Start the balancer
If the balancer is stopped, setting an active time window by itself will not cause any data migration, so make sure it is started:
sh.startBalancer()
4. Modify the active time window
db.settings.updateOne(
    { _id: "balancer" },
    { $set: { activeWindow: { start: "01:00", stop: "06:00" } } },   // start and stop use the "HH:MM" format; HH ranges from 00 to 23, MM from 00 to 59
    { upsert: true }
)
That concludes this article on "how to migrate mongodb data blocks". I hope the content shared by the editor has been helpful and has given you a clearer understanding of the topic.