Hadoop Balancer
As a Hadoop cluster runs, the distribution of blocks across its datanodes gradually becomes unbalanced, and an unbalanced cluster leaves some datanodes noticeably busier than others.
Hadoop's balancer is a daemon. It redistributes blocks, moving them from busy datanodes to relatively idle ones, while still adhering to the replica placement policy that spreads replicas across different racks to reduce the risk of data loss.
Criterion for a balanced cluster: the utilization of every datanode is close to the utilization of the cluster as a whole, differing by no more than a given threshold.
Datanode utilization: the ratio of used space on the node to the node's total capacity.
Cluster utilization: the ratio of used space across the cluster to the cluster's total capacity.
To keep the load on the cluster low and avoid disturbing other clients, the balancer runs in the background and limits the bandwidth it uses to copy blocks between nodes; the default limit is 1 MB/s.
Hadoop's balancer is not running by default; it is started with % start-balancer.sh and stops automatically once the cluster is balanced.
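A minimal command-line sketch, assuming the Hadoop scripts are on the PATH (stop-balancer.sh is the companion script for stopping a run early, which the text above does not mention explicitly):
% start-balancer.sh
% stop-balancer.sh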
HBase Load Balancing
HBase's balancer runs every five minutes by default, a period set by the hbase.balancer.period property. When it runs, it attempts to distribute regions evenly across all region servers.
It first determines a region assignment plan and then starts moving regions.
The balancer's on/off state is controlled by the balancer switch.
HBase's balancer is enabled by default and runs periodically.
The following is the original explanation in English:
Hadoop Balancer
Over time, the distribution of blocks across datanodes can become unbalanced. An unbalanced cluster can affect locality for MapReduce, and it puts a greater strain on the highly utilized datanodes, so it's best avoided.
The balancer program is a Hadoop daemon that redistributes blocks by moving them from overutilized datanodes to underutilized datanodes, while adhering to the block replica placement policy that makes data loss unlikely by placing block replicas on different racks (see Replica Placement). It moves blocks until the cluster is deemed to be balanced, which means that the utilization of every datanode (ratio of used space on the node to total capacity of the node) differs from the utilization of the cluster (ratio of used space on the cluster to total capacity of the cluster) by no more than a given threshold percentage. You can start the balancer with:
% start-balancer.sh
The -threshold argument specifies the threshold percentage that defines what it means for the cluster to be balanced. The flag is optional; if omitted, the threshold is 10%. At any one time, only one balancer may be running on the cluster.
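For example, to require every datanode's utilization to be within 5 percentage points of the cluster's utilization (5 is an illustrative value, not one taken from the text above):
% start-balancer.sh -threshold 5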
The balancer runs until the cluster is balanced, it cannot move any more blocks, or it loses contact with the namenode. It produces a logfile in the standard log directory, where it writes a line for every iteration of redistribution that it carries out. Here is the output from a short run on a small cluster (slightly reformatted to fit the page):
Time Stamp               Iteration#  Bytes Already Moved  ...Left To Move  ...Being Moved
Mar 18, 2009 5:23:42 PM  0           0 KB                 219.21 MB        150.29 MB
Mar 18, 2009 5:27:14 PM  1           195.24 MB            22.45 MB         150.29 MB
The cluster is balanced. Exiting...
Balancing took 6.072933333333333 minutes
The balancer is designed to run in the background without unduly taxing the cluster or interfering with other clients using the cluster. It limits the bandwidth that it uses to copy a block from one node to another. The default is a modest 1 MB/s, but this can be changed by setting the dfs.datanode.balance.bandwidthPerSec property in hdfs-site.xml, specified in bytes.
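As an illustrative sketch (10 MB/s, i.e. 10485760 bytes per second, is an assumed value): besides setting the dfs.datanode.balance.bandwidthPerSec property in hdfs-site.xml as described above, newer Hadoop releases can adjust the limit at runtime with dfsadmin:
% hdfs dfsadmin -setBalancerBandwidth 10485760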
HBase Load Balancing
The master has a built-in feature, called the balancer. By default, the balancer runs every five minutes, and it is configured by the hbase.balancer.period property. Once the balancer is started, it will attempt to equal out the number of assigned regions per region server so that they are within one region of the average number per server. The call first determines a new assignment plan, which describes which regions should be moved where. Then it starts the process of moving the regions by calling the unassign() method of the administrative API iteratively.
The balancer has an upper limit on how long it is allowed to run, which is configured using the hbase.balancer.max.balancing property and defaults to half of the balancer period value, or two and a half minutes.
You can control the balancer by means of the balancer switch: either use the shell's balance_switch command to toggle the balancer status between enabled and disabled, or use the balanceSwitch() API method to do the same. When you disable the balancer, it no longer runs as expected.
The balancer can be explicitly started using the shell's balancer command, or using the balancer() API method. The time-controlled invocation mentioned previously calls this method implicitly. It will determine if there is any work to be done and return true if that is the case. The return value of false means that it was not able to run the balancer, because either it was switched off, there was no work to be done (all is balanced), or something else was prohibiting the process. One example for this is the region in transition list (see Main page): if there is a region currently in transition, the balancer will be skipped.
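A brief HBase shell sketch of the controls described above (the behavior noted in the comments simply restates the description in the text):
hbase(main):001:0> balance_switch false   # disable the balancer; prints the previous state
hbase(main):002:0> balance_switch true    # re-enable it
hbase(main):003:0> balancer               # trigger a run; returns true if balancing work was started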
Instead of relying on the balancer to do its work properly, you can use the move command and API method to assign regions to other servers. This is useful when you want to control where the regions of a particular table are assigned. See Region Hotspotting for an example.
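For instance, a region can be moved explicitly from the HBase shell; both arguments below are placeholders (the encoded region name and the target server in host,port,startcode form), not values from the text:
hbase(main):004:0> move 'ENCODED_REGIONNAME', 'SERVER_NAME'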