2025-03-27 Update From: SLTechnology News&Howtos > Servers
Shulou(Shulou.com)05/31 Report--
This article explains what HBase compaction is used for. The content is straightforward and easy to follow; we hope it resolves any doubts you may have as we walk through it together.
Compaction is the operation of merging multiple HFiles into a single HFile.
Compaction serves the following purposes:
Reduce the number of HFile files.
Fewer HFiles can improve read performance.
Clear expired and deleted data.
There are two types of compaction: minor and major.
1) A minor compaction merges only a subset of files. It cleans up TTL-expired versions when minVersion=0 is set, but performs no deletion of data and no cleanup of extra versions.
2) A major compaction merges all StoreFiles of an HStore within a Region, deletes data marked for deletion, cleans up excess versions, and finally produces a single sorted, merged file.
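To make the minor/major distinction concrete, here is a deliberately simplified toy model (these are not HBase's real classes; names and the null-as-tombstone convention are assumptions made for illustration). A minor-style merge keeps delete tombstones, while a major-style merge physically drops them along with the data they mask:

```java
import java.util.List;
import java.util.Objects;
import java.util.TreeMap;

// Toy model of compaction (illustrative only; NOT HBase's real classes).
// Each "HFile" is a sorted map of row -> value; a null value stands in
// for a delete tombstone.
public class CompactionSketch {

    // Minor-style merge: combine files oldest-to-newest so newer entries win,
    // keeping tombstones so they can still mask data in files not merged here.
    static TreeMap<String, String> minorCompact(List<TreeMap<String, String>> filesOldestFirst) {
        TreeMap<String, String> merged = new TreeMap<>();
        for (TreeMap<String, String> f : filesOldestFirst) {
            merged.putAll(f);
        }
        return merged;
    }

    // Major-style merge: combine ALL files, then physically drop tombstones
    // and the rows they mask, yielding one clean file.
    static TreeMap<String, String> majorCompact(List<TreeMap<String, String>> filesOldestFirst) {
        TreeMap<String, String> merged = minorCompact(filesOldestFirst);
        merged.values().removeIf(Objects::isNull);
        return merged;
    }
}
```

Real HBase cells also carry timestamps and versions, but the sketch captures why only a major compaction can reclaim space from deletes: a partial merge cannot safely drop a tombstone that may still mask data in unmerged files.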
When an HRegionServer starts, it launches a CompactionChecker thread, which periodically checks whether each region needs compaction.
Its main logic is as follows:
    protected void chore() {
      for (HRegion r : this.instance.onlineRegions.values()) {
        if (r == null)
          continue;
        for (Store s : r.getStores().values()) {
          try {
            long multiplier = s.getCompactionCheckMultiplier();
            assert multiplier > 0;
            if (iteration % multiplier != 0) continue;
            if (s.needsCompaction()) {
              // Queue a compaction. Will recognize if major is needed.
              this.instance.compactSplitThread.requestSystemCompaction(r, s,
                  getName() + " requests compaction");
            } else if (s.isMajorCompaction()) {
              if (majorCompactPriority == DEFAULT_PRIORITY
                  || majorCompactPriority > r.getCompactPriority()) {
                this.instance.compactSplitThread.requestCompaction(r, s,
                    getName() + " requests major compaction; use default priority", null);
              } else {
                this.instance.compactSplitThread.requestCompaction(r, s,
                    getName() + " requests major compaction; use configured priority",
                    this.majorCompactPriority, null);
              }
            }
          } catch (IOException e) {
            LOG.warn("Failed major compaction check on " + r, e);
          }
        }
      }
      iteration = (iteration == Long.MAX_VALUE) ? 0 : (iteration + 1);
    }
That is, it iterates over onlineRegions and evaluates each Store of every region. The needsCompaction logic is as follows:
    public boolean needsCompaction(final Collection<StoreFile> storeFiles,
        final List<StoreFile> filesCompacting) {
      int numCandidates = storeFiles.size() - filesCompacting.size();
      return numCandidates >= comConf.getMinFilesToCompact();
    }
MinFilesToCompact is controlled by hbase.hstore.compaction.min (in earlier versions: hbase.hstore.compactionThreshold), with a default of 3. That is, when the number of StoreFiles under a store, minus the number of files already being compacted, is >= 3, a compaction is needed.
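As a quick sanity check of that arithmetic, here is a standalone restatement (a sketch, not HBase code; minFilesToCompact plays the role of hbase.hstore.compaction.min, default 3):

```java
// Standalone restatement of the needsCompaction arithmetic: a store
// qualifies once the count of StoreFiles not already being compacted
// reaches the configured minimum.
public class NeedsCompactionSketch {
    static boolean needsCompaction(int storeFileCount, int filesCompacting,
                                   int minFilesToCompact) {
        int numCandidates = storeFileCount - filesCompacting;
        return numCandidates >= minFilesToCompact;
    }
}
```

For example, a store with 4 StoreFiles of which 2 are already being compacted has only 2 candidates and does not qualify, while 5 files with 1 compacting (4 candidates) does.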
When needsCompaction returns true, compactSplitThread.requestSystemCompaction is called to submit a compaction request, which is then processed by the CompactSplitThread thread.
When needsCompaction returns false, the checker determines whether a major compaction is due via isMajorCompaction. The specific logic is as follows:
    /**
     * @param filesToCompact Files to compact. Can be null.
     * @return True if we should run a major compaction.
     */
    public boolean isMajorCompaction(final Collection<StoreFile> filesToCompact)
        throws IOException {
      boolean result = false;
      long mcTime = getNextMajorCompactTime(filesToCompact);
      if (filesToCompact == null || filesToCompact.isEmpty() || mcTime == 0) {
        return result;
      }
      // TODO: Use better method for determining stamp of last major (HBASE-2990)
      long lowTimestamp = StoreUtils.getLowestTimestamp(filesToCompact);
      long now = System.currentTimeMillis();
      if (lowTimestamp > 0l && lowTimestamp < (now - mcTime)) {
        // Major compaction time has elapsed.
        long cfTtl = this.storeConfigInfo.getStoreFileTtl();
        if (filesToCompact.size() == 1) {
          // Single file
          StoreFile sf = filesToCompact.iterator().next();
          Long minTimestamp = sf.getMinimumTimestamp();
          long oldest = (minTimestamp == null) ? Long.MIN_VALUE
              : now - minTimestamp.longValue();
          if (sf.isMajorCompaction() && (cfTtl == HConstants.FOREVER || oldest < cfTtl)) {
            if (LOG.isDebugEnabled()) {
              LOG.debug("Skipping major compaction of " + this
                  + " because one (major) compacted file only and oldestTime "
                  + oldest + "ms is < ttl=" + cfTtl);
            }
          } else if (cfTtl != HConstants.FOREVER && oldest > cfTtl) {
            LOG.debug("Major compaction triggered on store " + this
                + ", because keyvalues outdated; time since last major compaction "
                + (now - lowTimestamp) + "ms");
            result = true;
          }
        } else {
          if (LOG.isDebugEnabled()) {
            LOG.debug("Major compaction triggered on store " + this
                + "; time since last major compaction " + (now - lowTimestamp) + "ms");
          }
          result = true;
        }
      }
      return result;
    }
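The core gate in the method above is the elapsed-period check. A toy restatement (a sketch, not HBase code; mcTime here stands for the value derived from the configured major-compaction interval, hbase.hregion.majorcompaction):

```java
// Toy restatement of the elapsed-period gate in isMajorCompaction: a major
// compaction is only considered when the oldest StoreFile's timestamp falls
// before now - mcTime, i.e. the configured interval has fully elapsed since
// the last major compaction.
public class MajorCompactionTimeSketch {
    static boolean periodElapsed(long lowestTimestamp, long now, long mcTime) {
        return lowestTimestamp > 0L && lowestTimestamp < (now - mcTime);
    }
}
```

For example, with now = 100000 and mcTime = 10000, an oldest-file timestamp of 85000 passes the gate (85000 < 90000), while 95000 does not; a non-positive timestamp is treated as "unknown" and never triggers.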
© 2024 shulou.com SLNews company. All rights reserved.