Shulou(Shulou.com)05/31 Report--
This article explains in detail the slicing (input split) principle of the Mapper in MapReduce. The content is worth a careful read, and I hope you will have a solid understanding of the topic after finishing it.
1. First, notice that when a job runs, each input file starts its own map tasks, yet the job configuration contains no explicit setting for the number of maps. In fact, the number of MapTasks for which MRAppMaster requests resources from the ResourceManager is determined by how the input files are sliced into splits before the map phase begins (the split size defaults to the block size, but the two are not necessarily equal).
2. The principle: each MapTask distributed to a node processes exactly one slice (split) of the input.
The default InputFormat is TextInputFormat, which inherits from FileInputFormat:

@InterfaceAudience.Public
@InterfaceStability.Stable
public class TextInputFormat extends FileInputFormat<LongWritable, Text>
Let's take a look at how FileInputFormat slices the file.
FileInputFormat provides an isSplitable() method (which decides whether a file may be split at all) and a getSplits() method. Inside getSplits(), the computeSplitSize() method returns Math.max(minSize, Math.min(goalSize, blockSize)) to determine the split size (in the old mapred API, goalSize is totalSize / numSplits; the newer mapreduce API uses maxSize in place of goalSize). So when we want to change the split size (that is, to change the number of MapTasks), we add the parameters
mapreduce.input.fileinputformat.split.minsize and
mapreduce.input.fileinputformat.split.maxsize
to the configuration.
isSplitable() in the source code:

protected boolean isSplitable(FileSystem fs, Path filename) {
    return true;
}
By default, files are splittable. If you write a custom InputFormat, you can extend FileInputFormat and override isSplitable() to return false, so that each file is processed as a single split.
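To illustrate the override pattern without pulling in Hadoop dependencies, here is a minimal stand-in sketch. The class names BaseInputFormat and WholeFileInputFormat are invented for illustration; in a real job you would extend org.apache.hadoop.mapred.FileInputFormat instead.

```java
// Stand-in for FileInputFormat, for illustration only.
class BaseInputFormat {
    // Mirrors FileInputFormat's default: every file may be split.
    protected boolean isSplitable(String filename) {
        return true;
    }
}

// A custom format that forces each file into a single split,
// e.g. for files whose records must not be cut across MapTasks.
class WholeFileInputFormat extends BaseInputFormat {
    @Override
    protected boolean isSplitable(String filename) {
        return false;
    }
}

public class IsSplitableDemo {
    public static void main(String[] args) {
        System.out.println(new BaseInputFormat().isSplitable("a.txt"));      // true
        System.out.println(new WholeFileInputFormat().isSplitable("a.txt")); // false
    }
}
```

The same pattern applies to the real Hadoop classes; only the method signature (which takes a FileSystem and a Path) differs.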
The key getSplits() snippet from the source code:

public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException {
    ...
    long blockSize = file.getBlockSize();
    long splitSize = computeSplitSize(goalSize, minSize, blockSize);
    ...
}
Here the computeSplitSize() method is called to obtain the split size.
Finally, take a look at the computeSplitSize() source code:

protected long computeSplitSize(long goalSize, long minSize, long blockSize) {
    return Math.max(minSize, Math.min(goalSize, blockSize));
}
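To see how this formula behaves, here is a self-contained sketch that reimplements computeSplitSize() and evaluates it for a few illustrative inputs (the 128 MB block size matches the common HDFS default; the other sizes are assumptions chosen to exercise each branch):

```java
public class SplitSizeDemo {
    // Same formula as FileInputFormat.computeSplitSize().
    static long computeSplitSize(long goalSize, long minSize, long blockSize) {
        return Math.max(minSize, Math.min(goalSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024; // 128 MB HDFS block

        // goalSize smaller than the block: the goal wins.
        System.out.println(computeSplitSize(50L * 1024 * 1024, 1, blockSize)); // 52428800 (50 MB)

        // goalSize larger than the block: capped at blockSize.
        System.out.println(computeSplitSize(512L * 1024 * 1024, 1, blockSize)); // 134217728 (128 MB)

        // A large minSize pushes the split above the block size.
        System.out.println(computeSplitSize(512L * 1024 * 1024,
                                            256L * 1024 * 1024, blockSize)); // 268435456 (256 MB)
    }
}
```

Note the asymmetry: raising minSize above blockSize makes splits larger, while lowering the goal (or maxSize in the new API) below blockSize makes them smaller.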
So Math.max(minSize, Math.min(goalSize, blockSize)) determines the split size. The configuration file can set
mapreduce.input.fileinputformat.split.minsize and
mapreduce.input.fileinputformat.split.maxsize
to change the split size and thereby the number of MapTasks:
number of MapTasks per file ≈ fileSize / splitSize, rounded up (FileInputFormat actually allows the final split to be up to 10% larger than splitSize before it starts a new one).
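As a rough worked example of that count, here is a short sketch. The 300 MB file size is an assumption for illustration, and the calculation ignores FileInputFormat's 10% slack on the last split:

```java
public class MapTaskCountDemo {
    // Approximate MapTask count for one file: ceil(fileSize / splitSize).
    static long numMapTasks(long fileSize, long splitSize) {
        return (fileSize + splitSize - 1) / splitSize; // integer ceiling division
    }

    public static void main(String[] args) {
        long splitSize = 128L * 1024 * 1024; // 128 MB split (common default)
        long fileSize  = 300L * 1024 * 1024; // 300 MB input file (assumed)
        System.out.println(numMapTasks(fileSize, splitSize)); // 3
    }
}
```

A 300 MB file yields splits of 128 MB + 128 MB + 44 MB, hence three MapTasks.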
That covers the slicing principle of the Mapper in MapReduce. I hope the above content has been helpful and that you learned something from it. If you found the article useful, please share it so more people can see it.