Example Analysis of MapReduce in Hadoop
This article introduces MapReduce in Hadoop through a worked example (word count). I hope you find it a useful reference.
MapReduce design concept
Move the computation, not the data: map tasks are scheduled on (or near) the nodes that already store the input blocks, which avoids shipping large data sets across the network.
The "Hello World" of MapReduce: the word count workflow
Split size of MapReduce:
- max.split = 200 MB
- min.split = 50 MB
- block = 128 MB
- splitSize = max(min.split, min(max.split, block)) = max(50 MB, min(200 MB, 128 MB)) = 128 MB
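A minimal sketch of this rule in Java (the class and variable names here are illustrative, not Hadoop API; Hadoop's FileInputFormat applies the same max/min clamp internally when computing split sizes):

public class SplitSizeDemo {
    // The split size is the block size clamped into [minSplit, maxSplit].
    static long computeSplitSize(long minSplit, long maxSplit, long blockSize) {
        return Math.max(minSplit, Math.min(maxSplit, blockSize));
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        // min.split = 50 MB, max.split = 200 MB, block = 128 MB -> 128 MB, as in the text
        System.out.println(computeSplitSize(50 * mb, 200 * mb, 128 * mb) / mb + " MB");
    }
}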
Mapper
The core idea of MapReduce is "divide and conquer".
Mapper is responsible for the "divide" step: it breaks a complex task down into several "simple tasks" to execute.
A "simple task" here means three things:
Its data and computation are much smaller in scale than the original task's.
It computes near the data: each task is assigned to the node that stores the data it needs.
The small tasks can run in parallel and have little dependence on one another.
Reduce
The reduce phase summarizes the results of the map phase.
The number of reducers is determined by the mapred.reduce.tasks property in the mapred-site.xml configuration file. The default is 1, and it can be overridden per job (it is usually adjusted in the driver program rather than by changing the XML default, as in the sketch below).
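A minimal sketch of overriding the reducer count in the driver (the value 4 is arbitrary; this fragment belongs in the driver's main(), like the WCJob example later in this article):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Inside the driver's main(): override the default of one reducer for
// this job only, instead of editing mapred.reduce.tasks in mapred-site.xml.
Configuration config = new Configuration();
Job job = Job.getInstance(config);
job.setNumReduceTasks(4); // 4 is an arbitrary illustrative value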
Shuffle (the most complex stage)
Reference: "MapReduce: The Shuffle Process Explained in Detail"
Shuffle is the step between the map and reduce phases.
The mapper's output is re-partitioned by key into N groups, and each group of keys falling within a given range is sent to a specific reducer for processing.
Because the data arrives at each reducer already partitioned, sorted, and grouped by key, the reduce step itself stays simple.
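To make the key-to-reducer routing concrete, here is a minimal custom Partitioner sketch (the class name FirstLetterPartitioner and its routing rule are illustrative, not part of the original article; by default Hadoop uses HashPartitioner, which routes each key by hash(key) mod the number of reducers):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Illustrative partitioner: words starting with a-m go to reducer 0,
// everything else to reducer 1. Register it on the job with
// job.setPartitionerClass(FirstLetterPartitioner.class) and at least
// two reducers (job.setNumReduceTasks(2)).
public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        if (numPartitions < 2 || key.getLength() == 0) {
            return 0; // nothing to route with a single reducer or an empty key
        }
        char first = Character.toLowerCase(key.toString().charAt(0));
        return (first >= 'a' && first <= 'm') ? 0 : 1;
    }
}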
Appendix: the WordCount "Hello World" program
// WCJob.java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.StringUtils;

/**
 * MapReduce_Helloworld program
 *
 * WCJob
 * @since V1.0.0
 * Created by SET on 2016-09-11 11:35:15
 */
public class WCJob {

    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        config.set("fs.defaultFS", "hdfs://master:8020");
        config.set("yarn.resourcemanager.hostname", "slave2");

        FileSystem fs = FileSystem.newInstance(config);

        Job job = new Job(config);
        job.setJobName("word count");
        job.setJarByClass(WCJob.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setMapperClass(WCMapper.class);
        job.setReducerClass(WCReducer.class);
        // The reducer doubles as a combiner: partial sums are computed
        // on the map side to cut shuffle traffic.
        job.setCombinerClass(WCReducer.class);

        FileInputFormat.addInputPath(job, new Path("/user/wc/wc"));

        Path outputpath = new Path("/user/wc/output");
        // The job fails if the output directory already exists, so delete it first.
        if (fs.exists(outputpath)) {
            fs.delete(outputpath, true);
        }
        FileOutputFormat.setOutputPath(job, outputpath);

        boolean flag = job.waitForCompletion(true);
        if (flag) {
            System.out.println("Job completed successfully!");
        }
    }

    private static class WCMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Input format: "hadoop hello world" -- the map method receives
            // one line at a time, splits it into words, and emits (word, 1).
            String[] strs = StringUtils.split(value.toString(), ' ');
            for (String word : strs) {
                context.write(new Text(word), new IntWritable(1));
            }
        }
    }

    private static class WCReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts for this word across all map outputs.
            int sum = 0;
            for (IntWritable intWritable : values) {
                sum += intWritable.get();
            }
            context.write(new Text(key), new IntWritable(sum));
        }
    }
}
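To try the job (assuming the classes above are packaged into a jar named, say, wc.jar, and the input file has already been uploaded to /user/wc/wc on HDFS), submit it with the standard launcher:

hadoop jar wc.jar WCJob

The word counts then appear under /user/wc/output in part-r-* files.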