MapReduce Classic Example Code (WordCount)

Using the classic WordCount as the example, the program below implements word counting with a custom Mapper and Reducer.

package com.fwmagic.mapreduce;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

/**
 * MapReduce word count.
 */
public class WordCountDemo {

    /**
     * Custom Mapper: extends org.apache.hadoop.mapreduce.Mapper and implements the map method.
     * For every whitespace-separated word in the input line, emit the pair (word, 1).
     */
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] words = value.toString().split(" ");
            for (String word : words) {
                context.write(new Text(word), new IntWritable(1));
            }
        }
    }

    /**
     * Custom Reducer: extends org.apache.hadoop.mapreduce.Reducer and implements the reduce method.
     * Sum the counts emitted for each word.
     */
    public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int count = 0;
            for (IntWritable writable : values) {
                count += writable.get();
            }
            context.write(key, new IntWritable(count));
        }
    }

    /**
     * Job startup class: set the parameters and submit the job to the cluster.
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(WordCountDemo.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path("/wordcount/input"));
        FileOutputFormat.setOutputPath(job, new Path("/wordcount/output"));
        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}

The data content under the /wordcount/input directory in the cluster is illustrated below.
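As a minimal illustration (the file name and contents here are assumptions, not the post's actual data), /wordcount/input might hold a small text file such as words.txt:

hello world
hello hadoop
hadoop mapreduce

It can be uploaded with the standard HDFS shell commands:

hadoop fs -mkdir -p /wordcount/input
hadoop fs -put words.txt /wordcount/input

For this sample, the reducer output would contain one count per word: hadoop 2, hello 2, mapreduce 1, world 1. Since word counting is associative, the same reducer class can also be registered as a combiner to cut shuffle traffic: add job.setCombinerClass(WordCountReducer.class); in main (a standard Hadoop API call, not part of the original code).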
Package the project into a jar and execute the job with hadoop jar fwmagic-wordcount.jar; when the job completes, the results are written to the output directory.
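A minimal run sequence, assuming a Maven build and that the jar manifest names the main class (if it does not, append com.fwmagic.mapreduce.WordCountDemo to the hadoop jar command):

mvn clean package                                # builds fwmagic-wordcount.jar (assumes a Maven project)
hadoop jar fwmagic-wordcount.jar                 # submit the job to the cluster
hadoop fs -cat /wordcount/output/part-r-00000    # print the reducer output

Note that FileOutputFormat refuses to write into an existing directory, so the job fails if /wordcount/output already exists; clear it first with hadoop fs -rm -r /wordcount/output.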