2025-04-09 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report--
This article describes in detail how to process data with MapReduce and store the results in HBase. Interested readers can follow along; we hope it is helpful to you.
The input is a vehicle-location data file. Each line has the format: vehicle id, then speed:fuel consumption:current mileage.
The MapReduce job computes the average speed, fuel consumption, and mileage for each vehicle.
vid1 78:8:120
vid1 56:11:124
vid1 98:5:130
vid1 72:6:131
vid2 78:4:281
vid2 58:9:298
vid2 67:15:309
Create the Map class and its map function
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class VehicleMapper extends Mapper<Object, Text, Text, Text> {

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Convert the input plain-text data to a String
        String vehicle = value.toString();
        // Split the input into lines first
        StringTokenizer tokenizerArticle = new StringTokenizer(vehicle, "\n");
        // Process each line separately
        while (tokenizerArticle.hasMoreTokens()) {
            // Each line is split on whitespace
            StringTokenizer tokenizer = new StringTokenizer(tokenizerArticle.nextToken());
            String vehicleId = tokenizer.nextToken();   // vid
            String vehicleInfo = tokenizer.nextToken(); // speed:fuel:mileage
            Text vid = new Text(vehicleId);
            Text info = new Text(vehicleInfo);
            context.write(vid, info);
        }
    }
}
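The map logic above is plain string splitting, so it can be sanity-checked without a cluster. The snippet below is a standalone sketch (the class `MapSplitDemo` and its helper are illustrative, not part of the job) that applies the same whitespace split to one sample line:

```java
import java.util.StringTokenizer;

public class MapSplitDemo {

    // Mimics VehicleMapper: split one line into (vid, "speed:fuel:mileage")
    public static String[] splitLine(String line) {
        StringTokenizer tokenizer = new StringTokenizer(line);
        String vehicleId = tokenizer.nextToken();   // key
        String vehicleInfo = tokenizer.nextToken(); // value
        return new String[] { vehicleId, vehicleInfo };
    }

    public static void main(String[] args) {
        String[] kv = splitLine("vid1 78:8:120");
        System.out.println(kv[0] + " -> " + kv[1]); // vid1 -> 78:8:120
    }
}
```

So every map output pair is keyed by vehicle id, which is what groups each vehicle's records together for the reduce step.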
Create a Reduce class
import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;

public class VehicleReduce extends TableReducer<Text, Text, ImmutableBytesWritable> {

    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        int speed = 0;
        int oil = 0;
        int mile = 0;
        int count = 0;
        for (Text val : values) {
            String str = val.toString();
            String[] arr = str.split(":");
            speed += Integer.valueOf(arr[0]);
            oil += Integer.valueOf(arr[1]);
            mile += Integer.valueOf(arr[2]); // accumulate mileage readings
            count++;
        }
        speed = speed / count; // average speed
        oil = oil / count;     // average fuel consumption
        mile = mile / count;   // average mileage
        String result = speed + ":" + oil + ":" + mile;
        Put put = new Put(key.getBytes());
        put.add(Bytes.toBytes("info"), Bytes.toBytes("property"), Bytes.toBytes(result));
        ImmutableBytesWritable keys = new ImmutableBytesWritable(key.getBytes());
        context.write(keys, put);
    }
}
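With the sample data above, the reducer's integer averaging can be verified by hand. The sketch below (the class `ReduceAvgDemo` is a hypothetical helper, not part of the job) reproduces the same arithmetic for vid1's four records:

```java
public class ReduceAvgDemo {

    // Same integer averaging as VehicleReduce over "speed:fuel:mileage" records
    public static String average(String[] records) {
        int speed = 0, oil = 0, mile = 0, count = 0;
        for (String rec : records) {
            String[] arr = rec.split(":");
            speed += Integer.valueOf(arr[0]);
            oil += Integer.valueOf(arr[1]);
            mile += Integer.valueOf(arr[2]);
            count++;
        }
        // Integer division truncates, just like the reducer
        return (speed / count) + ":" + (oil / count) + ":" + (mile / count);
    }

    public static void main(String[] args) {
        // vid1 records from the sample file
        String[] vid1 = { "78:8:120", "56:11:124", "98:5:130", "72:6:131" };
        System.out.println(average(vid1)); // 76:7:126
    }
}
```

For vid1 this gives 76:7:126 (e.g. speed: (78+56+98+72)/4 = 76), which is the string the reducer writes into the `info:property` column of the row keyed by the vehicle id.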
Run the task
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class VehicleMapReduceJob {

    public static void main(String[] args)
            throws IOException, InterruptedException, ClassNotFoundException {
        Configuration conf = new Configuration();
        conf = HBaseConfiguration.create(conf);
        Job job = new Job(conf, "HBase_VehicleInfo");
        job.setJarByClass(VehicleMapReduceJob.class);
        job.setMapperClass(VehicleMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        // Set the input file path
        FileInputFormat.addInputPath(job, new Path(args[0]));
        // Write the reduce output into the HBase table "vehicle"
        TableMapReduceUtil.initTableReducerJob("vehicle", VehicleReduce.class, job);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
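Note that initTableReducerJob only writes into the table; it does not create it. Assuming a default HBase setup, the target table "vehicle" with the "info" column family used by the reducer needs to exist before the job runs, e.g. created from the HBase shell:

```shell
# Run inside `hbase shell`: create the target table with the "info" column family
create 'vehicle', 'info'
```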
Export the code as vehicle.jar, place it in the hadoop-1.2.1 directory, and run:
./bin/hadoop jar vehicle.jar com.xh.vehicle.VehicleMapReduceJob input/vehicle.txt
HBase result query:
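The original post showed the query result as a screenshot. Assuming the job completed successfully, the stored averages can be inspected from the HBase shell; the value lines below are what the sample data should produce, per the reducer's arithmetic:

```shell
# Run inside `hbase shell`: list all rows of the result table
scan 'vehicle'
# Expected output (per-vehicle averages stored in info:property):
#   vid1   column=info:property, value=76:7:126
#   vid2   column=info:property, value=67:9:296
```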
That concludes this walkthrough of processing data with MapReduce and storing the results in HBase. We hope the above content was helpful and taught you something new. If you found the article useful, please share it so more people can see it.