This article mainly explains how to write the data in HBase into an HDFS file. The explanation is simple and clear and easy to follow; please work through it with the editor to learn how the export is done.
Select a table from HBase, such as blog, restrict the scan to certain columns, such as the nickname column nickname and the tag column tags, and write the data in those columns to an HDFS file.
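The blog table itself comes from an earlier article. For readers who do not already have it, here is a minimal, hypothetical setup sketch: the class name BlogTableSetup, the row key and the sample values are made up for illustration; it only assumes what the job below relies on, namely the column families article and author, with article:tags holding a comma-separated tag list and author:nickname holding the author's nickname.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BlogTableSetup {

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.226.129");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {

            // Create the blog table with the two column families the job scans.
            TableName name = TableName.valueOf("blog");
            if (!admin.tableExists(name)) {
                HTableDescriptor desc = new HTableDescriptor(name);
                desc.addFamily(new HColumnDescriptor("article"));
                desc.addFamily(new HColumnDescriptor("author"));
                admin.createTable(desc);
            }

            // Insert one illustrative row: a comma-separated tag list and a nickname.
            try (Table table = connection.getTable(name)) {
                Put put = new Put(Bytes.toBytes("row-1"));
                put.addColumn(Bytes.toBytes("article"), Bytes.toBytes("tags"),
                        Bytes.toBytes("Hadoop,HBase,Zookeeper"));
                put.addColumn(Bytes.toBytes("author"), Bytes.toBytes("nickname"),
                        Bytes.toBytes("Berg-OSChina"));
                table.put(put);
            }
        }
    }
}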
import java.io.IOException;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Select a table in HBase, define the columns to read,
 * and write their content to an HDFS file.
 */
public class HBaseAndMapReduce2 {

    public static void main(String[] args) throws Exception {
        System.exit(run());
    }

    public static int run() throws Exception {
        Configuration conf = new Configuration();
        conf = HBaseConfiguration.create(conf);
        conf.set("hbase.zookeeper.quorum", "192.168.226.129");

        Job job = Job.getInstance(conf, "findFriend");
        job.setJarByClass(HBaseAndMapReduce2.class);

        Scan scan = new Scan();
        // Only the columns useful to the job: article:tags and author:nickname
        scan.addColumn(Bytes.toBytes("article"), Bytes.toBytes("tags"));
        scan.addColumn(Bytes.toBytes("author"), Bytes.toBytes("nickname"));

        // The mapper's input key type, ImmutableBytesWritable, comes from the HBase data.
        /*
         * public static void initTableMapperJob(String table, Scan scan,
         *         Class<? extends TableMapper> mapper,
         *         Class<?> outputKeyClass,
         *         Class<?> outputValueClass, Job job)
         */
        // Ensure that the blog table exists and that its structure matches this article.
        TableMapReduceUtil.initTableMapperJob("blog", scan,
                FindFriendMapper.class, Text.class, Text.class, job);

        DateFormat df = new SimpleDateFormat("yyyyMMddHHmmssS");
        FileOutputFormat.setOutputPath(job,
                new Path("hdfs://192.168.226.129:9000/hbasemapreduce1/" + df.format(new Date())));

        job.setReducerClass(FindFriendReducer.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    // Mapper output key and value are both Text
    public static class FindFriendMapper extends TableMapper<Text, Text> {

        // key is the row key in HBase; value holds all the cells of that row
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context)
                throws IOException, InterruptedException {
            Text v = null;
            String[] kStrs = null;
            List<Cell> cs = value.listCells();
            for (Cell cell : cs) {
                System.out.println("Cell--->: " + cell);
                if ("tags".equals(Bytes.toString(CellUtil.cloneQualifier(cell)))) {
                    kStrs = Bytes.toString(CellUtil.cloneValue(cell)).split(",");
                } else if ("nickname".equals(Bytes.toString(CellUtil.cloneQualifier(cell)))) {
                    v = new Text(CellUtil.cloneValue(cell));
                }
            }
            if (kStrs == null) {
                return; // skip rows that carry no tags
            }
            for (String kStr : kStrs) {
                context.write(new Text(kStr.toLowerCase()), v);
            }
        }
    }

    public static class FindFriendReducer extends Reducer<Text, Text, Text, Text> {

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            System.out.println("key---> " + key);
            StringBuilder sb = new StringBuilder();
            for (Text text : values) {
                System.out.println("value--> " + text);
                sb.append((sb.length() > 0 ? "," : "") + text.toString());
            }
            context.write(key, new Text(sb.toString()));
        }
    }
}
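Once the job finishes, the reducer's output can be read back from HDFS to confirm the export. Below is a small sketch that is not part of the original article: the class name ReadJobOutput is made up, and the timestamped directory in the path is a placeholder, since the job names its output directory with the current time and a single reducer writes part-r-00000.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadJobOutput {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.226.129:9000"), conf);

        // Replace the timestamp with the directory the job actually created.
        Path output = new Path("/hbasemapreduce1/20150531120000000/part-r-00000");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(output)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Each line is: tag, a tab, then the comma-joined nicknames.
                System.out.println(line);
            }
        }
    }
}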
Output structure:
hadoop	Berg-OSChina,BergBerg
hbase	OSChina,BergBerg
zookeeper	OSChina,BergBerg

Thank you for reading. The above covers how to write the data in HBase into an HDFS file. After studying this article, I believe you have a deeper understanding of the topic; the specifics still need to be verified in practice. The editor will push more articles on related knowledge points for you, so welcome to follow!