This article explains how to connect to and use HBase from Spark. The method described here is simple, fast, and practical, so interested readers are encouraged to follow along.
I. Environment preparation
1. Copy the required libraries from the HBase lib directory into a new directory under the Spark installation, such as lib/hbase; Spark depends on these jars. The list is: guava-12.0.1.jar, htrace-core-3.1.0-incubating.jar, protobuf-java-2.5.0.jar, plus every jar whose name starts with hbase. Nothing else is needed, and copying all of the HBase jars will cause errors. A sketch of the copy is shown below.
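As a concrete sketch of this step (the HBase and Spark install paths are the ones used elsewhere in this article; adjust them for your machine):

# Create the directory Spark will load the HBase jars from
mkdir -p /usr/local/spark-1.5.1-bin-hadoop2.4/lib/hbase
cd /usr/local/hbase-1.0.1.1/lib
# Copy only the jars listed above; copying everything causes conflicts
cp guava-12.0.1.jar htrace-core-3.1.0-incubating.jar protobuf-java-2.5.0.jar hbase*.jar \
   /usr/local/spark-1.5.1-bin-hadoop2.4/lib/hbase/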
2. Modify the Spark configuration file (spark-env.sh) by adding a line at the end:
export SPARK_CLASSPATH=/usr/local/spark-1.5.1-bin-hadoop2.4/lib/hbase/*
3. Restart the Spark cluster so the new classpath takes effect, for example as sketched below.
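For a standalone cluster, restarting from the master node might look like this (a sketch; it assumes the standard sbin scripts shipped with the Spark distribution):

cd /usr/local/spark-1.5.1-bin-hadoop2.4
# Stop and restart the master and workers so spark-env.sh is re-read
./sbin/stop-all.sh
./sbin/start-all.sh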
II. Code
package com.xx;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
import org.apache.hadoop.hbase.util.Base64;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.io.IOException;

/**
 * Spark reads HBase data.
 * @author Chenj
 */
public class ReadHBase {

    private static final Log LOG = LogFactory.getLog(ReadHBase.class);
    private static final String appName = "hbase test";
    private static final String master = "spark://192.168.1.21:7077";

    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName(appName)
                .setMaster(master)
                .setSparkHome(System.getenv("SPARK_HOME"))
                .setJars(new String[]{System.getenv("jars")});

        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.zookeeper.property.clientPort", "2181"); // set the ZooKeeper client port
        configuration.set("hbase.zookeeper.quorum", "192.168.1.19");      // set the ZooKeeper quorum
        configuration.addResource("/usr/local/hbase-1.0.1.1/conf/hbase-site.xml"); // load the HBase configuration
        configuration.set(TableInputFormat.INPUT_TABLE, "heartSocket");   // the table to read

        JavaSparkContext sc = new JavaSparkContext(conf);

        // Scan only the "d" column family and its "consumeTime" column
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("d"));
        scan.addColumn(Bytes.toBytes("d"), Bytes.toBytes("consumeTime"));

        try {
            // TableInputFormat expects the Scan to be serialized as a Base64 string
            ClientProtos.Scan proto = ProtobufUtil.toScan(scan);
            String scanToString = Base64.encodeBytes(proto.toByteArray());
            configuration.set(TableInputFormat.SCAN, scanToString);
        } catch (IOException e) {
            e.printStackTrace();
        }

        JavaPairRDD<ImmutableBytesWritable, Result> rdd = sc.newAPIHadoopRDD(
                configuration, TableInputFormat.class,
                ImmutableBytesWritable.class, Result.class);

        LOG.info("total number: " + rdd.count());
    }
}
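Before submitting the job, the table the code scans must exist. A minimal sketch using the hbase shell (assuming HBase is already running; the row key and value here are made up purely for testing, so that rdd.count() returns a non-zero result):

# Create the "heartSocket" table with column family "d" and insert one test cell
hbase shell <<'EOF'
create 'heartSocket', 'd'
put 'heartSocket', 'row1', 'd:consumeTime', '123'
EOF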
III. Submit and run
./spark-submit --class com.xx.ReadHBase --master spark://ser21:7077 /usr/local/spark-1.0-SNAPSHOT.jar
At this point, you should have a deeper understanding of how Spark connects to and uses HBase. You might as well try it out in practice!