SLTechnology News & Howtos > Internet Technology
Shulou (Shulou.com) 06/03 Report, updated 2025-03-29
I. Environment
1. ZooKeeper cluster
10.10.103.144:2181,10.10.103.246:2181,10.10.103.62:2181
2. Metastore database (MySQL)
10.10.103.246:3306
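For readability, the endpoints above can be captured as shell variables. This is purely illustrative; the names `ZK_QUORUM`, `METASTORE_DB_HOST`, and `METASTORE_DB_PORT` are my own and are not read by Hive itself:

```shell
# Hypothetical helper variables for the environment above; they simply
# keep the later commands readable and are not part of any Hive config.
ZK_QUORUM="10.10.103.144:2181,10.10.103.246:2181,10.10.103.62:2181"
METASTORE_DB_HOST="10.10.103.246"
METASTORE_DB_PORT="3306"

# Quick sanity check: the quorum string should list three nodes.
NODE_COUNT=$(echo "$ZK_QUORUM" | tr ',' '\n' | wc -l)
echo "zk nodes: $NODE_COUNT"
```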
II. Installation
1. Install the configuration database
yum -y install mysql55-server mysql55
# Then, in the MySQL client (e.g. mysql -u root):
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'10.10.103.246' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'127.0.0.1' IDENTIFIED BY 'hive';
CREATE DATABASE IF NOT EXISTS metastore;
USE metastore;
SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/hive-schema-1.1.0.mysql.sql;
-- If the schema script above reports an error, also execute:
SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/hive-txn-schema-0.13.0.mysql.sql;
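One convenient pattern is to collect the bootstrap SQL into a file so it can be reviewed before being fed to the server. This is just a sketch; the path `/tmp/metastore-init.sql` is an arbitrary choice of mine, not anything Hive expects:

```shell
# Write the metastore bootstrap SQL to a reviewable file before running it.
# /tmp/metastore-init.sql is an arbitrary path chosen for this sketch.
cat > /tmp/metastore-init.sql <<'EOF'
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'10.10.103.246' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'127.0.0.1' IDENTIFIED BY 'hive';
CREATE DATABASE IF NOT EXISTS metastore;
EOF

# After review, on the database host it would be applied with:
#   mysql -u root < /tmp/metastore-init.sql
grep -c "GRANT ALL PRIVILEGES" /tmp/metastore-init.sql
```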
2. Install hive
yum -y install hive hive-jdbc hive-metastore hive-server2
3. Configuration
vim /etc/hive/conf/hive-site.xml
<property><name>hive.execution.engine</name><value>spark</value></property>
<property><name>hive.enable.spark.execution.engine</name><value>true</value></property>
<property><name>spark.master</name><value>yarn-client</value></property>
<property><name>spark.eventLog.enabled</name><value>true</value></property>
<property><name>spark.eventLog.dir</name><value>hdfs://mycluster:8020/spark-log</value></property>
<property><name>spark.serializer</name><value>org.apache.spark.serializer.KryoSerializer</value></property>
<property><name>spark.executor.memory</name><value>1g</value></property>
<property><name>spark.driver.memory</name><value>1g</value></property>
<property><name>spark.executor.extraJavaOptions</name><value>-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"</value></property>
<property><name>hive.metastore.uris</name><value>thrift://10.10.103.246:9083</value></property>
<property><name>hive.metastore.local</name><value>false</value></property>
<property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://10.10.103.246/metastore</value></property>
<property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>
<property><name>javax.jdo.option.ConnectionUserName</name><value>hive</value></property>
<property><name>javax.jdo.option.ConnectionPassword</name><value>hive</value></property>
<property><name>datanucleus.autoCreateSchema</name><value>false</value></property>
<property><name>datanucleus.fixedDatastore</name><value>true</value></property>
<property><name>datanucleus.autoStartMechanism</name><value>SchemaTable</value></property>
<property><name>hive.support.concurrency</name><value>true</value></property>
<property><name>hive.zookeeper.quorum</name><value>10.10.103.144:2181,10.10.103.246:2181,10.10.103.62:2181</value></property>
<property><name>hive.aux.jars.path</name><value>file:///usr/lib/hive/lib/zookeeper.jar</value></property>
<property><name>hive.metastore.schema.verification</name><value>false</value></property>
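A hand-edited hive-site.xml is easy to break, so a quick sanity check is to confirm the file is well-formed XML and that the key properties are present. A sketch, assuming python3 is on the node (note that a real hive-site.xml wraps all `<property>` elements in a single `<configuration>` root); for the runnable example below a tiny stand-in file is generated, while on a real node you would point `HIVE_SITE` at `/etc/hive/conf/hive-site.xml`:

```shell
# Sketch: validate a hive-site.xml-style file with Python's stdlib XML
# parser. HIVE_SITE defaults to a throwaway path for demonstration.
HIVE_SITE="${HIVE_SITE:-/tmp/hive-site-check.xml}"
cat > "$HIVE_SITE" <<'EOF'
<configuration>
  <property><name>hive.execution.engine</name><value>spark</value></property>
  <property><name>hive.metastore.uris</name><value>thrift://10.10.103.246:9083</value></property>
</configuration>
EOF

# Parse the file; a malformed file makes ET.parse raise and the step fail.
python3 - "$HIVE_SITE" <<'PY'
import sys
import xml.etree.ElementTree as ET
tree = ET.parse(sys.argv[1])
names = [p.findtext("name") for p in tree.getroot().findall("property")]
assert "hive.execution.engine" in names
print("properties:", len(names))
PY
```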
4. Start the metastore service
/etc/init.d/hive-metastore start
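If the service does not appear to come up, a quick first check is whether the metastore's Thrift port (9083, as configured in hive.metastore.uris above) is accepting connections. A small sketch that assumes bash, using its built-in `/dev/tcp` redirection so no netcat is needed:

```shell
# Return 0 if host:port accepts a TCP connection, non-zero otherwise.
# Relies on bash's /dev/tcp pseudo-device; the subshell keeps the fd scoped.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Example (on the metastore host):
#   port_open 10.10.103.246 9083 && echo "metastore up" || echo "metastore down"
```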
5. Verification
[root@ip-10-10-103246 conf]# hive
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
17/05/12 15:04:47 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist
17/05/12 15:04:47 WARN conf.HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> create table navy1 (ts BIGINT, line STRING);
OK
Time taken: 0.925 seconds
hive> select count(*) from navy1;
Query ID = root_20170512150505_8f7fb28e-cf32-4efc-bb95-6add37f13fb6
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Spark Job = f045ab15-baaa-40e7-9641-d821fa313abe
Running with YARN Application = application_1494472050574_0014
Kill Command = /usr/lib/hadoop/bin/yarn application -kill application_1494472050574_0014
Query Hive on Spark job[0] stages: 0, 1
Status: Running (Hive on Spark job[0])
Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
2017-05-12 15:05:30,835 Stage-0_0: 0(+1)/1  Stage-1_0: 0/1
...
Stage-0_0: 1 Finished  Stage-1_0: 1 Finished
Status: Finished successfully in 16.05 seconds
OK
0
Time taken: 19.325 seconds, Fetched: 1 row(s)
hive>
6. Problems encountered
Error report:
hive> select count(*) from test;
Query ID = root_20170512143232_48d9f363-7b60-4414-9310-e6348104f476
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
    at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.initiateSparkConf(HiveSparkClientFactory.java:74)
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.setup(SparkSessionManagerImpl.java:81)
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:102)
    at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:111)
    at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:99)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1979)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1692)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:775)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 24 more
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. org/apache/hadoop/hbase/HBaseConfiguration
Resolve:
The stack trace shows HiveSparkClientFactory failing to load org.apache.hadoop.hbase.HBaseConfiguration, so the HBase client jars need to be on Hive's classpath. Installing the hbase package provides them:
yum -y install hbase
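Before or after installing, it can be worth confirming which jar (if any) on the Hive library path actually contains the missing class. A sketch, assuming python3 is available (jars are zip archives, so Python's stdlib `zipfile` can list their entries without the JDK's `jar` tool); `scan_jars` is a helper name of my own:

```shell
# Sketch: scan a directory of jars for the class the stack trace says
# is missing. Each .jar is opened as a zip and its entry list searched.
scan_jars() {
  python3 - "$1" <<'PY'
import sys, glob, os, zipfile
target = "org/apache/hadoop/hbase/HBaseConfiguration.class"
hits = []
for jar in glob.glob(os.path.join(sys.argv[1], "*.jar")):
    try:
        if target in zipfile.ZipFile(jar).namelist():
            hits.append(jar)
    except zipfile.BadZipFile:
        pass  # skip corrupt archives rather than aborting the scan
print("\n".join(hits) if hits else "class not found in any jar")
PY
}

# On a real node: scan_jars /usr/lib/hive/lib
```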