2025-01-18 Update From: SLTechnology News&Howtos
This article explains the installation steps for hive-1.2.1. The method described here is simple, fast, and practical, so let's walk through the installation step by step.
Environment introduction

IP            hostname  deployment
192.168.2.10  bi10      hadoop-2.6.2, hive-1.2.1, hive metastore
192.168.2.12  bi12      hadoop-2.6.2, hive-1.2.1, hive metastore
192.168.2.13  bi13      hadoop-2.6.2, hive-1.2.1

MySQL configuration
Create a new user hive with password hive, then grant privileges. Database IP: 192.168.2.11, port: 3306.

CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
CREATE DATABASE hive;
ALTER DATABASE hive CHARACTER SET latin1;

Hive configuration
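Before moving on, it can be worth confirming the account and database were created as intended. A quick check (not part of the original article; run on the MySQL host 192.168.2.11 as an admin user):

```sql
-- Confirm the hive account exists and is reachable from any host
SELECT user, host FROM mysql.user WHERE user = 'hive';
-- Confirm the grants took effect
SHOW GRANTS FOR 'hive'@'%';
-- Should report DEFAULT CHARACTER SET latin1
SHOW CREATE DATABASE hive;
```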
Extract hive-1.2.1 to the /home/hadoop/work/hive-1.2.1 directory, then modify the configuration files.

Create hive-site.xml: enter hive's conf directory and copy it from the bundled template (in hive-1.2.1 the template for this file is hive-default.xml.template):

[hadoop@bi13 conf]$ cp hive-default.xml.template hive-site.xml
[hadoop@bi13 conf]$ vim hive-site.xml
hive-site.xml parameter descriptions:

hive.metastore.warehouse.dir: location of the hive data warehouse in HDFS. Because the hadoop cluster uses HA, hdfs://masters/user/hive/warehouse is used here, without a specific namenode host and port.
hive.metastore.uris: the metastore server URIs that hive clients connect to; we use the default port 9083.
hive.exec.scratchdir: likewise, because of the HA configuration, we use hdfs://masters/user/hive/tmp.
javax.jdo.option.ConnectionPassword: mysql database password.
javax.jdo.option.ConnectionDriverName: mysql database driver.
javax.jdo.option.ConnectionURL: mysql database URL.
javax.jdo.option.ConnectionUserName: mysql database user name.
The values of the following items must be written as concrete paths, otherwise problems can occur:

hive.querylog.location
hive.server2.logging.operation.log.location
hive.exec.local.scratchdir
hive.downloaded.resources.dir

The resulting settings (name, value, description):

hive.metastore.warehouse.dir
    hdfs://masters/user/hive/warehouse
    Location of default database for the warehouse

hive.metastore.uris
    thrift://bi10:9083,thrift://bi12:9083
    Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.

hive.exec.scratchdir
    hdfs://masters/user/hive/tmp
    HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.

javax.jdo.option.ConnectionPassword
    hive
    password to use against metastore database

javax.jdo.option.ConnectionDriverName
    com.mysql.jdbc.Driver
    Driver class name for a JDBC metastore

javax.jdo.option.ConnectionURL
    jdbc:mysql://192.168.2.11:3306/hive?createDatabaseIfNotExist=true
    JDBC connect string for a JDBC metastore

javax.jdo.option.ConnectionUserName
    hive
    Username to use against metastore database

hive.querylog.location
    /home/hadoop/work/hive-1.2.1/tmp/iotmp
    Location of Hive run time structured log file

hive.server2.logging.operation.log.location
    /home/hadoop/work/hive-1.2.1/tmp/operation_logs
    Top level directory where operation logs are stored if logging functionality is enabled

hive.exec.local.scratchdir
    /home/hadoop/work/hive-1.2.1/tmp/${system:user.name}
    Local scratch space for Hive jobs

hive.downloaded.resources.dir
    /home/hadoop/work/hive-1.2.1/tmp/${hive.session.id}_resources
    Temporary local directory for added resources in the remote file system.
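Put together, these settings go into hive-site.xml as <property> entries. A sketch with the values from this article (metastore and JDBC properties shown; the log and scratch directories follow the same pattern):

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://masters/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://bi10:9083,thrift://bi12:9083</value>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>hdfs://masters/user/hive/tmp</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.2.11:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
  <!-- hive.querylog.location, hive.server2.logging.operation.log.location,
       hive.exec.local.scratchdir and hive.downloaded.resources.dir
       follow the same pattern, with the concrete paths listed above -->
</configuration>
```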
Replace jline-0.9.94.jar under hadoop with jline-2.12.jar from hive:

mv hive-1.2.1/lib/jline-2.12.jar hadoop-2.6.2/share/hadoop/yarn/lib/
mv hadoop-2.6.2/share/hadoop/yarn/lib/jline-0.9.94.jar hadoop-2.6.2/share/hadoop/yarn/lib/jline-0.9.94.jar.bak
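The swap above can be rehearsed safely before touching a live cluster. A minimal dry run on empty placeholder files in a temp directory (the real paths are hive-1.2.1/lib and hadoop-2.6.2/share/hadoop/yarn/lib; note this sketch uses cp for jline-2.12.jar so hive keeps its own copy, whereas the article uses mv):

```shell
#!/bin/sh
set -e
# Stand-in directory layout for hive's and yarn's lib folders
work=$(mktemp -d)
mkdir -p "$work/hive/lib" "$work/yarn/lib"
touch "$work/hive/lib/jline-2.12.jar" "$work/yarn/lib/jline-0.9.94.jar"

# 1. Park hadoop's old jline out of the way so it can't shadow the new one
mv "$work/yarn/lib/jline-0.9.94.jar" "$work/yarn/lib/jline-0.9.94.jar.bak"
# 2. Put hive's jline-2.12 on yarn's classpath
cp "$work/hive/lib/jline-2.12.jar" "$work/yarn/lib/"

ls "$work/yarn/lib"
```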
Repeat the above steps on every machine where hive is deployed.
Start metastore service
Start the metastore service on bi10 and bi12, respectively:
nohup hive --service metastore > /dev/null 2>&1 &

Test
Enter hive and check that everything works. If there is a problem, start it with hive --hiveconf hive.root.logger=DEBUG,console to see the detailed log.
[hadoop@bi13 work]$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/work/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [...]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/home/hadoop/work/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive>

By now you should have a deeper understanding of the installation steps of hive-1.2.1; give them a try in practice!
© 2024 shulou.com SLNews company. All rights reserved.