2025-01-18 Update, from SLTechnology News & Howtos > Servers
Shulou (Shulou.com) 06/01 Report
This article introduces the installation steps for hive-0.12. Many people have questions about this process, so the steps below have been collected and organized into a simple, workable procedure. Follow along to study it!
Hive installation and deployment
(Note on versions: on Hadoop 1.x, install and test hive-0.9.0; on Hadoop 2.x, install and test hive-0.12.0 or a newer release.)
Hive-0.9.0 download address: http://pan.baidu.com/s/1rj6f8
Hive-0.12.0 download address: http://mirrors.hust.edu.cn/apache/hive/hive-0.12.0/
1. Copy hive-0.12.0.tar.gz to /home/hadoop.
2. Decompress hive-0.12.0.tar.gz and rename the directory (the commands below use /usr/local as the working directory; adjust the path to wherever you copied the archive):
# cd /usr/local
# tar -zxvf hive-0.12.0.tar.gz
# mv hive-0.12.0 hive
3. Modify environment variables
Edit the /etc/profile file:
# vi /etc/profile
Add:
export HIVE_HOME=/home/hadoop/hive
Modify the PATH line to:
export PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
Save and exit, then reload the profile:
# source /etc/profile
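The profile additions above can be sketched as a single shell snippet. The paths (JAVA_HOME, HADOOP_HOME, HIVE_HOME) are the article's own example locations; adjust them to your installation.

```shell
# Environment variables for Hive, as described in step 3.
# All three paths are this article's example locations, not universal defaults.
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/home/hadoop/hadoop2
export HIVE_HOME=/home/hadoop/hive
export PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
# Quick check that the Hive bin directory made it onto PATH:
echo "$PATH" | grep -q "$HIVE_HOME/bin" && echo "PATH OK"
```

Putting these lines in /etc/profile makes them apply system-wide after `source /etc/profile`; they can also go in a single user's ~/.bashrc.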
4. Configure the Hive system files
4.1. Rename the template files in the conf directory:
# cd $HIVE_HOME/conf
# mv hive-env.sh.template hive-env.sh
# mv hive-default.xml.template hive-site.xml
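The two renames in step 4.1 can be tried safely against a scratch directory first; here CONF_DIR stands in for $HIVE_HOME/conf, and the template files are created empty just for the demonstration.

```shell
# Demonstration of step 4.1 in a throwaway directory.
CONF_DIR=$(mktemp -d)
touch "$CONF_DIR/hive-env.sh.template" "$CONF_DIR/hive-default.xml.template"
# The same two renames the article performs in $HIVE_HOME/conf:
mv "$CONF_DIR/hive-env.sh.template"      "$CONF_DIR/hive-env.sh"
mv "$CONF_DIR/hive-default.xml.template" "$CONF_DIR/hive-site.xml"
ls "$CONF_DIR"   # hive-env.sh  hive-site.xml
```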
4.2. Modify $HIVE_HOME/bin/hive-config.sh by adding the following three lines:
export JAVA_HOME=/usr/local/jdk
export HIVE_HOME=/home/hadoop/hive
export HADOOP_HOME=/home/hadoop/hadoop2
4.3. Start Hive
Before starting Hive, remember to start Hadoop first (Hive operates on data stored in HDFS).
Use the jps command to view the currently running Java processes, then start Hive:
# jps
# hive
4.4. If an error is reported, modify hive-site.xml (in the vi editor, search with /auth):
[Fatal Error] hive-site.xml:2002:16: The element type "value" must be terminated by the matching end-tag "</value>".
The problem is on line 2002 at character 16: a <value> element containing "auth" is not closed with a matching </value> tag. Add the missing end tag and save the file.
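The defect and its fix can be reproduced in miniature. The snippet below writes a scratch file with the same kind of unterminated <value> element and repairs it with sed; the property layout is illustrative, while in the real hive-0.12.0 template the broken line sits near line 2002 of hive-site.xml.

```shell
# Reproduce the malformed element: <value>auth opened but never closed.
SITE=$(mktemp)
printf '<property>\n  <value>auth\n</property>\n' > "$SITE"
# Fix: terminate the value element with the matching end tag.
sed -i 's|<value>auth|<value>auth</value>|' "$SITE"
grep -c '</value>' "$SITE"   # 1
```

In practice you would make the same one-character-class edit by hand in vi after jumping to the reported line.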
hive> show tables;
At this point another error was reported:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
After changing the value of hive.metastore.schema.verification in hive-site.xml to false, the error went away.
hive> create table test (id int, name string);
hive> quit;
5. Verify that the table was created successfully
Method 1: list the warehouse directory:
# hadoop fs -ls /user/hive
(The location is controlled by the hive.metastore.warehouse.dir parameter.)
Method 2: if the NameNode web UI is configured, open http://cloud4:50070 and check that /user/hive exists.
6. Hive's metastore
The metastore is Hive's centralized store of metadata. By default it uses the embedded Derby database as its storage engine.
Drawback of the Derby engine: only one session can be open at a time.
Using MySQL as an external storage engine allows multiple users to access the metastore at the same time.
For this reason MySQL is usually recommended, but it needs to be configured.
6.1. Configure a MySQL metastore
6.1.1. Upload mysql-connector-java-5.1.10.jar to $HIVE_HOME/lib.
6.1.2. Log in to MySQL and create a database named hive:
# mysql -uroot -padmin
mysql> create database hive;
mysql> GRANT ALL ON hive.* TO root@'%' IDENTIFIED BY 'admin';
mysql> flush privileges;
mysql> set global binlog_format='MIXED';  (it does not matter if this reports an error)
6.1.3. Change the character set of the MySQL database to latin1 (alter database):
Manual command (here hive is the database name):
mysql> alter database hive character set latin1;
6.1.4. Modify $HIVE_HOME/conf/hive-site.xml with the following properties. In the connection URL, hadoop0 is the machine where MySQL runs (cloud4: the local machine; 192.168.56.1: the gateway IP). If MySQL runs on the Hive machine itself, install MySQL on Linux first; see the supplementary MySQL installation notes below. Also, if show tables reports an error with a hostname such as cloud4, changing it to localhost can help.

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hadoop0:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>admin</value>
</property>
7. Hive's running modes (the execution environment of a task)
Start Hive in command-line mode either way:
1: run the executable directly: # /hive/bin/hive
2: or run: # hive --service cli
Tasks can run in two modes, local and cluster, selected through mapred.job.tracker:
hive> SET mapred.job.tracker=local;
Other ways to start Hive:
1. The Hive web interface (port 9999), for accessing Hive through a browser:
# hive --service hwi &
Then visit http://hadoop0:9999/hwi/
2. The Hive remote service (port 10000):
# hive --service hiveserver &
8. Hive compared with traditional databases
9. Hive's data types
Basic data types:
tinyint / smallint / int / bigint
float / double
boolean
string
Complex data types:
array / map / struct
There is no date/datetime type.
10. Data storage in Hive: characteristics
Hive's data storage is built on Hadoop HDFS.
Hive has no special data storage format of its own.
The storage structure mainly includes: databases, files, tables, and views.
By default Hive can load plain text files (TextFile) directly, and it also supports SequenceFile.
When creating a table, specify the column and row delimiters of the data, and Hive can then parse it.
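As an illustration of the delimiter point above, a table definition might look like the following HiveQL (the table name, columns, and delimiters here are hypothetical, chosen to match a tab-separated text file):

```sql
-- Hypothetical table over tab-separated text: one record per line,
-- columns separated by tab characters.
CREATE TABLE logs (id INT, msg STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
```

With these delimiters declared, a plain text file loaded into the table is parsed by Hive without any separate import step.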
11. Hive's data model: databases
Similar to a database in a traditional DBMS.
The default database is "default".
After running the # hive command, if you do not issue hive> use <database>, the system's default database is used.
You can also select it explicitly: hive> use default;
Create a new database:
hive> create database test_dw;
11.1. Modify the warehouse directory: /hive/conf/hive-site.xml
The warehouse is the data-warehouse directory specified by ${hive.metastore.warehouse.dir} in hive-site.xml.
Its value can be changed to, for example: /hive
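For reference, the property takes this shape in hive-site.xml (the /hive value is the example from above; the default is /user/hive/warehouse):

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/hive</value>
</property>
```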
Each table has a corresponding directory in which Hive stores its data. For example, a table test has the HDFS path /warehouse/test.
All table data (excluding external tables) is stored in this directory.
When a table is dropped, both its metadata and its data are deleted.
Common operations:
Create a data file t1.dat.
Create a table:
hive> create table t1 (key string);
Load data:
hive> load data local inpath '/root/inner_table.dat' into table t1;
View data:
hive> select * from t1;
hive> select count(*) from t1;
Delete the table:
hive> drop table t1;
This concludes the study of the installation steps for hive-0.12. Theory works best when paired with practice, so go and try it!