This article introduces how to install and configure Hive. Many people run into problems during the actual setup, so the steps below walk through a working configuration from start to finish. I hope you read carefully and learn something useful!
About Hive
(1) Hive does not support OLTP processing.
(2) Hive 1.2 and later requires Java 1.7 or later.
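As a quick check before installing (not part of the original steps), you can verify the Java version and JAVA_HOME on the target machine:
[root@Darren2 ~]# java -version
[root@Darren2 ~]# echo $JAVA_HOME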
Hive Installation
(1) Hive can be installed on any machine, as long as the Hadoop software is present on that machine (the HDFS and YARN processes do not need to be running), because Hive relies on some of the jar packages shipped with Hadoop; see the hive-env.sh sketch after this list.
(2) By default, Hive 1.x creates a local directory named metastore_db in which it stores the metadata generated by the user. This is inconvenient, because each user ends up seeing different content, so MySQL can be used to store the metadata instead.
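Once the tarball has been unpacked (see below), if Hive does not pick up the Hadoop installation automatically, a minimal sketch is to set HADOOP_HOME in conf/hive-env.sh (the path /usr/local/hadoop is taken from the job log later in this article; adjust it to your own installation):
[root@Darren2 apache-hive-1.2.2-bin]# cp conf/hive-env.sh.template conf/hive-env.sh
[root@Darren2 apache-hive-1.2.2-bin]# vim conf/hive-env.sh
# point Hive at the existing Hadoop installation (path assumed)
HADOOP_HOME=/usr/local/hadoop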
Download link:
http://mirror.olnevhost.net/pub/apache/hive/
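For example, the tarball used below can be fetched straight from the mirror (the exact sub-directory and file name are assumptions based on the usual Apache mirror layout):
[root@Darren2 local]# cd /usr/local
[root@Darren2 local]# wget http://mirror.olnevhost.net/pub/apache/hive/hive-1.2.2/apache-hive-1.2.2-bin.tar.gz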
[root@Darren2 local]# tar -zxvf apache-hive-1.2.2-bin.tar.gz
[root@Darren2 apache-hive-1.2.2-bin]# bin/hive
hive> create database testdb;
hive> show databases;
hive> use testdb;
hive> create table t1(c1 int,c2 string)
> row format delimited
> fields terminated by ',';
[root@Darren2 hive]# hdfs dfs -ls -R /user/hive/warehouse/
drwxr-xr-x - root supergroup 0 2017-11-25 14:25 /user/hive/warehouse/testdb.db
drwxr-xr-x - root supergroup 0 2017-11-25 14:25 /user/hive/warehouse/testdb.db/t1
[root@Darren2 hive]# cat /tmp/t1.data
1,aaa
2,bbb
3,ccc
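The queries below read these rows from t1, so the file must first be loaded into the table; the load step is not shown in the original session, but it would typically be:
hive> load data local inpath '/tmp/t1.data' into table t1;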
hive> select * from t1;
1 aaa
2 bbb
3 ccc
hive> select * from t1 where c2 = 'bbb';
2 bbb
hive> select count(*) from t1 group by c1;
Query ID = root_20171125143038_249cc07f-270b-422c-a165-4da49e05e6c7
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapreduce.job.reduces=
Starting Job = job_1511577448141_0003, Tracking URL =http://Darren2:8088/proxy/application_1511577448141_0003/
Kill Command = /usr/local/hadoop/bin/hadoop job -kill job_1511577448141_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-11-25 14:30:51,203 Stage-1 map = 0%, reduce = 0%
2017-11-25 14:31:00,747 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.52 sec
2017-11-25 14:31:09,057 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.86 sec
MapReduce Total cumulative CPU time: 2 seconds 860 msec
Ended Job = job_1511577448141_0003
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 2.86 sec HDFS Read: 6747 HDFS Write: 6 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 860 msec
OK
1
1
1
Time taken: 31.234 seconds, Fetched: 3 row(s)
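The job log above also lists the settings that control reducer parallelism. For example, to force a fixed number of reducers for the next query (the value 2 is only illustrative):
hive> set mapreduce.job.reduces=2;
hive> select count(*) from t1 group by c1;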
You can also follow the job's progress in a browser through the YARN ResourceManager UI at http://192.168.163.102:8088/.
How to store the metadata in MySQL
Create a dedicated hive database in MySQL; when Hive starts, it generates the corresponding metadata tables inside that database.
(1) Create a configuration file hive-site.xml that connects to mysql
[root@Darren2 conf]# vim /usr/local/hive-1.2.2/conf/hive-site.xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?create=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>147258</value>
  </property>
</configuration>
(2) Download the MySQL JDBC driver mysql-connector-java-5.1.45.tar.gz
https://dev.mysql.com/downloads/file/?id=474257
After decompressing it, copy the jar package mysql-connector-java-5.1.45-bin.jar into the hive-1.2.2/lib directory.
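Assuming the tarball was downloaded to /usr/local (the location is only an assumption), the steps look roughly like this:
[root@Darren2 local]# tar -zxvf mysql-connector-java-5.1.45.tar.gz
[root@Darren2 local]# cp mysql-connector-java-5.1.45/mysql-connector-java-5.1.45-bin.jar /usr/local/hive-1.2.2/lib/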
(3) Create the hive database in MySQL
root@localhost [(none)]>create database hive;
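Hive creates its metadata tables in this database on first start. Alternatively, the schema can be initialized explicitly with the schematool shipped in Hive's bin directory (optional, not part of the original steps):
[root@Darren2 bin]# cd /usr/local/hive-1.2.2/bin/
[root@Darren2 bin]# ./schematool -dbType mysql -initSchema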
(4) Testing
[root@Darren2 conf]# cd /usr/local/hive-1.2.2/bin/
[root@Darren2 bin]# ./hive
How to connect to hiveserver2 using the beeline client
Start the hiveserver2 service on one node and check whether port 10000 is listening to confirm that it started successfully, then connect to hiveserver2 from another node with the beeline client. Connect as user root with an empty password.
#Start hiveserver2
[root@Darren2 bin]# ./hiveserver2
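To confirm that hiveserver2 is up, check that port 10000 is listening, as mentioned above (netstat is just one option; any equivalent tool works):
[root@Darren2 bin]# netstat -ntlp | grep 10000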
#Connect with another node:
[root@Darren2 bin]# ./beeline
beeline> !connect jdbc:hive2://192.168.163.102:10000
Connecting to jdbc:hive2://192.168.163.102:10000
Enter username for jdbc:hive2://192.168.163.102:10000: root
Enter password for jdbc:hive2://192.168.163.102:10000:
Connected to: Apache Hive (version 1.2.2)
Driver: Hive JDBC (version 1.2.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://192.168.163.102:10000> show databases;
+----------------+--+
| database_name |
+----------------+--+
| default |
| testdb1 |
| testdb2 |
+----------------+--+
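From this beeline session you can switch to a database and run queries just as in the hive CLI, for example (the database name is taken from the listing above):
0: jdbc:hive2://192.168.163.102:10000> use testdb1;
0: jdbc:hive2://192.168.163.102:10000> show tables;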
"Hive installation and configuration method" content is introduced here, thank you for reading. If you want to know more about industry-related knowledge, you can pay attention to the website. Xiaobian will output more high-quality practical articles for everyone!
Welcome to subscribe "Shulou Technology Information " to get latest news, interesting things and hot topics in the IT industry, and controls the hottest and latest Internet news, technology news and IT industry trends.
Views: 0
*The comments in the above article only represent the author's personal views and do not represent the views and positions of this website. If you have more insights, please feel free to contribute and share.
Continue with the installation of the previous hadoop.First, install zookooper1. Decompress zookoope
"Every 5-10 years, there's a rare product, a really special, very unusual product that's the most un
© 2024 shulou.com SLNews company. All rights reserved.