
Hive Remote Mode Setup


I. Experimental environment

1. Software version: apache-hive-2.3.0-bin.tar.gz, mysql-community-server-5.7.19

2. mysql JDBC driver package: mysql-connector-java-5.1.44.tar.gz

3. MySQL is installed on hadoop5

4. Host planning

hadoop3: remote client
hadoop5: remote server; MySQL

II. Basic configuration

1. Unzip and move hive

[root@hadoop5 ~]# tar -zxf apache-hive-2.3.0-bin.tar.gz
[root@hadoop5 ~]# cp -r apache-hive-2.3.0-bin /usr/local/hive

2. Modify environment variables

[root@hadoop5 ~]# vim /etc/profile
export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$PATH
[root@hadoop5 ~]# source /etc/profile
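To confirm the variables took effect, a quick optional check (the expected output assumes the paths configured above):

[root@hadoop5 ~]# echo $HIVE_HOME
/usr/local/hive
[root@hadoop5 ~]# which hive
/usr/local/hive/bin/hive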

3. Copy the template configuration files

[root@hadoop5 ~]# cd /usr/local/hive/conf/
[root@hadoop5 conf]# cp hive-env.sh.template hive-env.sh
[root@hadoop5 conf]# cp hive-default.xml.template hive-site.xml
[root@hadoop5 conf]# cp hive-log4j2.properties.template hive-log4j2.properties
[root@hadoop5 conf]# cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties

4. Modify hive-env.sh file

[root@hadoop5 conf]# vim hive-env.sh
# Add the following at the end:
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HIVE_HOME=/usr/local/hive
export HIVE_CONF_DIR=/usr/local/hive/conf

5. Copy the MySQL JDBC driver package

[root@hadoop5 ~]# tar -zxf mysql-connector-java-5.1.44.tar.gz
[root@hadoop5 ~]# cp mysql-connector-java-5.1.44/mysql-connector-java-5.1.44-bin.jar /usr/local/hive/lib/
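Optionally, confirm the driver jar is now in Hive's lib directory:

[root@hadoop5 ~]# ls /usr/local/hive/lib/ | grep mysql-connector
mysql-connector-java-5.1.44-bin.jar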

6. Create directories in HDFS and grant permissions for storing files

hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /user/hive/tmp
hdfs dfs -mkdir -p /user/hive/log
hdfs dfs -chmod -R 777 /user/hive/warehouse
hdfs dfs -chmod -R 777 /user/hive/tmp
hdfs dfs -chmod -R 777 /user/hive/log
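Optionally, verify the directories and their permissions:

[root@hadoop5 ~]# hdfs dfs -ls /user/hive

The listing should show warehouse, tmp and log, each with drwxrwxrwx permissions.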

7. Create the metastore database and user in MySQL

mysql> create database metastore;
Query OK, 1 row affected (0.03 sec)
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.26 sec)
mysql> grant all on metastore.* to hive@'%' identified by 'hive123456';
Query OK, 0 rows affected, 1 warning (0.03 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
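As an optional sanity check, verify that the hive account can reach the metastore database from another node (this assumes the MySQL client is installed on hadoop3):

[root@hadoop3 ~]# mysql -h hadoop5 -u hive -phive123456 -e "show databases;"

The metastore database should appear in the output.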

8. Use scp to copy Hive to hadoop3

[root@hadoop5 ~]# scp -r /usr/local/hive root@hadoop3:/usr/local/
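Note that scp only copies the Hive files; if /etc/profile on hadoop3 has not been updated yet, add the same environment variables from step 2 there as well:

[root@hadoop3 ~]# vim /etc/profile
export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$PATH
[root@hadoop3 ~]# source /etc/profile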

III. Modify the configuration file

1. Configuration of server-side hive-site.xml

<property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive/tmp</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/user/hive/log</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop5:3306/metastore?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive123456</value>
</property>

Note: the "&" characters in the JDBC URL must be escaped as &amp; inside the XML value, as shown above.

2. Client hive-site.xml configuration

<property>
    <name>hive.metastore.uris</name>
    <value>thrift://hadoop5:9083</value>
</property>
<property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive/tmp</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/user/hive/log</value>
</property>
<property>
    <name>hive.metastore.local</name>
    <value>false</value>
</property>

IV. Start Hive (two ways)

Initialize the metastore schema first (run this on the server, hadoop5):

schematool -dbType mysql -initSchema
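If initialization succeeds, the metastore database on hadoop5 is populated with Hive's schema tables (DBS, TBLS, VERSION, and so on); an optional check:

[root@hadoop5 ~]# mysql -u hive -phive123456 metastore -e "show tables;"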

1. Direct start

Server side (hadoop5):

[root@hadoop5 ~]# hive --service metastore
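The command above runs the metastore in the foreground. To keep it running in the background and confirm it is listening on the default port 9083 (the port used in the client's hive.metastore.uris), something like the following works:

[root@hadoop5 ~]# nohup hive --service metastore &
[root@hadoop5 ~]# netstat -nptl | grep 9083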

Client side (hadoop3):

[root@hadoop3 ~]# hive
hive> show databases;
OK
default
Time taken: 1.599 seconds, Fetched: 1 row(s)
hive> quit;
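To confirm that metadata really lands in the remote MySQL metastore, you can create a throwaway database from the client and look it up on the server (test_db is just an example name):

[root@hadoop3 ~]# hive
hive> create database test_db;
hive> quit;

[root@hadoop5 ~]# mysql -u hive -phive123456 metastore -e "select NAME, DB_LOCATION_URI from DBS;"

Both default and test_db should be listed, with HDFS locations under /user/hive/warehouse.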

2. Beeline approach

You need to add the following configuration to Hadoop's core-site.xml first

<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
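For the proxy-user settings to take effect, HDFS and YARN generally need to be restarted, or the configuration refreshed on a running cluster, for example:

[root@hadoop5 ~]# hdfs dfsadmin -refreshSuperUserGroupsConfiguration
[root@hadoop5 ~]# yarn rmadmin -refreshSuperUserGroupsConfiguration

(Run these where they can reach the NameNode and ResourceManager; the hadoop5 prompt here is just an example.)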

Server side (hadoop5):

[root@hadoop5 ~]# nohup hiveserver2 &
[root@hadoop5 ~]# netstat -nptl | grep 10000
tcp        0      0 0.0.0.0:10000      0.0.0.0:*      LISTEN      3464/java

Client side (hadoop3):

[root@hadoop3 ~]# beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://hadoop5:10000 hive hive123456
Connecting to jdbc:hive2://hadoop5:10000
17/09/21 09:47:31 INFO jdbc.Utils: Supplied authorities: hadoop5:10000
17/09/21 09:47:31 INFO jdbc.Utils: Resolved authority: hadoop5:10000
17/09/21 09:47:31 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://hadoop5:10000
Connected to: Apache Hive (version 2.3.0)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hadoop5:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (2.258 seconds)
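Beeline can also connect non-interactively, which is handy for scripting; a minimal example using the same account:

[root@hadoop3 ~]# beeline -u jdbc:hive2://hadoop5:10000 -n hive -p hive123456 -e "show databases;"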
