Because hive runs on top of hadoop, a working hadoop platform must be in place first:
Hadoop distributed Cluster Construction: https://blog.51cto.com/14048416/2341491
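Before installing hive, it is worth confirming that the cluster from the guide above is actually running. A quick sanity check, assuming a standard hadoop installation with its bin directory on the PATH:

    jps                     # the NameNode, DataNode and ResourceManager daemons should be listed
    hdfs dfsadmin -report   # HDFS should report its live datanodes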
1. Embedded Derby version:
Installation steps:
Upload the installation package apache-hive-2.3.2-bin.tar.gz, then:

    tar -zxvf apache-hive-2.3.2-bin.tar.gz -C /application   # decompress the package
    cd /application/apache-hive-2.3.2-bin/bin                # enter the bin directory
    ./schematool -dbType derby -initSchema                   # initialize the metastore
    ./hive                                                   # run the hive script

Finally, test by entering hive:

    hive> show tables;
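As a slightly fuller smoke test (the table name t_test here is only an illustration, not part of the original steps):

    hive> create table t_test(id int);
    hive> show tables;    -- t_test should now appear in the list
    hive> quit;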
Common errors during installation:
Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000): this means metastore_db already exists. Delete the metastore_db in the directory from which hive was started, then re-initialize the metadata.

Terminal initialization failed; falling back to unsupported: this is because the jline-0.9.94.jar package shipped with the hadoop cluster (/root/apps/hadoop-2.6.5/share/hadoop/yarn/lib) is too old. Replace it with the jline-2.12.jar package from hive/lib. Remember: it has to be replaced on all hdfs nodes, as sketched below.
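Because the jline jar must be replaced on every node, a small loop saves some typing. A sketch only, assuming the hostnames hadoop01..hadoop03 and the hive unpack location /application from the steps above:

    # drop the old jline on each node and push the newer one from hive/lib
    for node in hadoop01 hadoop02 hadoop03; do
      ssh $node "rm -f /root/apps/hadoop-2.6.5/share/hadoop/yarn/lib/jline-0.9.94.jar"
      scp /application/apache-hive-2.3.2-bin/lib/jline-2.12.jar \
          $node:/root/apps/hadoop-2.6.5/share/hadoop/yarn/lib/
    done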
Note: when hive is deployed with embedded Derby, a derby.log file and a metastore_db directory (the metadata) are created in whatever working directory you launch hive from. The metadata is tied to that directory: if you start hive from a different directory next time, you will not see the previously created tables or data. This is the main drawback of deploying hive with Derby.
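The problem is easy to reproduce. A sketch, with /home/hadoop/dir1 and /home/hadoop/dir2 as hypothetical working directories:

    cd /home/hadoop/dir1 && hive      # creates dir1/metastore_db
    hive> create table t1(id int);
    hive> quit;
    cd /home/hadoop/dir2 && hive      # creates a fresh, empty dir2/metastore_db
    hive> show tables;                # t1 is not listed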
2. External MySQL version:
First of all, a running MySQL service is required; MySQL can be configured on any node of the cluster.
Installation steps:
Modify hive's configuration file hive-site.xml:

    <configuration>
      <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://hadoop02:3306/hivedb?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
        <description>username to use against metastore database</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>root</value>
        <description>password to use against metastore database</description>
      </property>
      <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
      </property>
    </configuration>

Put the MySQL driver package (mysql-connector-java-5.1.40-bin.jar) into hive/lib.

Configure the environment variables for hive:

    export HIVE_HOME=/home/hadoop/apps/apache-hive-1.2.1-bin
    export PATH=$PATH:$HIVE_HOME/bin

Verify the installation:

    hive --help

Initialize the metadata:

    schematool -dbType mysql -initSchema

Start the hive client:

    hive
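Before running schematool, the account in hive-site.xml must be allowed to reach MySQL from the hive node. A minimal sketch, assuming a MySQL 5.x server on hadoop02 and the root/root credentials configured above:

    mysql -u root -p <<'EOF'
    -- let the configured account connect remotely to the metastore database
    GRANT ALL PRIVILEGES ON hivedb.* TO 'root'@'%' IDENTIFIED BY 'root';
    FLUSH PRIVILEGES;
    EOF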
Note: once hive is configured and the metadata schema initialized, 57 metadata tables are generated in MySQL, among them (a query sketch follows this list):
DBS: stores the metadata of databases (libraries)
TBLS: stores the metadata of tables
COLUMNS_V2: stores the metadata of columns (fields)
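These tables can be inspected directly in MySQL; a sketch, assuming the hivedb database and credentials configured above:

    mysql -u root -p hivedb -e "
      SELECT d.NAME AS db, t.TBL_NAME AS tbl
      FROM TBLS t JOIN DBS d ON t.DB_ID = d.DB_ID;"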
3. Connecting with a hive client: either use the hive command directly and work in its shell, or launch hiveserver2 as a background service and connect with beeline:

    nohup hiveserver2 1>/var/log/hiveserver.log 2>/var/log/hiveserver.err &

    [hadoop@hadoop01 ~]$ beeline                       # start the beeline client
    beeline> !connect jdbc:hive2://hadoop01:10000      # connect to the hive repository

or:

    [hadoop@hadoop01 ~]$ beeline -u jdbc:hive2://hadoop01:10000 -n hadoop
But beeline connections commonly fail with an error like the following:

    Connecting to jdbc:hive2://hadoop01:10000
    Enter username for jdbc:hive2://hadoop01:10000: hadoop
    Enter password for jdbc:hive2://hadoop01:10000: ******
    18/10/15 16:30:37 [main]: WARN jdbc.HiveConnection: Failed to connect to hadoop01:10000
    Error: Could not open client transport with JDBC Uri: jdbc:hive2://hadoop01:10000: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hadoop is not allowed to impersonate hadoop (state=08S01,code=0)
This error is not raised by hive itself but by the hadoop cluster underneath it: the permission management of hdfs, on which hive depends, rejects the connection, so proxy-user authorization must be configured.
Solution:
Stop the cluster:

    stop-dfs.sh && stop-yarn.sh

Modify hadoop's hdfs-site.xml configuration file, adding:

    <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
    </property>

Modify hadoop's core-site.xml file, adding:

    <property>
      <name>hadoop.proxyuser.hadoop.hosts</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.hadoop.groups</name>
      <value>*</value>
    </property>

Restart the hadoop cluster, restart hive's server, and then connect with beeline again, as shown in the sketch below.
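Putting the fix end to end, a minimal sketch of the restart-and-retry sequence (hostnames and log paths as used above):

    start-dfs.sh && start-yarn.sh                                             # bring the cluster back up
    nohup hiveserver2 1>/var/log/hiveserver.log 2>/var/log/hiveserver.err &   # restart hive's server
    beeline -u jdbc:hive2://hadoop01:10000 -n hadoop                          # should now open a session cleanly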
4. User interface operations

hive commands fall into two broad categories:
- Actions after entering the hive client:
  - query: ordinary HQL statements (queries plus DDL operations on databases and tables)
  - ! <linux command>: execute a Linux command from inside the hive client (only some commands are supported)
  - dfs <command>: execute hadoop-related commands from inside the hive client
hive> !hadoop fs -ls /;   (executed as a Linux command: slow)
hive> dfs -ls /;          (executed in the hive client's own jvm process: fast)
Common actions are:
hive> quit;               # exit the client
hive> set key=value;      # set a hive parameter
hive> add jar xxx.jar;    # temporarily add a jar package to hive's classpath
hive> add file xxx;       # add a file to hive
hive> list jars;          # view the added jars
hive> source file;        # execute a script (the script is stored on linux)
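For example, source makes it easy to replay a saved script; a sketch, using a hypothetical file /home/hadoop/test.hql:

    [hadoop@hadoop01 ~]$ echo 'show databases;' > /home/hadoop/test.hql
    [hadoop@hadoop01 ~]$ hive
    hive> source /home/hadoop/test.hql;    -- runs every statement in the file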
- Actions before entering the hive client:
[hadoop@hadoop01 ~]$ hive -e 'hql'               # execute a hive statement from linux
[hadoop@hadoop01 ~]$ hive -f script.hql          # execute an hql script stored on linux
[hadoop@hadoop01 ~]$ hive --hiveconf key=value   # enter the hive client with a parameter initialized (only one can be set this way)
[hadoop@hadoop01 ~]$ hive -i parameter-file      # enter hive after executing all parameter settings in the file
[hadoop@hadoop01 ~]$ hive -v                     # verbose: output is echoed to the console
[hadoop@hadoop01 ~]$ hive -S                     # silent: print no logs, often used together with -e
[hadoop@hadoop01 ~]$ hive -S -e 'hql' >> file    # write the query results to a Linux file
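As a concrete example of -i, a sketch using a hypothetical init file whose settings (print column headers, prefer local mode) are only illustrative:

    [hadoop@hadoop01 ~]$ cat > /home/hadoop/hive-init.hql <<'EOF'
    set hive.cli.print.header=true;
    set hive.exec.mode.local.auto=true;
    EOF
    [hadoop@hadoop01 ~]$ hive -i /home/hadoop/hive-init.hql   # client starts with both settings applied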