2025-02-27 Update From: SLTechnology News&Howtos
Pre-installation notes:
A working Hadoop cluster is a prerequisite for installing Hive. Hive only needs to be installed on the Hadoop NameNode host(s), not on the DataNode machines. Note that although editing the configuration files does not require Hadoop to be running, this article uses Hadoop commands (to create HDFS directories), so Hadoop must be running normally before you execute them; starting Hive also requires a running Hadoop cluster. It is therefore recommended that you get Hadoop running first.
Prerequisites:
This article assumes you have already installed Hadoop successfully. If not, please refer to the companion post "Installing distributed Hadoop 3.1.1 under CentOS".
It also assumes you have already installed and configured the MySQL database. If not, please refer to the companion post "MySQL Database installation and configuration".
I) Installation environment
CentOS 7.5
JDK 1.8.0_181
Hadoop 3.1.1
Hive 3.1.0
II) Download Hive
Download address: http://www.apache.org/dyn/closer.cgi/hive/
Open the URL, click the link shown in Figure 1, select the Hive version (Hive 3.1.0 in this example, Figure 2), and download the binary package (Figure 3):
Figure 1
Figure 2
Figure 3
III) Installation
3.1 Unzip the package to a target directory; here it is extracted to /usr/local (the recommended location for user-installed software):
# tar -zxvf apache-hive-3.1.0-bin.tar.gz -C /usr/local
3.2 Set the environment variables: edit /etc/profile and add the entries shown in the red box below:
After saving, run # source /etc/profile to make the changes take effect.
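The screenshot of the /etc/profile additions is not reproduced in the text; assuming the /usr/local extraction path from step 3.1, a typical addition looks like this:

```shell
# Hive environment variables (path assumes the /usr/local extraction from step 3.1)
export HIVE_HOME=/usr/local/apache-hive-3.1.0-bin
export PATH=$PATH:$HIVE_HOME/bin
```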
3.3 Create three new HDFS directories referenced by the hive-site.xml configuration
Enter the bin directory of Hadoop and run the following commands:
# ./hadoop fs -mkdir -p /var/hive/warehouse
# ./hadoop fs -mkdir -p /var/hive/tmp
# ./hadoop fs -mkdir -p /tmp/hive
Modify the permissions of the three directories:
# ./hadoop fs -chmod 777 /var/hive/warehouse
# ./hadoop fs -chmod 777 /var/hive/tmp
# ./hadoop fs -chmod 777 /tmp/hive
Once created, run # ./hadoop fs -ls /var/hive/ to check that the directories were created successfully.
3.4 Editing the hive-site.xml file
3.4.1 create a new hive-site.xml file
Go to the /usr/local/apache-hive-3.1.0-bin/conf directory, make a copy of the hive-default.xml.template file, and name the copy hive-site.xml.
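The copy can be made with cp, for example:

```shell
cd /usr/local/apache-hive-3.1.0-bin/conf
cp hive-default.xml.template hive-site.xml
```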
3.4.2 modify the hive-site.xml file
3.4.2.1 Change the value of the property whose name is hive.metastore.warehouse.dir, as follows:
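The original figure is missing; after the edit the property would typically look like this, matching the /var/hive/warehouse HDFS directory created in step 3.3:

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/var/hive/warehouse</value>
</property>
```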
3.4.2.2 Change the value of the property whose name is hive.exec.scratchdir, as follows:
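Likewise, after the edit this property would typically read as follows, matching the /var/hive/tmp directory created in step 3.3:

```xml
<property>
  <name>hive.exec.scratchdir</name>
  <value>/var/hive/tmp</value>
</property>
```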
3.4.2.3 In all value tags in the hive-site.xml file, replace "${system:java.io.tmpdir}" with "/var/hive/tmp", as in the following example:
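For instance, a property such as hive.exec.local.scratchdir, whose default value starts with ${system:java.io.tmpdir}, would read as follows after this replacement (and before step 3.4.2.4 is applied):

```xml
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/var/hive/tmp/${system:user.name}</value>
</property>
```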
3.4.2.4 In all value tags in the hive-site.xml file, replace "${system:user.name}" with "root", as in the following example:
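If you prefer not to edit the many occurrences by hand, the two bulk replacements in steps 3.4.2.3 and 3.4.2.4 can be done with sed (a sketch, assuming GNU sed and that hive-site.xml is in the current directory):

```shell
# Replace every ${system:java.io.tmpdir} with /var/hive/tmp,
# and every ${system:user.name} with root, editing the file in place
sed -i 's#${system:java.io.tmpdir}#/var/hive/tmp#g' hive-site.xml
sed -i 's#${system:user.name}#root#g' hive-site.xml
```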
3.4.2.5 Configure the Hive metastore database, using MySQL as an example:
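The configuration figure is missing from the text; the four metastore connection properties in hive-site.xml are typically set as below. The host, database name, user, and password are placeholders to adapt to your environment; the driver class shown is for Connector/J 8.x (use com.mysql.jdbc.Driver for the 5.1 series):

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.cj.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>YourPassword</value>
</property>
```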
3.4.2.6 Copy the MySQL JDBC driver package into Hive's lib directory
Download address of the MySQL driver package: https://dev.mysql.com/downloads/connector/j/
Pay attention to the mapping between driver version and MySQL version; see:
https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-versions.html
https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-versions.html
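Once downloaded, copying the driver jar into place is a single cp; the jar filename below is an example and depends on the driver version you downloaded:

```shell
cp mysql-connector-java-8.0.13.jar /usr/local/apache-hive-3.1.0-bin/lib/
```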
3.4.2.7 Create and edit the hive-env.sh file
Go to the conf directory of Hive, copy the hive-env.sh.template file to hive-env.sh, and add the following:
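The additions themselves are not reproduced in the text; assuming Hadoop is installed under /usr/local/hadoop-3.1.1 (adjust to your actual path), typical hive-env.sh entries are:

```shell
# Paths below are assumptions based on this article's install locations; adjust as needed
export HADOOP_HOME=/usr/local/hadoop-3.1.1
export HIVE_CONF_DIR=/usr/local/apache-hive-3.1.0-bin/conf
export HIVE_AUX_JARS_PATH=/usr/local/apache-hive-3.1.0-bin/lib
```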
IV) Start and test
4.1 Start
Enter the bin directory of Hive and execute:
# ./schematool -initSchema -dbType mysql // run this command to initialize the metastore DB
# ./hive
4.2 Test
You can execute the following commands to test:
# show functions; // view the supported functions
# create database DBName; // create a database
# use DBName; // switch to the database
# create table TableName (id int, name string) row format delimited fields terminated by '\t'; // create a table with id and name columns, with fields separated by a Tab
# load data local inpath 'File' into table DBName.TableName; // load the data from the file File into the created table
The data of the file is as follows:
001 zhangsan
002 lisi
003 wangwu
004 zhaoliu
005 chenqi
Note: id and name must be separated by a TAB character, not spaces, because the table was created with terminated by '\t'; if copy-pasting causes problems, type the TAB manually. There must also be no blank lines between rows, otherwise the load below will store NULL rows in the table. The file must be in Unix format: if you edit it with a text editor on Windows and upload it to the server, convert it from Windows to Unix line endings first, for example with Notepad++.
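If Notepad++ is not at hand, one way to strip the Windows carriage returns on the server itself (assuming GNU sed; the filename data.txt is an example):

```shell
# Remove the trailing \r from each line, converting CRLF to LF in place
sed -i 's/\r$//' data.txt
```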
# select * from TableName; // run in the hive command-line window to view the data in the table
Finally, you can also see the created database and table in MySQL, in the DBS and TBLS tables of the metastore database, respectively.
You can also browse the data through the NameNode web UI: http://NameNodeIP:9870/explorer.html#/var/hive/warehouse/DBName.db (substitute your NameNode's IP, the warehouse path you configured, and the database name you created; note that Hadoop 3.x serves the NameNode web UI on port 9870 by default, whereas Hadoop 2.x used 50070).
Appendix:
You can also refer to https://blog.csdn.net/pucao_cug/article/details/71773665 for installing Hive.
For a quick overview of Hive's core concepts, see https://blog.csdn.net/freefish_yzx/article/details/77150248.
For a quick start with the Hive data warehouse, see https://www.yiibai.com/hive/hive_partitioning.html.
For the concept and use of Hive partitioning, see https://blog.csdn.net/qq_36743482/article/details/78418343.