2025-04-14 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article introduces the installation and configuration of Hive2. Many people run into trouble with these steps in practice, so this walkthrough covers how to handle each of them. I hope you read it carefully and get something out of it!
I. Environment dependencies
Hive runs on top of Hadoop, so you need to install a Hadoop environment first:
http://my.oschina.net/u/204498/blog/519789
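Before installing Hive, it helps to confirm that the Hadoop environment is actually usable. A minimal sketch (assuming only that the `hadoop` binary lands on PATH once Hadoop is installed; it just reports a status either way):

```shell
# Sketch: confirm Hadoop is available before proceeding with Hive.
if command -v hadoop >/dev/null 2>&1; then
  HADOOP_STATUS="present ($(hadoop version 2>/dev/null | head -n 1))"
else
  HADOOP_STATUS="missing - install Hadoop first (see the link above)"
fi
echo "hadoop: $HADOOP_STATUS"
```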
II. Install Hive
1. Download Hive
http://apache.mirrors.ionfish.org/hive/
I installed apache-hive-1.2.1-bin.tar.gz.
[hadoop@hftclclw0001 ~]$ pwd
/home/hadoop
[hadoop@hftclclw0001 ~]$ wget http://apache.mirrors.ionfish.org/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz
[hadoop@hftclclw0001 ~]$ ll
total 637256
drwx------ 10 hadoop root      4096 Oct 27 02:22 apache-hive-1.2.1-bin
-rw-------  1 hadoop root  92834839 Jun 26 18:34 apache-hive-1.2.1-bin.tar.gz
drwx------  3 hadoop root      4096 Oct 27 09:05 data
drwx------ 11 hadoop root      4096 Oct 21 03:20 hadoop-2.7.1
-rw-------  1 hadoop root 210606807 Oct 20 09:00 hadoop-2.7.1.tar.gz
drwx------  2 hadoop root      4096 Oct 23 02:08 install-sqoop
drwx------ 13 hadoop root      4096 Oct 20 09:22 spark-1.5.1-bin-hadoop2.6
-rw-------  1 hadoop root 280901736 Oct 20 09:19 spark-1.5.1-bin-hadoop2.6.tgz
drwx------ 22 hadoop root      4096 Oct 23 02:08 sqoop-1.99.6-bin-hadoop200
-rw-------  1 hadoop root  68177818 May  5 22:34 sqoop-1.99.6-bin-hadoop200.tar.gz

[hadoop@hftclclw0001 ~]$ cd apache-hive-1.2.1-bin/conf/
[hadoop@hftclclw0001 conf]$ pwd
/home/hadoop/apache-hive-1.2.1-bin/conf
[hadoop@hftclclw0001 conf]$ vi hive-env.sh
HADOOP_HOME=/home/hadoop/hadoop-2.7.1                             => configure HADOOP_HOME
export HIVE_CONF_DIR=/home/hadoop/apache-hive-1.2.1-bin/conf      => configure HIVE_CONF_DIR
export HIVE_AUX_JARS_PATH=/home/hadoop/apache-hive-1.2.1-bin/lib/

# I used MySQL as the metastore, so the MySQL JDBC driver needs to be added to the lib directory
[hadoop@hftclclw0001 lib]$ pwd
/home/hadoop/apache-hive-1.2.1-bin/lib
[hadoop@hftclclw0001 lib]$ ll | grep mysql
-rw------- 1 hadoop root 848401 Oct 27 01:48 mysql-connector-java-5.1.25-bin.jar

[hadoop@hftclclw0001 conf]$ vi hive-site.xml
[hadoop@hftclclw0001 conf]$ cat hive-site.xml
<configuration>
  <property>
    <name>hive.metastore.local</name>
    <value>false</value>   <!-- the metastore MySQL is not on this server -->
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://{ip:port}/{databases}</value>   <!-- ip and port of the MySQL service -->
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>{username}</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>{password}</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive/warehouse</value>   <!-- hive warehouse directory; create it on HDFS and adjust its permissions -->
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://{ip}:{port}</value>   <!-- this machine's ip and port, used when starting the metastore service -->
  </property>
</configuration>

[hadoop@hftclclw0001 conf]$ vi hive-log4j.properties   => log4j settings; modify the log directory
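The warehouse directory named by hive.metastore.warehouse.dir has to be created on HDFS with group write permission before Hive can use it. A sketch of the commands, assuming a configured `hdfs` client (if the client is not on PATH, it just prints what it would do):

```shell
# Sketch: create the warehouse directory referenced by hive.metastore.warehouse.dir.
WAREHOUSE_DIR=/hive/warehouse
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -mkdir -p "$WAREHOUSE_DIR"
  hdfs dfs -chmod g+w "$WAREHOUSE_DIR"
else
  echo "would run: hdfs dfs -mkdir -p $WAREHOUSE_DIR && hdfs dfs -chmod g+w $WAREHOUSE_DIR"
fi
```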
2. Start the metastore
[hadoop@hftclclw0001 bin]$ pwd
/home/hadoop/apache-hive-1.2.1-bin/bin
[hadoop@hftclclw0001 bin]$ ./hive --service metastore &
[hadoop@hftclclw0001 bin]$ ps ax | grep metastore
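Besides grepping for the process, you can probe the metastore's Thrift port to verify the service is actually accepting connections. A sketch, assuming the Hive default port 9083 (use whatever port you put in hive.metastore.uris):

```shell
# Sketch: check whether the metastore thrift port is accepting connections.
METASTORE_PORT=9083   # Hive default; match the port in hive.metastore.uris
if nc -z localhost "$METASTORE_PORT" 2>/dev/null; then
  METASTORE_STATE="listening"
else
  METASTORE_STATE="not reachable"
fi
echo "metastore port $METASTORE_PORT: $METASTORE_STATE"
```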
3. Start HiveServer2
[hadoop@hftclclw0001 bin]$ pwd
/home/hadoop/apache-hive-1.2.1-bin/bin
[hadoop@hftclclw0001 bin]$ ./hive --service hiveserver2 &
[hadoop@hftclclw0001 bin]$ ps ax | grep HiveServer2
4. Start shell or beeline
[hadoop@hftclclw0001 bin]$ ./hive shell
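Unlike the hive shell, beeline talks to the HiveServer2 started above over JDBC. A sketch, assuming HiveServer2 listens on its default port 10000 and connecting as the hadoop user (adjust both to your setup; if beeline is not on PATH, it only prints the URL it would use):

```shell
# Sketch: connect to HiveServer2 with beeline (JDBC, default port 10000).
HS2_URL="jdbc:hive2://localhost:10000"
if command -v beeline >/dev/null 2>&1; then
  beeline -u "$HS2_URL" -n hadoop -e "show databases;"
else
  echo "beeline not on PATH; it ships in apache-hive-1.2.1-bin/bin ($HS2_URL)"
fi
```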
III. Metastore
http://www.cloudera.com/content/www/en-us/documentation/archive/cdh/4-x/4-2-0/CDH4-Installation-Guide/cdh5ig_topic_18_4.html
1. Built-in mode: metadata is stored in the embedded Derby database. This is the easiest setup, but Derby can only open one data file at a time, so only a single session is supported.
Driver ==> Metastore ==> Derby
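For comparison with the MySQL configuration above, built-in mode corresponds to hive-site.xml settings like the following (a sketch based on Hive's Derby defaults; metastore_db is the default embedded database name):

```xml
<!-- Sketch: built-in (embedded Derby) metastore settings, per Hive defaults -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
</property>
```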
2. Local mode: metadata is stored in a separate local database, such as MySQL.
Driver ==> Metastore
Driver ==> Metastore ==> DB
Driver ==> Metastore
Each server configures and starts its own metastore service.
3. Remote mode: clients access a shared metastore service over Thrift.
Client1
Client2 ==> Metastore ==> DB
Client3
4. Configuration:
As configured above, we started the metastore service on the hftclclw0001 machine. Now install hive on another server, for example hftclcld0001, with the same configuration as before, modifying only hive-site.xml:

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://{ip}:{port}</value>   <!-- ip and port of hftclclw0001: we access its metastore over thrift to reach hive's metadata -->
</property>

[root@hftclcld0001 apache-hive-1.2.1-bin]# pwd
/home/hadoop/apache-hive-1.2.1-bin
[root@hftclcld0001 apache-hive-1.2.1-bin]# ./bin/hive shell
hive> show databases;
OK
default
hive              => the metadata we created earlier, reached through hftclclw0001's metastore
human_resources
Time taken: 0.388 seconds, Fetched: 3 row(s)
hive>

This concludes "the method of installation and configuration of Hive2". Thank you for reading!