How to achieve a fully distributed deployment of Hadoop + HBase + Hive


This article explains how to achieve a fully distributed deployment of Hadoop + HBase + Hive and should be a useful reference. Interested readers can follow along; I hope you learn a lot from it. Let's take a look.

It covers the fully distributed Hadoop + HBase + Hive deployment process and some problems encountered along the way.

NameNode: 192.168.229.132

DataNode: 192.168.229.133/192.168.229.134

Configure Hadoop

Passwordless SSH

First, the NameNode needs to reach each DataNode over SSH, so configure passwordless SSH.

Generate a public/private key pair on the NameNode:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

Put the public key in the /tmp directory and scp it to each DataNode:

$ cp ~/.ssh/id_dsa.pub /tmp

Run the following on both the NameNode and each DataNode to complete the configuration:

$ cat /tmp/id_dsa.pub >> ~/.ssh/authorized_keys
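
The scp of the public key to each DataNode is mentioned but not shown above. A minimal sketch, assuming the hadoop user seen later in the article and the DataNode IPs listed earlier:

$ scp /tmp/id_dsa.pub hadoop@192.168.229.133:/tmp
$ scp /tmp/id_dsa.pub hadoop@192.168.229.134:/tmp

Afterwards, this should log in without prompting for a password:

$ ssh hadoop@192.168.229.133 hostname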

Configuration files (the directories involved are created automatically)

conf/hadoop-env.sh

# The Java implementation to use. Required.
export JAVA_HOME=/usr/jdk1.6.0_25

conf/core-site.xml (if you want to use Hive, the IP in fs.default.name must be changed to the hostname, otherwise there will be Wrong FS errors)

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.229.132:9000</value>
  </property>
  <property>
    <name>hadoop.logfile.size</name>
    <value>10</value>
  </property>
</configuration>

conf/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/u01/app/data/dfs.name.dir</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/u01/app/data/dfs.data.dir</value>
  </property>
</configuration>

conf/mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.229.132:9001</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/u01/app/data/mapred.system.dir</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/u01/app/data/mapred.local.dir</value>
  </property>
</configuration>

Master-slave configuration

conf/masters

192.168.229.132

conf/slaves

192.168.229.133

192.168.229.134

After the NameNode is fully configured, synchronize the Hadoop installation directory to each DataNode with scp, then format HDFS and start the cluster from the NameNode.
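
A minimal sketch of those steps, assuming the /u01/app/hadoop installation directory and hadoop user used elsewhere in this article:

$ scp -r /u01/app/hadoop hadoop@192.168.229.133:/u01/app/
$ scp -r /u01/app/hadoop hadoop@192.168.229.134:/u01/app/

Then, on the NameNode only:

$ bin/hadoop namenode -format
$ bin/start-all.sh
$ bin/hadoop dfsadmin -report    # both DataNodes should be listed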

Configure HBase

$ vi /etc/hosts (HBase uses machine names, so every machine in the cluster must be listed in hosts)

127.0.0.1 localhost

192.168.229.132 ubuntu02

192.168.229.133 ubuntu03

192.168.229.134 ubuntu04

conf/hbase-env.sh

# The Java implementation to use. Java 1.6 required.
export JAVA_HOME=/usr/jdk1.6.0_25

# Extra Java CLASSPATH elements. Optional.
export HBASE_CLASSPATH=/u01/app/hadoop/conf

# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=true

conf/hbase-site.xml (hbase.rootdir must use the hostname; the other values can use IPs)

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://ubuntu02:9000/u01/app/data/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>hdfs://192.168.229.132:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.229.132,192.168.229.133,192.168.229.134</value>
  </property>
</configuration>

conf/regionservers (consistent with Hadoop's slaves file)

192.168.229.133

192.168.229.134

Synchronize the HBase installation directory to each DataNode with scp.
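
A minimal sketch, assuming HBase is installed in /u01/app/hbase (an assumption; the article only names the Hadoop and Hive directories):

$ scp -r /u01/app/hbase hadoop@192.168.229.133:/u01/app/
$ scp -r /u01/app/hbase hadoop@192.168.229.134:/u01/app/

With Hadoop already running, start HBase on the master and check the cluster from the shell:

$ bin/start-hbase.sh
$ bin/hbase shell
hbase(main):001:0> status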


Configure Hive

MySQL stores the Hive metadata (installing MySQL itself is not covered here).

When creating the database, you must use latin1 as the character set; otherwise you will get the error "Specified key was too long; max key length is 767 bytes".

mysql> create database hivedb default character set latin1;
mysql> create user 'hive'@'localhost' identified by 'hive';
mysql> grant all on hivedb.* to 'hive'@'localhost';
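
Before pointing Hive at the database, it is worth confirming that the grant works. A quick check, using the hive/hive credentials created above:

$ mysql -u hive -phive hivedb -e 'select 1;'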

Hive configuration (Hive only needs to be installed and configured on the master node)

bin/hive-config.sh (you can set these in your .profile instead and skip this step)

export JAVA_HOME=/usr/jdk1.6.0_25
export HIVE_HOME=/u01/app/hive
export HADOOP_HOME=/u01/app/hadoop

conf/hive-site.xml

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hivedb?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>

MySQL driver package

Download the mysql-connector-java-5.1.18-bin.jar file and put it in the $HIVE_HOME/lib directory
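
For example, assuming the jar has been downloaded to the current directory:

$ cp mysql-connector-java-5.1.18-bin.jar $HIVE_HOME/lib/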

Test

hadoop@ubuntu02:/u01/app/hive$ bin/hive

Logging initialized using configuration in jar:file:/u01/app/hive-0.8.1/lib/hive-common-0.8.1.jar!/hive-log4j.properties

Hive history file=/tmp/hadoop/hive_job_log_hadoop_201203201733_2122821776.txt

hive> show tables;
OK
tb
Time taken: 2.458 seconds
hive>
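
The tb table shown above already existed. On a fresh install, a minimal smoke test might look like this (t1 is a hypothetical table name):

hive> create table t1 (id int, name string);
hive> show tables;
hive> drop table t1;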

Thank you for reading this article carefully. I hope "How to achieve a fully distributed deployment of Hadoop + HBase + Hive" has been helpful to everyone.
