Hadoop+Hbase installation and configuration record


Production environment:

3 machines: master (192.168.0.61), slave1 (192.168.0.62), slave2 (192.168.0.63)

Note: hostname is set to master/slave1/slave2

Operating system: RHEL 5.4 x86_64

master serves as the namenode; slave1 and slave2 serve as datanodes

1. On master (do the same on slave1 and slave2):

# vi /etc/hosts

192.168.0.61 master

192.168.0.62 slave1

192.168.0.63 slave2
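
A quick sanity check that the names resolve (a sketch; run from any of the three machines):

# ping -c 1 slave1

# ping -c 1 slave2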

2. Do everything as root

3. Password-free login

# ssh-keygen -t rsa    # creates the ~/.ssh directory; press Enter at every prompt. Do this on each machine first.

On master

# scp ~/.ssh/id_rsa.pub root@slave1:/root/.ssh/id_rsa.pub_m    # copies master's public key to /root/.ssh/ on slave1

On slave1

# cat /root/.ssh/id_rsa.pub_m >> ~/.ssh/authorized_keys

# chmod 644 ~/.ssh/authorized_keys

Repeat these steps for slave2, completing password-free logins from master to slave1 and from master to slave2.

Now master can ssh directly to the slave nodes without a password. In addition, master must be able to ssh to itself when Hadoop starts.

So on master, under ~/.ssh, also execute: # cat id_rsa.pub >> authorized_keys
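
To confirm the password-free setup works (a sketch, using the hostnames above), each command should print the remote hostname without prompting for a password:

# ssh slave1 hostname

# ssh slave2 hostname

# ssh master hostname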

4. Install JDK to /usr/local and name it jdk6

Then:

Edit /etc/profile on all three machines and add the following:

export JAVA_HOME=/usr/local/jdk6

export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

export HADOOP_HOME=/hadoop/hadoop

export HBASE_HOME=/hadoop/hbase

PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin

# vi /root/.bashrc

Add:

export HADOOP_CONF_DIR=/hadoop/hadoop-config

export HBASE_CONF_DIR=/hadoop/hbase-config
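
To confirm the variables take effect (a sketch; run on each machine):

# source /etc/profile && source /root/.bashrc

# java -version           # should report the JDK under /usr/local/jdk6

# echo $HADOOP_CONF_DIR   # should print /hadoop/hadoop-config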

5. Firewall

Add the following iptables rule on each machine:

# iptables -I INPUT -s 192.168.0.0/255.255.255.0 -j ACCEPT

# service iptables save
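
To verify the rule is active (illustrative):

# iptables -L INPUT -n --line-numbers | head

The ACCEPT rule for 192.168.0.0/255.255.255.0 should appear at the top of the INPUT chain.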

-

Hadoop configuration:

1. Download and install

# cd /hadoop

# wget http://labs.renren.com/apache-mirror//hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz

# tar -zxvf hadoop-0.20.2.tar.gz

# ln -s hadoop-0.20.2 hadoop

With the layout above, Hadoop's configuration files sit inside the installation directory, so a future version upgrade would overwrite them all. It is better to keep the configuration separate from the installation: create a dedicated directory, /hadoop/hadoop-config/, copy the six files core-site.xml, slaves, hadoop-env.sh, masters, hdfs-site.xml and mapred-site.xml from /hadoop/hadoop/conf/ into it, and point the environment variable $HADOOP_CONF_DIR at that directory (set in /root/.bashrc above).

# mkdir /hadoop/hadoop-config

# cd /hadoop/hadoop/conf/

# cp core-site.xml slaves hadoop-env.sh masters hdfs-site.xml mapred-site.xml /hadoop/hadoop-config/

2. Modify the six files

masters:

master

slaves:

slave1

slave2

# do not create /hadoop/hadoop/tmp manually

hadoop-env.sh:

export JAVA_HOME=/usr/local/jdk6

export HADOOP_PID_DIR=/hadoop/hadoop/tmp

core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
</property>

# do not create /hadoop/hadoop/name manually (the namenode format step below creates it)

# mkdir /hadoop/hadoop/data

hdfs-site.xml:

<property>
  <name>dfs.name.dir</name>
  <value>/hadoop/hadoop/name/</value>  <!-- Hadoop's name directory path -->
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/hadoop/hadoop/data/</value>  <!-- Hadoop's data directory path -->
</property>

<property>
  <name>dfs.replication</name>
  <value>3</value>  <!-- number of copies kept of each HDFS block, for data redundancy; typically set to 3 in production -->
</property>

mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://master:54311/</value>
</property>
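
For reference, the properties above go inside each file's <configuration> element; a minimal core-site.xml under the settings above would look like this (a sketch):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>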

3. Format namenode

# cd /hadoop/hadoop/bin

# ./hadoop namenode -format

# cd /hadoop

# scp -r ./* root@slave1:/hadoop/

# scp -r ./* root@slave2:/hadoop/

Then, on slave1 and slave2, recreate the soft link:

# cd /hadoop

# rm -rf hadoop

# ln -s hadoop-0.20.2 hadoop

4. Start all hadoop daemons

# ./start-all.sh

Description:

There are several start/stop scripts under bin/; use whichever matches your needs:

* start-all.sh starts all the Hadoop daemons: NameNode, DataNode, JobTracker and TaskTracker

* stop-all.sh stops all the Hadoop daemons

* start-mapred.sh starts the Map/Reduce daemons: JobTracker and TaskTracker

* stop-mapred.sh stops the Map/Reduce daemons

* start-dfs.sh starts the Hadoop DFS daemons: NameNode and DataNode

* stop-dfs.sh stops the DFS daemons
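
For example, start-all.sh is equivalent to bringing the two layers up in order (run from /hadoop/hadoop/bin):

# ./start-dfs.sh

# ./start-mapred.sh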

[root@master bin]# jps

6813 NameNode

7278 Jps

7164 JobTracker

7047 SecondaryNameNode

If these four processes are present on master, the datanodes report no errors, and the web UIs are reachable, the installation can be considered successful.

Web UIs:

http://masterip:50070 - web UI for HDFS name node(s)

http://masterip:50030 - web UI for MapReduce job tracker(s)

http://slaveip:50060 - web UI for task tracker(s)
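
On the slaves, jps should show the DataNode and TaskTracker processes (PIDs below are illustrative):

[root@slave1 bin]# jps

2311 DataNode

2425 TaskTracker

2503 Jps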

5. Simple HDFS test

# cd /hadoop/hadoop/bin

# ./hadoop dfs -mkdir testdir

# ./hadoop dfs -put /root/install.log testdir/install.log-dfs

This stores /root/install.log in HDFS under testdir, renamed to install.log-dfs.

# ./hadoop dfs -ls

# ./hadoop dfs -ls testdir
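
To read the file back and confirm the round trip (a sketch; the cleanup line is optional):

# ./hadoop dfs -cat testdir/install.log-dfs | head

# ./hadoop dfs -rmr testdir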

-

1. Hbase installation and deployment

# cd /hadoop

# wget http://apache.etoak.com//hbase/hbase-0.20.6/hbase-0.20.6.tar.gz

# tar -zxvf hbase-0.20.6.tar.gz

# ln -s hbase-0.20.6 hbase

# mkdir hbase-config

# cd /hadoop/hbase/conf/

# cp hbase-env.sh hbase-site.xml regionservers /hadoop/hbase-config/

2. Configuration file modification

# mkdir /hadoop/hbase/tmp

# vim /hadoop/hbase-config/hbase-env.sh

Add:

export JAVA_HOME=/usr/local/jdk6

export HBASE_MANAGES_ZK=true    # let HBase manage its own ZooKeeper instance

export HBASE_PID_DIR=/hadoop/hbase/tmp

# vim hbase-site.xml

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:54310/hbase</value>  <!-- must match the host name and port of Hadoop's fs.default.name -->
</property>

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>master</value>
</property>

<property>
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
</property>

<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2222</value>
</property>

Configure the list of HBase region servers:

# vi regionservers

slave1

slave2

3. Copy the HBase files

On master

# cd /hadoop

# scp -r hbase-0.20.6 hbase-0.20.6.tar.gz hbase-config root@slave1:/hadoop/

# scp -r hbase-0.20.6 hbase-0.20.6.tar.gz hbase-config root@slave2:/hadoop/

Redo soft links on slave1 and slave2, respectively

# cd /hadoop

# ln -s hbase-0.20.6 hbase

4. Test

Start on master

# cd /hadoop/hbase/bin
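
If the HBase daemons are not running yet, start them first; with HBASE_MANAGES_ZK=true this also launches the ZooKeeper instance on master:

# ./start-hbase.sh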

# ./hbase shell

HBase Shell; enter 'help' for list of supported commands.

Version: 0.20.6, r965666, Mon Jul 19 16:54:48 PDT 2010

hbase(main):001:0> create 'test','data'

0 row(s) in 1.1920 seconds

hbase(main):002:0> list

test

1 row(s) in 0.0200 seconds

hbase(main):003:0> quit
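
A slightly fuller smoke test in the shell (a sketch; 'data' is the column family created above, 'row1' and the value are arbitrary, and the prompt counters are illustrative):

hbase(main):001:0> put 'test', 'row1', 'data:greeting', 'hello'

hbase(main):002:0> get 'test', 'row1'

hbase(main):003:0> scan 'test'

hbase(main):004:0> quit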

-

Summary:

During an attempted hadoop-0.21.0 + hbase-0.20.6 installation, an org.apache.hadoop.hbase.MasterNotRunningException occurred.

Using hadoop-0.20.2 + hbase-0.20.6 instead resolved the problem.
