Hadoop installation configuration

2025-02-28 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

I. Installation environment

Hardware: virtual machine

Operating system: CentOS 6.4, 64-bit

IP: 192.168.1.100

Hostname: admin

Installation user: root

II. Install JDK

Install JDK 1.7 or above. JDK 1.7.0_79 is installed here.

Download address: http://www.oracle.com/technetwork/java/javase/downloads/index.html

1. Download jdk-7u79-linux-x64.gz and extract it to /usr/java/jdk1.7.0_79.

2. Add the following to /root/.bash_profile:

export JAVA_HOME=/usr/java/jdk1.7.0_79

export PATH=$JAVA_HOME/bin:$PATH

3. Make the environment variables take effect: # source ~/.bash_profile

4. Verify the installation: # java -version

java version "1.7.0_79"

Java(TM) SE Runtime Environment (build 1.7.0_79-b15)

Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

III. Configure passwordless SSH login

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Verify SSH: # ssh localhost

You do not need to enter a password to log in.

If you are setting up a cluster, refer to http://blog.csdn.net/se7en_q/article/details/47671425
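The two key commands above can be sketched end to end. The sketch below is illustrative only: it uses an RSA key and a throwaway directory in place of ~/.ssh (the article uses a DSA key, which recent OpenSSH releases no longer accept by default):

```shell
# Illustrative sketch: generate a passphrase-less key pair and authorize it.
# A temp dir stands in for ~/.ssh; the key type differs from the article's DSA.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$tmpdir/id_rsa" -q
cat "$tmpdir/id_rsa.pub" >> "$tmpdir/authorized_keys"
chmod 600 "$tmpdir/authorized_keys"   # sshd refuses group/world-writable key files
```

For a real single-node setup, ~/.ssh replaces $tmpdir; for a cluster, the public key must be appended to authorized_keys on every node (e.g. with ssh-copy-id).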

IV. Install Hadoop 2.7

1. Download Hadoop 2.7.1

Download address: http://mirrors.hust.edu.cn/apache/hadoop/common/stable2/hadoop-2.7.1.tar.gz

2. Extract and install

1) Copy hadoop-2.7.1.tar.gz to the /usr/hadoop directory

Then extract it with # tar -xzvf hadoop-2.7.1.tar.gz. The extracted directory is /usr/hadoop/hadoop-2.7.1.

2) Under the /usr/hadoop/ directory, create the tmp, hdfs/name, and hdfs/data directories:

# mkdir /usr/hadoop/tmp

# mkdir /usr/hadoop/hdfs

# mkdir /usr/hadoop/hdfs/data

# mkdir /usr/hadoop/hdfs/name
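The four mkdir calls can be collapsed into one with mkdir -p, which creates parent directories as needed. HADOOP_BASE below is a stand-in for the article's /usr/hadoop so the sketch does not require root:

```shell
# Equivalent one-liner; the article uses /usr/hadoop as the base directory.
HADOOP_BASE=${HADOOP_BASE:-$(mktemp -d)}
mkdir -p "$HADOOP_BASE/tmp" "$HADOOP_BASE/hdfs/name" "$HADOOP_BASE/hdfs/data"
```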

3) Set the environment variables: # vi ~/.bash_profile

# set hadoop path

export HADOOP_HOME=/usr/hadoop/hadoop-2.7.1

export PATH=$PATH:$HADOOP_HOME/bin

4) Make the environment variables take effect: $ source ~/.bash_profile
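A quick sanity check that sourcing worked, assuming the paths used above:

```shell
# Re-export the variables from ~/.bash_profile and confirm PATH picked them up.
export HADOOP_HOME=/usr/hadoop/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH ok" ;;
  *) echo "PATH missing $HADOOP_HOME/bin" ;;
esac
```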

3. Hadoop configuration

Enter the $HADOOP_HOME/etc/hadoop directory and configure hadoop-env.sh and the other files. The configuration files involved are:

hadoop-2.7.1/etc/hadoop/hadoop-env.sh

hadoop-2.7.1/etc/hadoop/yarn-env.sh

hadoop-2.7.1/etc/hadoop/core-site.xml

hadoop-2.7.1/etc/hadoop/hdfs-site.xml

hadoop-2.7.1/etc/hadoop/mapred-site.xml

hadoop-2.7.1/etc/hadoop/yarn-site.xml

1) Configure hadoop-env.sh

# The java implementation to use.
# export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/java/jdk1.7.0_79

2) Configure yarn-env.sh

# export JAVA_HOME=/home/y/libexec/jdk1.7.0/
export JAVA_HOME=/usr/java/jdk1.7.0_79

3) Configure core-site.xml

Add the following configuration:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>URI of HDFS: hdfs://namenode-host:port</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/tmp</value>
    <description>Local Hadoop temporary folder on the namenode</description>
  </property>
</configuration>

4) Configure hdfs-site.xml

Add the following configuration:

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/hadoop/hdfs/name</value>
    <description>Where the namenode stores the HDFS namespace metadata</description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/hadoop/hdfs/data</value>
    <description>Physical storage location of data blocks on the datanode</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Number of replicas; the default is 3, and it should not exceed the number of datanode machines</description>
  </property>
</configuration>

5) Configure mapred-site.xml

This file does not exist by default in Hadoop 2.7; create it from the shipped template first: # cp mapred-site.xml.template mapred-site.xml. Then add the following configuration:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

6) Configure yarn-site.xml

Add the following configuration:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.1.100:8099</value>
  </property>
</configuration>

4. Start Hadoop

1) Format the namenode

$ bin/hdfs namenode -format

2) start the NameNode and DataNode daemons

$ sbin/start-dfs.sh

3) start the ResourceManager and NodeManager daemons

$ sbin/start-yarn.sh

5. Startup verification

1) Run the jps command; if the following processes appear, Hadoop started normally:

# jps

6097 NodeManager

11044 Jps

7497 -- process information unavailable

8256 Worker

5999 ResourceManager

5122 SecondaryNameNode

8106 Master

4836 NameNode

4957 DataNode
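This check can be scripted. In the sketch below the listing is the sample output pasted from above; on a live machine you would capture it with out=$(jps) instead:

```shell
# Verify that the five Hadoop daemons appear in a jps listing.
# Sample listing taken from the article; replace with: out=$(jps)
out="6097 NodeManager
5999 ResourceManager
5122 SecondaryNameNode
4836 NameNode
4957 DataNode"
missing=0
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  echo "$out" | grep -qw "$d" || { echo "missing: $d"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all daemons up"
```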
