
How to Build a Hadoop 1.2.1 Cluster


This article shows how to build a Hadoop 1.2.1 cluster. The content is straightforward and clearly organized, and I hope it helps resolve your doubts. Let me walk you through "how to build a Hadoop 1.2.1 cluster".

Hadoop cluster setup (v1.2.1)

1. Download the installation package (version 1.2.1).

2. Install the dependencies: Java and SSH.

3. Extract the installation package and add the HADOOP_HOME variable to /etc/profile (for example, see the sketch below).
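A rough sketch of steps 1-3 on one machine (the /usr/local install path is an assumption; run the /etc/profile edits as root and adjust the paths to your environment):

$ tar -xzf hadoop-1.2.1.tar.gz -C /usr/local           # extract the installation package
$ echo 'export HADOOP_HOME=/usr/local/hadoop-1.2.1' >> /etc/profile
$ echo 'export PATH=$PATH:$HADOOP_HOME/bin' >> /etc/profile
$ source /etc/profile                                  # reload the variables in the current shell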

4. Set up the cluster

Machine planning (a small cluster of three machines)

Hostname   IP             Node       Tracker
master     192.168.10.1   NameNode   JobTracker
slave1     192.168.10.1   DataNode   TaskTracker
slave2     192.168.10.2   DataNode   TaskTracker
slave3     192.168.10.3   DataNode   TaskTracker

Create the same user, hadoop, on all three machines.
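A minimal sketch for creating that user (run as root; the exact commands vary by distribution):

$ useradd -m hadoop    # create the user with a home directory
$ passwd hadoop        # set its password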

Configure /etc/hosts on the three machines:

192.168.10.1 master slave1
192.168.10.2 slave2
192.168.10.3 slave3

Generate an SSH key on each of the three machines and set up password-free login:

$ ssh-keygen -t dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
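One way to get each public key onto the other hosts (assuming ssh-copy-id is available and the hadoop user already exists on every machine):

$ ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@slave2
$ ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@slave3
$ ssh hadoop@slave2    # should now log in without a password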

Copy the contents of the authorized_keys file into ~/.ssh/authorized_keys on the other two hosts.

Copy the Hadoop installation package to all three machines (a sketch for copying it follows the file listings below) and edit the configuration files under conf/:

conf/hadoop-env.sh

export JAVA_HOME=path-to-jdk

conf/core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/tmp/hadoop</value>
  </property>
</configuration>

conf/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

conf/mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://master:9001</value>
  </property>
</configuration>

conf/masters

master

conf/slaves

slave1
slave2
slave3
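Since the configuration is identical on every node, one option is to edit it once and then copy the whole directory to the other machines (the /usr/local/hadoop-1.2.1 path is an assumption carried over from the earlier sketch, and the hadoop user is assumed to have write access to the target directory):

$ scp -r /usr/local/hadoop-1.2.1 hadoop@slave2:/usr/local/
$ scp -r /usr/local/hadoop-1.2.1 hadoop@slave3:/usr/local/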

5. Start the Hadoop service

$ bin/hadoop namenode -format    # format HDFS
$ bin/start-all.sh               # start all daemons

Check the cluster status:

http://localhost:50030    # MapReduce web UI
http://localhost:50070    # HDFS web UI

or

$ hadoop dfsadmin -report
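As an extra sanity check (assuming the JDK's jps tool is on the PATH), you can list the Java daemons running on each node:

$ jps    # on the master expect NameNode, SecondaryNameNode and JobTracker; on the slaves, DataNode and TaskTracker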

6. Stop the Hadoop service

$ bin/stop-all.sh

That is everything in "how to build a Hadoop 1.2.1 cluster". Thank you for reading! I hope the content shared here is helpful; if you want to learn more, you are welcome to follow the industry information channel!
