

Simple installation and deployment process for Hadoop


To run some experiments, I installed a virtual machine on my laptop: CentOS 6.2, with JDK 1.7 and hadoop-1.0.1.

For simplicity, I deployed in pseudo-distributed mode, i.e., a single node that acts as both Master and Slave: it runs the NameNode and DataNode, and the JobTracker and TaskTracker.

Overall deployment description:

Pseudo-distributed deployment is fairly simple; only four configuration files need to be edited:

1. hadoop-env.sh //specifies the location of the JDK

2. core-site.xml //core configuration; specifies the HDFS address and port

3. hdfs-site.xml //HDFS configuration; sets the number of replicas (the default is 3; pseudo-distributed needs 1)

4. mapred-site.xml //configures the JobTracker address and port

After configuring the above files, there are still two steps left:

1. Format the HDFS file system

2. Start and verify

Now for the actual steps:

1. Configure hadoop-env.sh

Because I had forgotten where the JDK was installed, I looked it up with java -verbose and found /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/jre

So write the following line in hadoop-env.sh (substitute the path you find on your own system):

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/jre
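If you are not sure where your JDK lives, one quick way to find it (a sketch, assuming the usual which and readlink tools are available, as on most Linux systems) is to resolve the java symlink:

[root@wjz ~]# readlink -f $(which java) //prints the real path of the java binary, e.g. .../jre/bin/java

Drop the trailing /bin/java from the output to get the directory to use for JAVA_HOME.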

2. Configure core-site.xml

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>

//Note: the file's surrounding <configuration> element is already there; only the <property> block above needs to be typed in.
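As an optional extra not in the original setup: many pseudo-distributed guides also set hadoop.tmp.dir in core-site.xml, because HDFS data otherwise lives under /tmp and may be wiped on reboot. A sketch, with a hypothetical path:

<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value> <!-- hypothetical path; any persistent directory works -->
</property>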

3. Configure hdfs-site.xml

<property>
<name>dfs.replication</name>
<value>1</value>
</property>
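Once the cluster is running (after the start step below), one way to confirm that this replication setting took effect is the standard fsck tool (a sketch; the output wording may vary slightly by version):

[root@wjz bin]# ./hadoop fsck / //the summary should report a default replication factor of 1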

4. Configure mapred-site.xml

<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
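Before moving on, a quick syntax check can catch XML typos (a sketch, assuming xmllint from libxml2 is installed; run it in Hadoop's conf directory):

[root@wjz conf]# xmllint --noout core-site.xml hdfs-site.xml mapred-site.xml //no output means all three files are well-formed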

Now for the last two steps:

1. Format HDFS

[root@wjz hadoop]# cd /usr/local/hadoop/bin //Enter the bin directory of hadoop executables

[root@wjz bin]# ./hadoop namenode -format //Execute the format command

2. Start and verify

[root@wjz bin]# ./start-all.sh
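To check that everything came up, the JDK's jps tool lists the running Java processes; in pseudo-distributed mode all five daemons should appear (a sketch; the process IDs are illustrative):

[root@wjz bin]# jps
2897 NameNode
2993 DataNode
3102 SecondaryNameNode
3187 JobTracker
3295 TaskTracker
3340 Jps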

Open a browser to verify, using the following URLs:

http://localhost:50030 (the MapReduce web UI)

http://localhost:50070 (the HDFS web UI)
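Beyond the web pages, a minimal command-line smoke test exercises HDFS end to end (a sketch; /test and the hosts file are just example paths):

[root@wjz bin]# ./hadoop fs -mkdir /test //create a directory in HDFS
[root@wjz bin]# ./hadoop fs -put /etc/hosts /test //upload a small local file
[root@wjz bin]# ./hadoop fs -ls /test //the uploaded file should be listed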

Done.
