
How to Configure a Hadoop 0.20.2 Cluster


This article explains how to configure a Hadoop 0.20.2 cluster. The walkthrough is detailed and should serve as a useful reference; interested readers are encouraged to follow along.
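Before starting the cluster, the configuration files under conf/ are assumed to already be in place on every node. As a minimal sketch of what they might contain for Hadoop 0.20.2 (the hostname master and the ports 9000/9001 are assumptions for illustration, not values taken from this article):

conf/core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>  <!-- assumed NameNode host and port -->
  </property>
</configuration>

conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>  <!-- assumed JobTracker host and port -->
  </property>
</configuration>

conf/slaves lists one slave hostname per line, and conf/masters holds the host that runs the SecondaryNameNode.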

I. Starting Hadoop

Format a new distributed file system first

$ cd hadoop-0.20.2

$ bin/hadoop namenode -format

View the output to ensure that the distributed file system is formatted successfully.

In a Hadoop 0.20.2 cluster, start Hadoop on the master node; this also starts the Hadoop daemons on all slave nodes.

$ bin/start-all.sh
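A quick way to check that the daemons actually came up is the jps tool shipped with the JDK (the slave hostname below is an assumption for illustration):

$ jps             # on the master, NameNode, SecondaryNameNode and JobTracker should typically be listed
$ ssh slave1 jps  # on a slave node, DataNode and TaskTracker should be listed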

Stop Hadoop on the master node; this also stops the Hadoop daemons on all slave nodes.

$ bin/stop-all.sh

Hadoop daemons write logs to the ${HADOOP_LOG_DIR} directory (${HADOOP_HOME}/logs by default).
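If a daemon fails to start, its log file is usually the first place to look. For example (the exact file names depend on the local user name and hostname):

$ ls ${HADOOP_HOME}/logs
$ tail -f ${HADOOP_HOME}/logs/hadoop-*-namenode-*.log   # follow the NameNode log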

Browse the network interfaces for NameNode and JobTracker, whose addresses default to:

NameNode - http://master:50070/

JobTracker - http://master:50030/
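A quick reachability check from the shell, assuming curl is installed and the hostname master resolves from where you run it:

$ curl -sI http://master:50070/ | head -n 1   # the NameNode web UI should respond
$ curl -sI http://master:50030/ | head -n 1   # the JobTracker web UI should respond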

Copy the input files to the distributed file system: $ bin/hadoop fs -put conf input

Run the example program provided with the distribution: $ bin/hadoop jar hadoop-0.20.2-examples.jar grep input output 'dfs[a-z.]+'

View the output file:

Copy the output files from the distributed file system to the local file system and view them:

$ bin/hadoop fs -get output output

$ cat output/*

or

View the output files directly on the distributed file system: $ bin/hadoop fs -cat output/*

After a successful startup, you can familiarize yourself with the hadoop dfs commands, for example (a short example session follows this list):

hadoop dfs -ls lists the contents of the /usr/root directory; if no path is given, it defaults to the current user's home directory.

hadoop dfs -rmr xxxxxx recursively deletes the directory xxxxxx;

hadoop dfsadmin -report gives a global view of all DataNodes;

The parameters given after hadoop job operate on currently running jobs, for example -list and -kill;

hadoop balancer balances the data load across DataNodes.
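A short illustrative session with these commands; the directory name testdir is a hypothetical example, not something created earlier in this article:

$ bin/hadoop dfs -ls              # list the current user's HDFS home directory
$ bin/hadoop dfs -mkdir testdir   # create a directory (hypothetical name)
$ bin/hadoop dfs -rmr testdir     # recursively delete it again
$ bin/hadoop dfsadmin -report     # capacity and status of every DataNode
$ bin/hadoop job -list            # currently running jobs
$ bin/hadoop balancer             # rebalance blocks across the DataNodes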

That is all the content of "How to Configure a Hadoop 0.20.2 Cluster". Thank you for reading! I hope the article is helpful; for more related knowledge, please follow the industry information channel.
