HDFS experiment (4) Cluster operation

2025-02-21 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

Cluster setup
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html

User's manual
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html

Administrators can pass daemon-specific JVM options to each daemon through the environment variables below:

Daemon                          Environment Variable
NameNode                        HDFS_NAMENODE_OPTS
DataNode                        HDFS_DATANODE_OPTS
Secondary NameNode              HDFS_SECONDARYNAMENODE_OPTS
ResourceManager                 YARN_RESOURCEMANAGER_OPTS
NodeManager                     YARN_NODEMANAGER_OPTS
WebAppProxy                     YARN_PROXYSERVER_OPTS
MapReduce Job History Server    MAPRED_HISTORYSERVER_OPTS
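These variables are typically exported from etc/hadoop/hadoop-env.sh. A minimal sketch, assuming illustrative heap sizes and a hypothetical GC log path (not recommendations):

```shell
# Sketch of etc/hadoop/hadoop-env.sh entries; values are examples only.

# Give the NameNode a larger heap and verbose GC logging:
export HDFS_NAMENODE_OPTS="-Xmx4g -Xloggc:/var/log/hadoop/nn-gc.log"

# Keep the DataNode heap modest:
export HDFS_DATANODE_OPTS="-Xmx1g"

# ResourceManager options are picked up by the yarn launcher:
export YARN_RESOURCEMANAGER_OPTS="-Xmx2g"
```

Each daemon reads only its own variable, so sizing can differ per role without affecting the others.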

To start a Hadoop cluster you will need to start both the HDFS and YARN clusters.

The first time you bring up HDFS, it must be formatted. Format a new distributed filesystem as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs namenode -format

Start the HDFS NameNode with the following command on the designated node as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon start namenode

Start a HDFS DataNode with the following command on each designated node as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon start datanode

If etc/hadoop/workers and ssh trusted access is configured (see Single Node Setup), all of the HDFS processes can be started with a utility script. As hdfs:

[hdfs]$ $HADOOP_HOME/sbin/start-dfs.sh
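The start-dfs.sh and stop-dfs.sh helpers read the worker list from etc/hadoop/workers, one hostname per line. A hypothetical three-worker file (the hostnames are placeholders to replace with your own):

```shell
# Create an example workers file; substitute your real worker hostnames.
mkdir -p etc/hadoop
cat > etc/hadoop/workers <<'EOF'
node01.example.com
node02.example.com
node03.example.com
EOF
```

The utility scripts then ssh to each listed host, which is why passwordless ssh trust must be configured first.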

Start YARN with the following command, run on the designated ResourceManager as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon start resourcemanager

Run a script to start a NodeManager on each designated host as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon start nodemanager

Start a standalone WebAppProxy server. Run on the WebAppProxy server as yarn. If multiple servers are used with load balancing it should be run on each of them:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon start proxyserver

If etc/hadoop/workers and ssh trusted access is configured (see Single Node Setup), all of the YARN processes can be started with a utility script. As yarn:

[yarn]$ $HADOOP_HOME/sbin/start-yarn.sh

Start the MapReduce JobHistory Server with the following command, run on the designated server as mapred:

[mapred]$ $HADOOP_HOME/bin/mapred --daemon start historyserver
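Once all the daemons above are started, `jps` on each node should list their JVM processes. A small helper sketch (hypothetical, assuming a jps-style listing of "pid ClassName" lines is passed in) that reports which expected daemons are missing:

```shell
# check_daemons: take a jps-style listing ("pid Name" per line) in $1
# and print any expected daemon that is absent from it.
check_daemons() {
  local listing="$1" missing=""
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager JobHistoryServer; do
    # -w matches whole words, so "NameNode" does not match "SecondaryNameNode".
    printf '%s\n' "$listing" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then
    echo "all daemons running"
  else
    echo "missing:$missing"
  fi
}
```

On a live node you would call something like `check_daemons "$(jps)"`; on a multi-node cluster each host runs only its own subset of daemons, so adjust the expected list per role.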

Hadoop Shutdown

Stop the NameNode with the following command, run on the designated NameNode as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon stop namenode

Run a script to stop a DataNode as hdfs:

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon stop datanode

If etc/hadoop/workers and ssh trusted access is configured (see Single Node Setup), all of the HDFS processes may be stopped with a utility script. As hdfs:

[hdfs]$ $HADOOP_HOME/sbin/stop-dfs.sh

Stop the ResourceManager with the following command, run on the designated ResourceManager as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon stop resourcemanager

Run a script to stop a NodeManager on a worker as yarn:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon stop nodemanager

If etc/hadoop/workers and ssh trusted access is configured (see Single Node Setup), all of the YARN processes can be stopped with a utility script. As yarn:

[yarn]$ $HADOOP_HOME/sbin/stop-yarn.sh

Stop the WebAppProxy server. Run on the WebAppProxy server as yarn. If multiple servers are used with load balancing it should be run on each of them:

[yarn]$ $HADOOP_HOME/bin/yarn --daemon stop proxyserver

Stop the MapReduce JobHistory Server with the following command, run on the designated server as mapred:

[mapred]$ $HADOOP_HOME/bin/mapred --daemon stop historyserver

Web Interfaces

Once the Hadoop cluster is up and running, check the web UIs of the components as described below:

Daemon                        Web Interface          Notes
NameNode                      http://nn_host:port/   Default HTTP port is 9870.
ResourceManager               http://rm_host:port/   Default HTTP port is 8088.
MapReduce JobHistory Server   http://jhs_host:port/  Default HTTP port is 19888.
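The default ports from the table can be collected in one place for quick reference or scripted probing. A minimal sketch (the `<host>` placeholder is hypothetical and must be substituted per cluster):

```shell
# Print the default web UI URL for each daemon; <host> is a placeholder.
for entry in "NameNode:9870" "ResourceManager:8088" "JobHistoryServer:19888"; do
  daemon=${entry%%:*}   # text before the colon
  port=${entry##*:}     # text after the colon
  echo "$daemon -> http://<host>:$port/"
done
```

On a live cluster you could probe reachability with, e.g., `curl -fsS -o /dev/null http://nn_host:9870/ && echo OK`.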
