Example Analysis of hdfs namenode -format in HDFS 2.7.0

2025-04-06 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 05/31 Report

This article walks through, in detail, what happens when you run hdfs namenode -format in HDFS 2.7.0. It is quite practical, so I am sharing it for reference; I hope you get something out of it.

When you execute hadoop namenode -format, what actually runs is:

/root/hadoop-2.7.0-bin/bin/hdfs namenode -format

Let's analyze the script.

-

bin=`which $0`
bin=`dirname ${bin}`
bin=`cd "$bin" > /dev/null; pwd`

Printing this out gives:

bin=/root/hadoop-2.7.0-bin/bin
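The three lines above are a common self-location idiom: a script resolves its own absolute directory so it can reference sibling files no matter where it was invoked from. A minimal standalone sketch (the /tmp path is just an assumed scratch location, not part of Hadoop):

```shell
#!/bin/sh
# Write a small script that locates its own directory, then run it.
demo=/tmp/locate_demo.sh            # assumed scratch path, not from Hadoop
cat > "$demo" <<'EOF'
#!/bin/sh
bin=`dirname "$0"`
bin=`cd "$bin" > /dev/null; pwd`
echo "$bin"
EOF
chmod +x "$demo"
"$demo"   # prints the directory containing the script, e.g. /tmp
```

The `cd …; pwd` step is what turns a possibly relative dirname into an absolute path.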

-

DEFAULT_LIBEXEC_DIR="$bin"/../libexec

Printing this out gives:

DEFAULT_LIBEXEC_DIR=/root/hadoop-2.7.0-bin/bin/../libexec

-

cygwin=false
case "$(uname)" in
CYGWIN*) cygwin=true;;
esac

On Linux this branch is never taken, so we can skip it.
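The platform check is just a case statement on the output of uname; a minimal sketch (assumes a non-Cygwin system such as Linux, where the CYGWIN* pattern never matches):

```shell
#!/bin/sh
# On Linux, `uname` prints "Linux", so the CYGWIN* branch does not fire
# and cygwin keeps its initial value of false.
cygwin=false
case "$(uname)" in
  CYGWIN*) cygwin=true;;
esac
echo "cygwin=$cygwin"
```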

-

Next, the script sources another script:

HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hdfs-config.sh

What is actually executed is

/root/hadoop-2.7.0-bin/libexec/hdfs-config.sh

This script in turn calls yet another script. Which one? Readers can explore that for themselves. :)
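The ${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR} expansion is what lets a user override the libexec location: if the variable is unset or empty, the default is used instead. A small sketch of this default-value expansion, reusing the path printed earlier:

```shell
#!/bin/sh
# ${VAR:-default} expands to $VAR if it is set and non-empty,
# otherwise to the default. Here we force the fallback case.
DEFAULT_LIBEXEC_DIR=/root/hadoop-2.7.0-bin/bin/../libexec
unset HADOOP_LIBEXEC_DIR
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
echo "$HADOOP_LIBEXEC_DIR"
```

Exporting HADOOP_LIBEXEC_DIR before running hdfs would therefore redirect which config script gets sourced.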

-

Back to the hdfs script.

function print_usage(){
  echo "Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND"
  echo "       where COMMAND is one of:"
  echo "  dfs                  run a filesystem command on the file systems supported in Hadoop."
  echo "  classpath            prints the classpath"
  echo "  namenode -format     format the DFS filesystem"
  echo "  secondarynamenode    run the DFS secondary namenode"
  echo "  namenode             run the DFS namenode"
  echo "  journalnode          run the DFS journalnode"
  echo "  zkfc                 run the ZK Failover Controller daemon"
  echo "  datanode             run a DFS datanode"
  echo "  dfsadmin             run a DFS admin client"
  echo "  haadmin              run a DFS HA admin client"
  echo "  fsck                 run a DFS filesystem checking utility"
  echo "  balancer             run a cluster balancing utility"
  echo "  jmxget               get JMX exported values from NameNode or DataNode."
  echo "  fetchdt              fetch a delegation token from the NameNode"
  echo "  getconf              get config values from configuration"
  echo "  groups               get the groups which users belong to"
  echo "  snapshotDiff         diff two snapshots of a directory or diff the"
  echo "                       current directory contents with a snapshot"
  echo "  lsSnapshottableDir   list all snapshottable dirs owned by the current user"
  echo "                       Use -help to see options"
  echo "  portmap              run a portmap service"
  echo "  nfs3                 run an NFS version 3 gateway"
  echo "  cacheadmin           configure the HDFS cache"
  echo "  crypto               configure HDFS encryption zones"
  echo "  storagepolicies      list/get/set block storage policies"
  echo "  version              print the version"
  echo ""
  echo "Most commands print help when invoked w/o parameters."
  # There are also debug commands, but they don't show up in this listing.
}

if [ $# = 0 ]; then
  print_usage
  exit
fi

This part is simple: it is just a usage function that describes what each command does, and it is printed when hdfs is invoked with no arguments.

-

Then comes the most critical part: selecting the command to run.

if [ "$COMMAND" = "namenode" ] ; then
  CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode'
  HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"

At this point:

HADOOP_OPTS=-Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/root/hadoop-2.7.0-bin/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/root/hadoop-2.7.0-bin -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/root/hadoop-2.7.0-bin/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
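HADOOP_OPTS reaches that state by plain string appending, one fragment at a time as each config script adds its own flags. A hypothetical sketch with shortened, illustrative flag values (not the full list above):

```shell
#!/bin/sh
# Each config script appends its flags to the accumulated option string;
# the command-specific opts (e.g. HADOOP_NAMENODE_OPTS) come last, so
# they can override earlier defaults when the JVM parses duplicates.
HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS"
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"
echo "$HADOOP_OPTS"
```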

-

The rest is Cygwin handling; on Linux we can ignore it.

-

The remaining assignment statements need no further comment.

-

The next if-else statement actually takes the last branch:

else
  # run it
  exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
fi
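Note the use of exec rather than running the command directly: exec replaces the shell process with the JVM, so no extra shell process lingers as the parent. A small sketch of that behavior (the /tmp path is an assumed scratch location, not part of Hadoop):

```shell
#!/bin/sh
# The child script execs a command; the line after exec never runs,
# because exec replaces the shell process entirely.
child=/tmp/exec_demo.sh             # assumed scratch path, not from Hadoop
cat > "$child" <<'EOF'
#!/bin/sh
exec echo "replaced"
echo "never reached"
EOF
chmod +x "$child"
out=$("$child")
echo "$out"
```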

Now the true face of the command is revealed. The statement printed and executed is:

/usr/java/jdk1.8.0_45/bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/root/hadoop-2.7.0-bin/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/root/hadoop-2.7.0-bin -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/root/hadoop-2.7.0-bin/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.hdfs.server.namenode.NameNode -format

This concludes "Example Analysis of hdfs namenode -format in HDFS 2.7.0". I hope the content above is helpful and that you learned something from it. If you found the article useful, please share it so more people can see it.
