2025-01-19 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/01 Report --
This article walks through how HDFS uses PID files, with worked examples. It should be a useful reference for anyone operating a Hadoop cluster.
Foreword:
In Linux, a PID identifies a running process and its directory under /proc. You can find a service's process number with `ps -ef | grep <service name>` and then `kill` that specific PID. But if a production shell script kills processes by grepping the service name, every process whose command line happens to match the pattern gets killed, including unrelated ones. Hadoop avoids this: it writes each daemon's process number to a PID file and kills exactly the process recorded there.
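As a sketch of this idea (the path and the "service" below are illustrative stand-ins, not Hadoop components), the following records a background process's PID in a file and later stops exactly that process by reading the file back:

```shell
#!/bin/sh
# Illustrative only: a stand-in "service" whose pid is recorded in a file,
# then stopped by reading that file back -- no grep on a name is needed,
# so no unrelated process can be matched and killed by accident.
PID_FILE=/tmp/demo-service.pid     # hypothetical path, not a Hadoop file

sleep 60 &                         # stand-in for a long-running daemon
echo $! > "$PID_FILE"              # record its process number ($! = last bg pid)

kill "$(cat "$PID_FILE")"          # kills exactly the recorded process
rm -f "$PID_FILE"                  # clean up, as a stop script would
```

This is the same pattern the Hadoop start/stop scripts examined below rely on.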
Hadoop stores its PID files in /tmp by default; each file's content is the corresponding daemon's process number.
Verify that each PID file's content is the process number:
[hadoop@hadoop001 hadoop]$ jps
7857 DataNode
7748 NameNode
8774 Jps
8061 SecondaryNameNode
[hadoop@hadoop001 tmp]$ pwd
/tmp
[hadoop@hadoop001 tmp]$ cat hadoop-hadoop-datanode.pid
7857
[hadoop@hadoop001 tmp]$ cat hadoop-hadoop-namenode.pid
7748
[hadoop@hadoop001 tmp]$ cat hadoop-hadoop-secondarynamenode.pid
8061
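A practical consequence of the /tmp default: many systems periodically clean /tmp, so these PID files can silently disappear, after which the stop scripts can no longer find the daemon to kill. A common remedy is to point HADOOP_PID_DIR (the variable shown in the hadoop-daemon.sh excerpt below) at a durable directory. The path here is an assumption; any directory owned by the hadoop user works:

```shell
# hadoop-env.sh (illustrative fragment): keep pid files out of /tmp so
# periodic /tmp cleanup cannot delete them and break the stop scripts.
export HADOOP_PID_DIR=/home/hadoop/pids   # assumed location, not a default
```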
Examining Hadoop's shutdown scripts shows that stop-dfs.sh does its work by calling hadoop-daemons.sh:
[hadoop@hadoop001 sbin]$ cat stop-dfs.sh
#
# namenodes
NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -namenodes)
echo "Stopping namenodes on [$NAMENODES]"
"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$NAMENODES" \
  --script "$bin/hdfs" stop namenode
#
# datanodes (using default slaves file)
if [ -n "$HADOOP_SECURE_DN_USER" ]; then
  echo \
    "Attempting to stop secure cluster, skipping datanodes. " \
    "Run stop-secure-dns.sh as root to complete shutdown."
else
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --script "$bin/hdfs" stop datanode
fi
#
Next, examine hadoop-daemon.sh, the per-node script that hadoop-daemons.sh ultimately invokes, and grep for its pid handling:
[hadoop@hadoop001 sbin]$ cat hadoop-daemon.sh | grep pid
#   HADOOP_PID_DIR   The pid files are stored. /tmp by default.
pid=$HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-$command.pid
    if [ -f $pid ]; then
      if kill -0 `cat $pid` > /dev/null 2>&1; then
        echo $command running as process `cat $pid`.  Stop it first.
    echo $! > $pid
    if [ -f $pid ]; then
      TARGET_PID=`cat $pid`
      rm -f $pid
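Putting the grepped fragments together, the stop path can be sketched as the following self-contained script. This is a simplification for illustration, not the verbatim hadoop-daemon.sh; the pid-file name only follows the article's naming pattern:

```shell
#!/bin/sh
# Simplified sketch of hadoop-daemon.sh's stop logic: read the pid file,
# confirm the process is alive with `kill -0`, terminate it, remove the file.
pid=/tmp/hadoop-hadoop-demo.pid    # follows the pid-file pattern shown above

sleep 60 &                         # stand-in daemon so the sketch is runnable
echo $! > "$pid"                   # what the start path does: record $! in $pid

if [ -f "$pid" ]; then
  TARGET_PID=$(cat "$pid")
  if kill -0 "$TARGET_PID" > /dev/null 2>&1; then   # signal 0: liveness check
    kill "$TARGET_PID"                              # send SIGTERM to that pid
  fi
  rm -f "$pid"                     # drop the pid file either way
fi
```

Because the script kills only the process number read from the file, it never touches other processes whose names merely resemble the daemon's.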
Thank you for reading. I hope this analysis of PID files in HDFS has been helpful.
© 2024 shulou.com SLNews company. All rights reserved.