How to Solve the "no namenode to stop" Exception When Restarting a Hadoop Cluster
This article explains how to resolve the "no namenode to stop" exception that can appear when restarting a Hadoop cluster. I hope you find it a useful reference.
The Hadoop cluster's configuration files had been modified and the cluster needed to be restarted, but stopping it produced the following errors:
[hadoop@master ~]# stop-dfs.sh
Stopping namenodes on [master]
master1: no namenode to stop
master2: no namenode to stop
slave2: no datanode to stop
slave1: no datanode to stop
The cause: when stopping the cluster, Hadoop's stop scripts locate the journalnode and datanode daemons through their PID files. By default those PID files are saved under /tmp, and Linux periodically purges old files from that directory (typically after 7 to 30 days, e.g. via tmpwatch or systemd-tmpfiles).
So once /tmp cleanup has deleted the hadoop-hadoop-journalnode.pid and hadoop-hadoop-datanode.pid files, the stop script naturally cannot find the two processes on the datanodes.
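For context, the stop logic in Hadoop 2.x's sbin/hadoop-daemon.sh looks roughly like this (a paraphrased sketch; exact lines vary by version), and it is where the error message comes from:

# Sketch of sbin/hadoop-daemon.sh (Hadoop 2.x, paraphrased)
if [ "$HADOOP_PID_DIR" = "" ]; then
  HADOOP_PID_DIR=/tmp                    # default: PID files live under /tmp
fi
pid=$HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-$command.pid

# ... later, in the (stop) branch:
if [ -f $pid ]; then
  kill `cat $pid`                        # normal case: PID file still exists
else
  echo no $command to stop               # PID file purged from /tmp -> this error
fi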
Setting export HADOOP_PID_DIR in the configuration file hadoop-env.sh solves this problem; alternatively, you can modify hadoop-daemon.sh, which sources hadoop-env.sh. Change HADOOP_PID_DIR to "/var/hadoop_pid", and remember to manually create the hadoop_pid directory under "/var" on every node and assign ownership to the hadoop user.
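A minimal sketch of the fix (the /var/hadoop_pid path and the hadoop user come from this article's setup; adjust both for your environment):

# In hadoop-env.sh (sourced by hadoop-daemon.sh), point PID files
# at a directory that /tmp cleanup will not touch:
export HADOOP_PID_DIR=/var/hadoop_pid

# On every node, create the directory and hand it to the hadoop user:
sudo mkdir -p /var/hadoop_pid
sudo chown hadoop:hadoop /var/hadoop_pid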
[hadoop@slave3 ~]$ ls /var/hadoop_pid/
hadoop-hadoop-datanode.pid  hadoop-hadoop-journalnode.pid
Then manually kill the stray DataNode process on each affected slave (kill -9 <pid>), rerun start-dfs.sh, and neither "no datanode to stop" nor "no namenode to stop" appears anymore; the problem is solved.
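If you need to find those PIDs by hand, jps (shipped with the JDK) lists the running Java daemons; a sketch, with placeholder PIDs:

[hadoop@slave3 ~]$ jps                    # look for the DataNode / JournalNode entries
12345 DataNode
23456 JournalNode
[hadoop@slave3 ~]$ kill -9 12345 23456    # substitute the real PIDs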
[hadoop@master1 ~]$ start-dfs.sh
16/04/13 17:20:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master1 master2]
master1: starting namenode, logging to /data/usr/hadoop/logs/hadoop-hadoop-namenode-master1.out
master2: starting namenode, logging to /data/usr/hadoop/logs/hadoop-hadoop-namenode-master2.out
slave4: starting datanode, logging to /data/usr/hadoop/logs/hadoop-hadoop-datanode-slave4.out
slave3: starting datanode, logging to /data/usr/hadoop/logs/hadoop-hadoop-datanode-slave3.out
slave2: starting datanode, logging to /data/usr/hadoop/logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /data/usr/hadoop/logs/hadoop-hadoop-datanode-slave1.out
Starting journal nodes [master1 master2 slave1 slave2 slave3]
slave3: starting journalnode, logging to /data/usr/hadoop/logs/hadoop-hadoop-journalnode-slave3.out
master1: starting journalnode, logging to /data/usr/hadoop/logs/hadoop-hadoop-journalnode-master1.out
slave1: starting journalnode, logging to /data/usr/hadoop/logs/hadoop-hadoop-journalnode-slave1.out
master2: starting journalnode, logging to /data/usr/hadoop/logs/hadoop-hadoop-journalnode-master2.out
slave2: starting journalnode, logging to /data/usr/hadoop/logs/hadoop-hadoop-journalnode-slave2.out
16/04/13 17:20:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [master1 master2]
master1: starting zkfc, logging to /data/usr/hadoop/logs/hadoop-hadoop-zkfc-master1.out
master2: starting zkfc, logging to /data/usr/hadoop/logs/hadoop-hadoop-zkfc-master2.out

Thank you for reading this article carefully. I hope "how to solve the no namenode to stop exception when restarting the Hadoop cluster" has been helpful to you.