This article explains what to do when a DataNode fails to start. The content is straightforward and easy to follow; work through it with the editor to learn how to diagnose and fix the problem.
Problem description: HDFS is started on the master node, and jps shows NameNode, SecondaryNameNode, and the other expected processes there, but jps on the slave nodes shows no DataNode process, and the web UI at http://localhost:50070 likewise lists no DataNodes.
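For reference, the check sequence looks roughly like the following. This is a minimal sketch assuming a standard Hadoop 2.x layout under $HADOOP_HOME; the process IDs shown are illustrative only:

# On the master node: start HDFS, then list the running Java processes
$ sbin/start-dfs.sh
$ jps
2913 NameNode
3120 SecondaryNameNode
3255 Jps

# On a slave node: a healthy slave would show a DataNode process here
$ jps
2011 Jps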
Analyzing the cause: at startup, both the NameNode and the DataNode write a log under $HADOOP_HOME/logs recording the startup process and any exceptions. Open the log generated on the DataNode node and you will find an error like the one shown below:
2015-05-30 19:04 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to hadoop.master/192.168.1.200:9000. Exiting.
java.io.IOException: Incompatible clusterIDs in /root/app/hadoop/tmp/dfs/data: namenode clusterID = CID-aee19086-0039-4a5c-a7de-cb5f4355262c; datanode clusterID = CID-8fab07df-65df-48a0-862c-b3489783618d
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:477)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:226)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:974)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:945)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
    at java.lang.Thread.run(Thread.java:745)
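To locate this message yourself, inspect the newest DataNode log on the affected slave. A sketch (the exact file name varies with the user name and hostname):

$ cd $HADOOP_HOME/logs
$ ls -t hadoop-*-datanode-*.log | head -n 1   # newest DataNode log
$ tail -n 50 hadoop-*-datanode-*.log          # startup failures appear near the end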
The key part of the error is:

Incompatible clusterIDs in /root/app/hadoop/tmp/dfs/data: namenode clusterID = CID-aee19086-0039-4a5c-a7de-cb5f4355262c; datanode clusterID = CID-8fab07df-65df-48a0-862c-b3489783618d

That is, the DataNode's clusterID is inconsistent with the NameNode's clusterID.
Solution:
1. Check the NameNode's clusterID in $HADOOP_HOME/tmp/dfs/name/current/VERSION and compare it with the clusterID in the DataNode's VERSION file to confirm that the two are inconsistent (see the command sketch after this list).
2. If they are inconsistent, delete the tmp directory on the DataNode node and restart the DataNode with sbin/hadoop-daemon.sh start datanode. If the configuration is correct, the DataNode will recreate the tmp directory, pick up the NameNode's clusterID, and be included in the cluster again. Two situations arise in practice:
2.1. A new slave node is added in production. You can copy the master node's configuration and environment to it, delete its tmp directory, and it will join the cluster.
2.2. A slave node dies in production. Remember never to run hadoop namenode -format in this case (some students have nearly wrecked their clusters this way); simply replace or restart the failed slave node and delete its tmp directory. Some students worry that deleting a slave's tmp directory will corrupt data or make files unrecoverable; it will not, because HDFS keeps multiple replicas of every block, so there is nothing to worry about.
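Putting steps 1 and 2 together, a minimal command sketch follows. It assumes hadoop.tmp.dir points at $HADOOP_HOME/tmp on every node, matching the paths in the error above; verify your own configuration before deleting anything:

# Step 1: compare the clusterIDs recorded on the two sides
# On the NameNode:
$ grep clusterID $HADOOP_HOME/tmp/dfs/name/current/VERSION
clusterID=CID-aee19086-0039-4a5c-a7de-cb5f4355262c
# On the DataNode:
$ grep clusterID $HADOOP_HOME/tmp/dfs/data/current/VERSION
clusterID=CID-8fab07df-65df-48a0-862c-b3489783618d

# Step 2: on the DataNode only, remove the stale storage and restart
$ rm -rf $HADOOP_HOME/tmp          # never run this on the NameNode
$ sbin/hadoop-daemon.sh start datanode
# The DataNode recreates tmp/dfs/data with the NameNode's clusterID and
# registers with the cluster; verify with jps or the web UI on port 50070.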
Thank you for reading. That covers "what to do if DataNode is not started". After studying this article, you should have a deeper understanding of why a DataNode fails to start, though the specifics still need to be verified in practice. The editor will continue to push more articles on related knowledge points; welcome to follow!