
hadoop-001: what to do when the datanode fails to start in Hadoop 2.5.2


This article shows what to do when the datanode fails to start after launching Hadoop 2.5.2. The content is straightforward and clearly laid out; I hope it resolves your doubts. Follow along below to study the problem and its fix.

Open the log file

localhost: starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-localhost.out
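Note that the startup message only points at the .out file, which captures stdout; with the default Hadoop log layout the detailed messages go to the matching .log file next to it, so that is the file worth reading (the exact filename here is assumed from the path above):

tail -n 50 /opt/hadoop/logs/hadoop-root-datanode-localhost.log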

In that log we find the following error:

2016-01-17 11:43:15 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting

2016-01-17 11:43:14 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -57

2016-01-17 11:43:15 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/hadoop/data/hadoop-root/dfs/data/in_use.lock acquired by nodename 4816@localhost

2016-01-17 11:43:15 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.

java.io.IOException: Incompatible clusterIDs in /opt/hadoop/data/hadoop-root/dfs/data: namenode clusterID = CID-6f1b2a1b-2b93-4282-ac26-9aca48ec99ea; datanode clusterID = CID-5a313ef8-c96f-47bf-a7f9-3945ff5d9835

    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:477)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:226)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:975)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:946)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:812)
    at java.lang.Thread.run(Thread.java:662)

2016-01-17 11:43:15 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000

2016-01-17 11:43:56 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)

2016-01-17 11:43:56,734 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode

2016-01-17 11:43:56,738 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0

2016-01-17 11:43:56 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

Judging from the log, the IOException makes the problem clear:

The datanode's clusterID does not match the namenode's clusterID.

Solution:

Go to the path shown in the log, /opt/hadoop/data/hadoop-root/dfs/

There you can see the data and name folders.
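To confirm, a quick listing (path taken from the log above, expected contents from this article):

ls /opt/hadoop/data/hadoop-root/dfs/
# expected: data  name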

As the hdfs-site.xml defaults show, the data and name directories are both derived from the hadoop.tmp.dir parameter in core-site.xml:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/data</value>
</property>

The hadoop.tmp.dir configuration on this system is as follows:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop/data/hadoop-${user.name}</value>
</property>

Copy the clusterID from the VERSION file under name/current into the VERSION file under data/current, overwriting the original clusterID.

This keeps the two in line; a sketch of the edit follows.
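A minimal sketch of that fix, assuming the directory layout above; the article says to copy the value by hand, and the sed one-liner is just a convenience, with the CID value being the namenode clusterID from the log:

grep clusterID /opt/hadoop/data/hadoop-root/dfs/name/current/VERSION
grep clusterID /opt/hadoop/data/hadoop-root/dfs/data/current/VERSION
# overwrite the datanode's clusterID with the namenode's value from the log
sed -i 's/^clusterID=.*/clusterID=CID-6f1b2a1b-2b93-4282-ac26-9aca48ec99ea/' \
    /opt/hadoop/data/hadoop-root/dfs/data/current/VERSION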

Then restart HDFS (a sketch below), execute jps after startup, and check the processes.
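Assuming Hadoop is installed under /opt/hadoop, as the log path suggests, the restart would look like:

/opt/hadoop/sbin/stop-dfs.sh
/opt/hadoop/sbin/start-dfs.sh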

[root@localhost hadoop]# jps

9518 SecondaryNameNode

9291 DataNode

9912 Jps

9138 NameNode

7626 ResourceManager

7797 NodeManager

The cause of the problem: after DFS was formatted the first time, Hadoop was started and used; the format command (hdfs namenode -format) was then executed again, which regenerated the namenode's clusterID while the datanode's clusterID stayed unchanged.
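As a side note, if a re-format is ever intentional, Hadoop 2.x lets you pass the existing cluster ID to the format command so the datanodes stay compatible; a sketch, reusing this system's ID from the log above:

/opt/hadoop/bin/hdfs namenode -format -clusterid CID-6f1b2a1b-2b93-4282-ac26-9aca48ec99ea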

The above is the full content of "hadoop-001: what to do when the datanode fails to start in Hadoop 2.5.2". Thank you for reading! I hope it helped clear up your doubts.
