
How to solve abnormal Hadoop DataNode startup



This article explains how to solve abnormal Hadoop DataNode startup. The method is simple, fast, and practical: it walks through the error and then the fix.

When starting the Hadoop cluster, only two of the three DataNodes came up successfully. The log of the DataNode that failed to start contained the following exception:

2012-09-07 23:58:40 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Datanode 192.168.100.11:50010 is attempting to report storage ID DS-1282452139-218.196.207.181-50010-1344220553439. Node 192.168.100.12:50010 is expected to serve this storage.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:4608)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:3460)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:1001)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    at org.apache.hadoop.ipc.Client.call(Client.java:1070)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy5.blockReport(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:958)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1458)
    at java.lang.Thread.run(Thread.java:722)

This exception is caused by a storage ID conflict between two DataNodes: both nodes report the same storage ID, so the NameNode rejects the block report from the second one. This typically happens when one DataNode's data directory was copied from another node (for example, when a virtual machine was cloned).
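For context, a DataNode records its storage ID in a VERSION file under its data directory; on Hadoop 1.x this is typically ${hadoop.tmp.dir}/dfs/data/current/VERSION. The sketch below is illustrative: the storageID is the one from the log above, while the other values will differ per cluster.

  #Illustrative VERSION file contents
  namespaceID=123456789
  storageID=DS-1282452139-218.196.207.181-50010-1344220553439
  cTime=0
  storageType=DATA_NODE
  layoutVersion=-32

If this file on two different DataNodes contains the same storageID, the NameNode will reject one of them, as in the log above.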

The solution is to delete the data directory on the machine where the exception occurred (the dfs/data directory under hadoop.tmp.dir) and then restart the DataNode; a sketch of the steps follows.
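A minimal sketch of the fix, assuming a Hadoop 1.x installation under /opt/hadoop with hadoop.tmp.dir set to /opt/hadoop/tmp (both paths are assumptions; substitute your own). Note that deleting the directory discards the block replicas stored on that node; HDFS re-replicates them from the other DataNodes.

  # run on the DataNode that fails to start (paths are assumptions)
  /opt/hadoop/bin/hadoop-daemon.sh stop datanode
  rm -rf /opt/hadoop/tmp/dfs/data
  /opt/hadoop/bin/hadoop-daemon.sh start datanode
  # from any node, confirm that all DataNodes have registered
  /opt/hadoop/bin/hadoop dfsadmin -report

On restart the DataNode recreates the data directory and registers with a fresh storage ID, so the conflict disappears.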

At this point you should have a better understanding of how to solve abnormal Hadoop DataNode startup. Try it out in practice, and follow us for more related content.
