Hadoop SecondaryNameNode exception: Inconsistent checkpoint fields
Symptoms: HDFS cannot be accessed; the NameNode process is pegged at 100% CPU with excessive memory usage, yet its log shows no errors.
The SecondaryNameNode log, however, shows the following error:
java.io.IOException: Inconsistent checkpoint fields.
LV = -57 namespaceID = 371613059 cTime = 0 ; clusterId = CID-b8a5f273-515a-434c-87c0-4446d4794c85 ; blockpoolId = BP-1082677108-127.0.0.1-1433842542163.
Expecting respectively: -57; 1687946377; 0; CID-603ff285-de5a-41a0-85e8-f033ea1916fc; BP-2591078-127.0.0.1-1433770362761.
    at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:134)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:531)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:395)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:361)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:411)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:357)
    at java.lang.Thread.run(Thread.java:662)
This exception can have several causes. One of them is that the edit logs in the SecondaryNameNode's data directory are inconsistent with the NameNode's current data version, i.e. they belong to an older namespace. Note the mismatched namespaceID, clusterId, and blockpoolId pairs in the message above: the SecondaryNameNode's stored values no longer match what the NameNode reports.
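One quick way to confirm this cause is to compare the VERSION files on the two sides. This is a sketch; the NameNode metadata path below is an assumption following the usual ${hadoop.tmp.dir}/dfs/name layout, so adjust both paths to your cluster:

# Storage metadata of the NameNode vs. the SecondaryNameNode's checkpoint.
# Paths are assumptions based on this cluster's layout; adjust as needed.
cat /opt/hadoop-2.5.1/dfs/tmp/dfs/name/current/VERSION
cat /opt/hadoop-2.5.1/dfs/tmp/dfs/namesecondary/current/VERSION
# Differing namespaceID, clusterID, or blockpoolID values mean the
# SecondaryNameNode is still holding a checkpoint of an old namespace.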
Solution:
Manually delete the files in the SecondaryNameNode's checkpoint directory and restart Hadoop.
In this case, the edit logs under the SecondaryNameNode were in fact from a long time ago:
/opt/hadoop-2.5.1/dfs/tmp/dfs/namesecondary/current
[root@hbase current]# ll
total 116
-rw-r--r-- 1 root root   42 Jun  8 2015 edits_0000000000000000001-0000000000000000002
-rw-r--r-- 1 root root 8991 Jun  8 2015 edits_0000000000000000003-0000000000000000089
-rw-r--r-- 1 root root 4370 Jun  8 2015 edits_0000000000000000090-0000000000000000123
-rw-r--r-- 1 root root 3817 Jun  9 2015 edits_0000000000000000124-0000000000000000152
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000153-0000000000000000172
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000173-0000000000000000192
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000193-0000000000000000212
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000213-0000000000000000232
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000233-0000000000000000252
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000253-0000000000000000272
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000273-0000000000000000292
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000293-0000000000000000312
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000313-0000000000000000332
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000333-0000000000000000352
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000353-0000000000000000372
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000373-0000000000000000392
-rw-r--r-- 1 root root 2466 Jun  9 2015 edits_0000000000000000393-0000000000000000412
-rw-r--r-- 1 root root 6732 Jun  9 2015 edits_0000000000000000413-0000000000000000468
-rw-r--r-- 1 root root 4819 Jun  9 2015 edits_0000000000000000469-0000000000000000504
-rw-r--r-- 1 root root 2839 Jun  9 2015 fsimage_0000000000000000468
-rw-r--r-- 1 root root   62 Jun  9 2015 fsimage_0000000000000000468.md5
-rw-r--r-- 1 root root 2547 Jun  9 2015 fsimage_0000000000000000504
-rw-r--r-- 1 root root   62 Jun  9 2015 fsimage_0000000000000000504.md5
-rw-r--r-- 1 root root  199 Jun  9 2015 VERSION
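A minimal sketch of that fix, assuming the Hadoop 2.x layout used in this article (daemon scripts under sbin/, checkpoint data under the path shown above). The SecondaryNameNode rebuilds its checkpoint from the NameNode on its next cycle, so these files are safe to remove:

# On the SecondaryNameNode host (here the NameNode and SecondaryNameNode share a box).
sbin/hadoop-daemon.sh stop secondarynamenode
# Remove the stale checkpoint data shown in the listing above.
rm -rf /opt/hadoop-2.5.1/dfs/tmp/dfs/namesecondary/*
# Restart; the next checkpoint repopulates this directory from the NameNode.
sbin/hadoop-daemon.sh start secondarynamenode

Alternatively, restart the whole cluster as the article suggests; the key step is clearing the namesecondary directory while the daemon is down.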
To locate these files at all, hadoop.tmp.dir should be configured; if it is not, Hadoop falls back to a default under /tmp and you will not find the edit log files where you expect them. Set it in core-site.xml (hdfs-site.xml also works for the HDFS daemons).
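As a sketch, a core-site.xml entry that pins this path; the value below matches this article's layout and is an assumption, not a required location:

<!-- core-site.xml: base directory for Hadoop's local/temporary data.    -->
<!-- Value is an assumption matching the namesecondary path shown above. -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-2.5.1/dfs/tmp</value>
</property>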
The hadoop.tmp.dir parameter specifies the default root for HDFS's local data, and it is best to set it explicitly. If a newly added node, or a DataNode that fails to start for no apparent reason, is giving you trouble, deleting the tmp directory under this path often resolves it. Note, however, that if you delete this directory on the NameNode machine, you must re-run the NameNode format command.
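For reference, that format command in Hadoop 2.x. It is destructive: it creates a fresh namespace with a new namespaceID and clusterID, which is exactly the kind of mismatch this article's error message reports, so run it only when you truly mean to start the filesystem over:

# On the NameNode host only. This erases all HDFS metadata.
bin/hdfs namenode -format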