Shulou(Shulou.com)05/31 Report--
This article mainly presents "Example Analysis of Pitfalls When Building an HBase Environment". It is quite detailed and has some reference value; interested readers should find it worth reading through to the end.
I recently set up a fully distributed HBase environment.
Following the official documentation step by step, everything went smoothly, and Hadoop plus HBase were up and running before long.
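For reference, the rough shape of that bring-up is sketched below. The $HADOOP_HOME and $HBASE_HOME locations and the slave1 hostname are placeholders for your own layout; the start scripts themselves are the stock ones shipped with Hadoop 2.x and HBase.

# On the master: start HDFS first, then HBase on top of it
$HADOOP_HOME/sbin/start-dfs.sh
$HBASE_HOME/bin/start-hbase.sh

# Check which daemons actually came up on each node
jps              # master: expect NameNode and HMaster (plus HQuorumPeer if HBase manages ZooKeeper)
ssh slave1 jps   # slaves: expect DataNode and HRegionServer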
However, whenever I checked the cluster from the HBase shell, it kept throwing errors; I no longer have a copy of the exact error message.
My search order was basically this:
HBase, then HDFS, then the master node, then the slave nodes.
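In practice that search order just means walking through the daemon logs on each node. A few likely places to look are sketched below; the exact file names depend on the user and hostname the daemons run as (the hadoop-*/hbase-* patterns are the defaults), and the paths assume the logs live under the install directories.

# HBase side (on the master)
ls $HBASE_HOME/logs/
tail -n 100 $HBASE_HOME/logs/hbase-*-master-*.log

# HDFS side
tail -n 100 $HADOOP_HOME/logs/hadoop-*-namenode-*.log   # on the master
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log   # on each slave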
Finally, the following passage turned up in a slave node's datanode log.
2015-09-02 01:17:15 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/data1: namenode clusterID = CID-c7970b3b-e127-4054-…-7736183904d2; datanode clusterID = CID-4b42cd9e-35ec-4194-b516-d4de4055c35b
2015-09-02 01:17:15 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/data2/in_use.lock acquired by nodename 6889@ubuntu
2015-09-02 01:17:15 WARN org.apache.hadoop.hdfs.server.common.Storage: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop/data2 is in an inconsistent state: cluster Id is incompatible with others.
2015-09-02 01:17:37 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:477)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1387)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1352)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:316)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:228)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:852)
    at java.lang.Thread.run(Thread.java:745)
Unexpectedly, the cause was that the namenode had been reformatted, so the clusterID stored by the datanode no longer matched the namenode's new clusterID. Manually making the clusterIDs consistent again fixed it. Done!
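As a sketch of that manual fix: the /usr/local/hadoop/data1 and data2 paths come from the log above, while the namenode directory /usr/local/hadoop/name is only an assumption, so substitute whatever dfs.namenode.name.dir points to in your hdfs-site.xml. Either edit the datanode's VERSION file so its clusterID matches the namenode's, or, if the datanode holds no data worth keeping, clear its data directories and let them be re-created.

# On the namenode: read the new clusterID from the VERSION file
cat /usr/local/hadoop/name/current/VERSION   # assumed dfs.namenode.name.dir; note the clusterID= line

# On the affected slave: make the datanode's clusterID match the namenode's
vi /usr/local/hadoop/data1/current/VERSION   # set clusterID= to the namenode's value
vi /usr/local/hadoop/data2/current/VERSION

# Alternative (destroys the local block data): clear the data dirs instead
# rm -rf /usr/local/hadoop/data1/* /usr/local/hadoop/data2/*

# Then restart the datanode
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode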
That is all there is to "Example Analysis of Pitfalls When Building an HBase Environment". Thank you for reading! I hope the content shared here helps you; for more related knowledge, you are welcome to follow the industry information channel.