
Hadoop+HBase+ZooKeeper+Spark+p


Troubleshooting log:

Solution: this may be caused by a changed machine name. Edit hosts, add an entry mapping the hostname to its IP, and then try again.
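
For example, a sketch of the /etc/hosts line to add (the IP and hostname here are illustrative, borrowed from the logs later in this article):

192.168.0.60 master.kaiser.com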

Solution:

When the hadoop-common-2.2.0.jar package is introduced for secondary development, for example to read and write HDFS files, the following error is reported on the first run:

java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
    at FileCopyToHdfs.readFromHdfs(FileCopyToHdfs.java:65)
    at FileCopyToHdfs.main(FileCopyToHdfs.java:26)

This is because the core-default.xml inside that package does not configure the following property:

<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
  <description>The FileSystem for hdfs: uris.</description>
</property>

After adding it, the problem is solved. It is recommended to download the hadoop-2.2.0 source code, modify core-default.xml there, compile and package it, and then introduce the new jar package into the secondary-development project.
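
An alternative that avoids repackaging the jar (an addition of mine, not from the original article) is to make sure hadoop-hdfs, which contains DistributedFileSystem and its service registration, is also on the classpath, e.g. in Maven:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>2.2.0</version>
</dependency>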

http://www.cnblogs.com/tangtianfly/p/3491133.html

http://blog.csdn.net/u013281331/article/details/17992077

The property above (fs.hdfs.impl) specifies the implementation class used for hdfs: URIs.
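
As a minimal sketch (mine, not the article's), the property can also be set programmatically instead of patching core-default.xml; the class name and NameNode URI below are assumed placeholders:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Register the hdfs: scheme explicitly so a merged or stripped
        // core-default.xml cannot trigger "No FileSystem for scheme: hdfs".
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        // Assumed NameNode address; replace with your cluster's fs.defaultFS.
        FileSystem fs = FileSystem.get(URI.create("hdfs://master.kaiser.com:8020/"), conf);
        System.out.println("Root exists: " + fs.exists(new Path("/")));
        fs.close();
    }
}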

Solution:

The cluster time is out of sync. Sync it with ntpdate, then restart the RegionServer:

su root
ntpdate 133.100.11.8
cd /usr/local/hbase/bin/
./hbase-daemon.sh start regionserver
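
To keep the clocks from drifting apart again, a hedged sketch of a periodic resync via /etc/crontab (the schedule and the ntpdate path are assumptions; the NTP server is the one used above):

*/10 * * * * root /usr/sbin/ntpdate 133.100.11.8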

Solution:

Open the datanode and namenode directories configured in hdfs-site.xml, and open the VERSION file in each current folder. You can see that the clusterID entries are indeed inconsistent, as recorded in the log. Modify the clusterID in the datanode's VERSION file so that it matches the namenode's, then restart dfs (execute start-dfs.sh). Running the jps command afterwards shows that the datanode has started normally.

The cause of the problem: after dfs is formatted for the first time, hadoop is started and used; re-executing the format command (hdfs namenode -format) then regenerates the namenode's clusterID while the datanode's clusterID remains unchanged.
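
For illustration, the line to edit in the datanode's current/VERSION file (the value shown is the namenode clusterID from the log later in this article):

clusterID=CID-af6f15aa-efdd-479b-bf55-77270058e4f7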

Solution:

1. Check the firewall and selinux (see the check commands sketched below).

2. hosts should contain no entry that resolves the machine name to 127.0.0.1, such as "127.0.0.1 localhost".
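
A sketch of the corresponding checks on a RHEL/CentOS-style system (these commands are an assumption of mine, not from the original article):

service iptables status    # check whether the firewall is running
getenforce                 # "Enforcing" means selinux is on; "setenforce 0" disables it temporarily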

Solution:

Because this jar package ships in both the hbase and hadoop lib directories, you can remove one of the two copies.
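
If the duplicate is the SLF4J binding shown in the log near the end of this article, a hedged example of removing one copy (the path is taken from that log; which copy to keep is a judgment call):

rm /usr/local/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar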

Solution:

It turns out that when Hadoop was first started, it was still in safe mode.

[coder@h2 hadoop-0.20.2]$ bin/hadoop dfsadmin -safemode get
Safe mode is ON
[coder@h2 hadoop-0.20.2]$

You can wait for Hadoop to exit safe mode before executing the HBase command, or manually take Hadoop out of safe mode:

[coder@h2 hadoop-0.20.2]$ bin/hadoop dfsadmin -safemode leave
Safe mode is OFF
[coder@h2 hadoop-0.20.2]$

cd /usr/local/hadoop2/bin
./hadoop dfsadmin -safemode leave

Solution:

Make sure zookeeper.znode.parent in hbase-site.xml matches on the client and the master:

<property>
  <name>zookeeper.znode.parent</name>
  <value>/usr/local/hbase/hbase_tmp/hbase</value>
</property>

Solution:

./stop-all.sh
hadoop namenode -format
rm -rf /home/hadoop/tmp/dfs
./start-all.sh

Or, more thoroughly, remove all runtime directories and format again (warning: formatting the namenode erases HDFS metadata):

rm -rf /home/hadoop/tmp
rm -rf /home/hadoop/dfs_data
rm -rf /home/hadoop/pids
rm -rf /home/hadoop/dfs_name
cd /usr/local/hadoop2/bin/
./hadoop namenode -format

The error logs corresponding to the solutions above:

Closing ipc connection to master.kaiser.com/192.168.0.60:8020: Connection refused

Call From master.kaiser.com/192.168.0.60 to master.kaiser.com:8020 failed on connection exception: java.net.ConnectException: Connection refused

Log:

2014-09-03 13:50 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hadoop/dfs_name/in_use.lock acquired by nodename 7582@master.kaiser.com
2014-09-03 13:50:39,032 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
2014-09-03 13:50:39,141 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.

org.apache.hadoop.hbase.MasterNotRunningException: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.

2013-04-13 17:13:37,386 INFO org.apache.hadoop.hbase.util.FSUtils: Waiting for dfs to exit safe mode...
(this line repeats every few seconds until dfs leaves safe mode)

Executing an hbase program or shell command (./hbase shell) prints the following:

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/local/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/local/hadoop2-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: tmaster.kaiser.com/192.168.0.63:9000

2014-06-18 20:34:59,622 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000

java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/hdfs/data: namenode clusterID = CID-af6f15aa-efdd-479b-bf55-77270058e4f7; datanode clusterID = CID-736d1968-8fd1-4bc4-afef-5c72354c39ce
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:472)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
    at java.lang.Thread.run(Thread.java:744)

As you can see from the log, the reason is that the datanode's clusterID does not match the namenode's.

regionserver.HRegionServer: Failed deleting my ephemeral node

java.io.IOException: No FileSystem for scheme: hdfs

hadoop: org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Unresolved datanode registration: hostname cannot be resolved

hbase: Will not attempt to authenticate using SASL (unknown error)
