This article explains how to solve the "java.io.IOException: Bad connect ack with firstBadLink" error in Hadoop. The fix is quite practical, so it is shared here for reference; follow along to take a look. First, create a large test file and upload it to HDFS, which is where the error appears:
[hadoop@master ~]$ touch bigfile.tar
[hadoop@master ~]$ cat hadoop-2.5.2.tar.gz >> bigfile.tar
[hadoop@master ~]$ cat hadoop-2.5.2.tar.gz >> bigfile.tar
[hadoop@master ~]$ hadoop fs -put bigfile.tar /    # upload the file to the HDFS root directory (/)
15/12/03 00:57:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/03 00:57:26 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as 192.168.209.102:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1377)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
15/12/03 00:57:26 INFO hdfs.DFSClient: Abandoning BP-2062059271-192.168.209.100-1448384244888:blk_1073741864_1040
15/12/03 00:57:26 INFO hdfs.DFSClient: Excluding datanode 192.168.209.102:50010
15/12/03 00:57:26 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as 192.168.209.101:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1377)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
15/12/03 00:57:26 INFO hdfs.DFSClient: Abandoning BP-2062059271-192.168.209.100-1448384244888:blk_1073741865_1041
15/12/03 00:57:26 INFO hdfs.DFSClient: Excluding datanode 192.168.209.101:50010
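The client fails against 192.168.209.102:50010, excludes that datanode, then fails against 192.168.209.101:50010 in exactly the same way. Port 50010 is the datanode data-transfer port, so this pattern points at a connectivity problem rather than a data problem. A quick way to narrow it down is to probe the port from the client (the IP and port below are taken from the log above; if a firewall is the culprit, the connection is typically rejected or hangs):

[hadoop@master ~]$ telnet 192.168.209.102 50010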
Reason:
1. The firewall was suddenly turned on on one of the datanodes, so the client could no longer connect to it (a quick check for this is shown after the list).
2. A datanode process was force-killed (reportedly this can also trigger the error).
3. If a machine simply goes down outright, that alone should not break the upload; handling that case is exactly what the data-redundancy design is for (speculation).
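To confirm reason 1, log on to the suspect node and inspect the firewall directly (a minimal check as root; "slave2" is a stand-in hostname for 192.168.209.102 in this example):

[root@slave2 ~]# service iptables status     # a running firewall with REJECT rules confirms it
[root@slave2 ~]# chkconfig --list iptables   # shows whether iptables is set to start on boot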
Solution:
As root, turn off the firewall: service iptables stop
If the error came back because a node rebooted and iptables started again with it, run chkconfig iptables off so the firewall stays disabled across reboots.
Without a reboot, service iptables stop is enough to turn it off for the current session.
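On a cluster with several datanodes, both commands can be pushed to every node in one pass (a sketch assuming hostnames slave1 and slave2 and passwordless root ssh; substitute your own node list):

for node in slave1 slave2; do
    # stop the firewall now and keep it disabled after reboots
    ssh root@$node 'service iptables stop && chkconfig iptables off'
done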
Thank you for reading! This concludes the article on how to solve the java.io.IOException: Bad connect ack with firstBadLink problem in Hadoop. I hope the content above helps you learn something new; if you found the article useful, feel free to share it so more people can see it!