Hadoop fault handling

2025-04-01 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/03 Report--

Fault description:

Images and files were uploaded and downloaded through nginx, with Hadoop (HDFS) as the backing file store. During a simulation last night, the upload service could not connect to Hadoop while uploading images, and several problems surfaced: even after the connection errors were handled, images still could not be uploaded and various errors were reported.

How to deal with it:

The rough fault information comes from the logs. Follow the whole upload path: starting from the front-end nginx proxy, rule out errors in the tomcat upload instances it forwards to.

Earlier, the HDFS port in the Hadoop configuration had been changed to 9100, but the address in the application.properties file of the tomcat upload instance had not been updated to match. That mismatch was fixed before this round of troubleshooting, yet the problem persisted, so the investigation continued as follows.
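A quick way to catch this class of mismatch is to compare the port in the application's HDFS URL against the port Hadoop actually listens on. The helper below is a sketch; the function name, paths, and property names are assumptions (a typical Hadoop 2.x layout), not taken from this article:

```shell
# Extract the port from an hdfs:// URL so the app config and core-site.xml
# can be compared mechanically. (Function name is illustrative.)
hdfs_port() {
    printf '%s\n' "$1" | sed -n 's#.*hdfs://[^:/]*:\([0-9][0-9]*\).*#\1#p'
}

# Port the cluster listens on: fs.defaultFS in core-site.xml, e.g.
#   grep -A1 'fs.defaultFS' "$HADOOP_HOME/etc/hadoop/core-site.xml"
# Port the upload app uses: the hdfs:// line in application.properties.
hdfs_port "hdfs://master:9100"     # prints 9100
```

If the two numbers differ, fix application.properties before looking anywhere else.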

Even with the nginx configuration confirmed normal, an error was still reported:

Actual log error message:

Opening the nginx error log first shows a timeout connecting to tomcat:

[root@web01 server]# tail -100f /webserver/nginx/nginx-upload/logs/error.log

2016/07/10 11:26:51 [error] 18444#0: *17007 upstream timed out (Connection timed out) while reading response header from upstream, client: 123.125.138.174, server: online.898china.com, request: "POST /innospace-file-server/file/upload.json?x=100&y=100&w=200&h=200&userId=87 HTTP/1.1", upstream:

So nginx timed out connecting to tomcat, which points at the tomcat instance handling the upload. Go straight to the innospace-file-server log for the error message:

[root@web01 server]# tail -1000f /webserver/tomcat/innospace-file-server/logs/catalina.out

[ERROR] 2016-07-10 11:11:13,499 com.inno.innospace.utils.HDFSUtil - failed to upload file
java.io.IOException: Bad connect ack with firstBadLink as 10.24.198.117:50010
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1460) ~[hadoop-hdfs-2.6.3.jar:?]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361) ~[hadoop-hdfs-2.6.3.jar:?]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588) ~[hadoop-hdfs-2.6.3.jar:?]
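The trace shows the HDFS client failing to open a block-write stream to the DataNode at 10.24.198.117:50010. Before changing anything, a reachability probe from the tomcat host can confirm whether that port is actually blocked. This is a generic sketch using bash's /dev/tcp redirection (the function name is illustrative; `timeout` is GNU coreutils):

```shell
# Probe a TCP port without nc/telnet, using bash's /dev/tcp redirection.
port_open() {
    # $1 = host, $2 = port; prints "open" or "closed"
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo open
    else
        echo closed
    fi
}

port_open 10.24.198.117 50010   # "closed" would mean the DataNode port is blocked
```

If the probe says closed while the DataNode process is running, a firewall in between is the likely culprit.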

Not knowing where the problem was at this point, I searched Baidu:

It turns out to be firewall-related: HDFS stores the actual data blocks on the slave (DataNode) nodes, and a client writing through the Hadoop master must also connect directly to the DataNodes on port 50010. The fix is to open port 50010 in the firewall.

iptables -A INPUT -p tcp --dport 50010 -j ACCEPT

After opening it, the problem was solved.
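Two caveats worth noting: a rule added with iptables at the command line is lost on reboot, and the DataNode transfer port is configurable (dfs.datanode.address in hdfs-site.xml; 50010 is the Hadoop 2.x default). A small sketch that builds the rule from the port so the same line can be reused if the port differs; the persistence command shown in the comment is the CentOS 6-era one:

```shell
# Build the ACCEPT rule for the HDFS DataNode transfer port.
# 50010 is the Hadoop 2.x default (dfs.datanode.address); adjust if changed.
DN_PORT=50010
RULE="-A INPUT -p tcp --dport ${DN_PORT} -j ACCEPT"
echo "iptables ${RULE}"

# To apply and keep the rule across reboots (CentOS 6-style iptables):
#   iptables ${RULE}
#   service iptables save    # writes /etc/sysconfig/iptables
```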
