This article describes how to resolve an HDFS permission problem that appeared after restarting Hadoop. It is shared here as a practical reference.
After restarting the Hadoop cluster, debugging the HDFS API from Eclipse produced the following error:
[WARNING] java.lang.NullPointerException
    at org.conan.kafka.HdfsUtil.batchWrite(HdfsUtil.java:50)
    at org.conan.kafka.SingleTopicConsumer.run(SingleTopicConsumer.java:144)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Line 50 of HdfsUtil.java is:
os.write(buff, 0, buff.length);
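The article does not show the rest of HdfsUtil.batchWrite. The following is only a minimal sketch of what such a helper might look like, with all class, method, and path names assumed; it illustrates how a swallowed permission error during stream creation can leave the output stream null, so that the write on line 50 fails with the NullPointerException shown above.

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUtil {

    // Hypothetical sketch, not the author's actual code.
    public static void batchWrite(String hdfsUri, String file, byte[] buff) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(hdfsUri), conf);
        FSDataOutputStream os = null;
        try {
            Path path = new Path(file);
            // If create/append fails here (for example with an HDFS permission error)
            // and the exception is only logged, os stays null.
            os = fs.exists(path) ? fs.append(path) : fs.create(path);
        } catch (IOException e) {
            e.printStackTrace();
        }
        // With os still null, this line throws the NullPointerException seen above.
        os.write(buff, 0, buff.length);
        os.close();
    }
}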
The error occurred during the write. After fiddling with it for a while without finding the cause, I began to suspect a permission problem (the debugging machine's user is not an administrator, while the Hadoop cluster runs as root), even though I had previously set the permissions of the HDFS folder to 777. I tried again anyway: after running hdfs dfs -chmod -R 777 /input and debugging once more, everything worked.
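If running the shell command is inconvenient, the same change can be made through the HDFS Java API. This is only a sketch under assumptions: the NameNode address hdfs://namenode:9000 is a placeholder, the /input path is taken from the command above, and FileSystem.setPermission is not recursive, so subdirectories would need to be walked separately to match -R.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class ChmodInput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; replace with the cluster's actual fs.defaultFS.
        conf.set("fs.defaultFS", "hdfs://namenode:9000");
        FileSystem fs = FileSystem.get(conf);
        // Equivalent of: hdfs dfs -chmod 777 /input
        // (setPermission is not recursive; list and recurse for -R behavior.)
        fs.setPermission(new Path("/input"), new FsPermission((short) 0777));
        fs.close();
    }
}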
So the question is: does an HDFS file (or folder) need its permissions modified again after every Hadoop cluster restart (without a namenode format)?
Based on this experience, it appears so.
Thank you for reading! This concludes the article on how to solve the HDFS permission problem after a Hadoop restart. I hope it is helpful.