
Spark local mode reports insufficient permissions on /tmp/hive (HDFS)

2025-04-04 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/03 Report --

Spark version 2.0

When Spark starts in local mode, it sometimes reports an insufficient-permissions error for /tmp/hive on HDFS, even though we did not put an hdfs-site.xml configuration file in the project and Spark's files should be stored on the local machine. Why, then, this error? Stranger still, some colleagues hit the error and some did not, all with the same configuration.
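To make the puzzle concrete: the path in the error message is not an HDFS path at all. A minimal sketch of where the scratch folder actually lives, assuming the layout observed later in this article (the function name and `drive_root` parameter are illustrative, not Spark API):

```python
import getpass
import os

def expected_scratch_dir(drive_root):
    """Compute where local-mode Spark would create its scratch folder.

    Illustrative assumption based on this article: on Windows, Spark
    creates <drive>/tmp/hive/<user name> on some local drive, so the
    "/tmp/hive on HDFS" wording in the error is misleading; the path
    that actually failed to be created is local.
    """
    return os.path.join(drive_root, "tmp", "hive", getpass.getuser())
```

So when the error appears, the first place to look is `<some drive>:\tmp\hive`, not the cluster.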

The error first appeared on 2016-10-31. I had never seen it in local tests before, but that morning two of my colleagues could not start the project, and when I tried it myself it would not start either.

In the end it turned out that the error Spark reports is not accurate. On Windows, Spark creates a "tmp/hive/<user name>" folder on one of the drives and stores some temporary files there. Note that these are not data files; the location of data files is configured separately. For the colleagues who saw the error, only the "tmp/hive" folder had been created, and the user-name subfolder underneath had not been created successfully, so we inferred that Spark failed trying to create that folder again on the next startup. We therefore deleted the "tmp/hive" folder and started again, and Spark ran normally; after a successful run it creates the "tmp/hive/<user name>" folder.
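The workaround above can be sketched as a small cleanup step: if tmp/hive exists but the per-user subfolder does not, the previous startup failed partway, and deleting tmp/hive lets the next startup recreate the whole path. This is a sketch under the assumptions stated in this article; the function name and `root` parameter are illustrative, not Spark API:

```python
import getpass
import os
import shutil

def clean_stale_hive_scratch(root):
    """Delete a half-created tmp/hive folder so Spark can recreate it.

    Assumed layout (from this article's observations): <root>/tmp/hive
    should contain a per-user subfolder after a successful startup.
    If tmp/hive exists without it, treat it as a stale leftover.
    """
    hive_dir = os.path.join(root, "tmp", "hive")
    user_dir = os.path.join(hive_dir, getpass.getuser())
    if os.path.isdir(hive_dir) and not os.path.isdir(user_dir):
        shutil.rmtree(hive_dir)  # stale leftover: remove the whole tree
        return True              # cleanup performed
    return False                 # healthy layout (or nothing there)
```

Running this against the drive root that Spark picked, then restarting Spark, matches the manual fix described above.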

It is worth mentioning that this problem had not occurred in earlier local tests, when the folder was created under the E drive.

When Monday, 2016-10-31 came, Spark chose the C drive instead, even though the code, configuration, and environment had not changed. Why the drive changed is still unresolved.

This half-created folder must come from the first failed Spark startup (only tmp/hive was created), and tmp/hive must be deleted before the full path can be created again. The exact mechanism is not clear yet.

One more note: after we moved to Spark 2.1, the problem never occurred again. It may be a Spark 2.0 pitfall.




© 2024 shulou.com SLNews company. All rights reserved.
