This article introduces how to fix the Hadoop error "Name node is in safe mode". Many people run into this situation in practice, so let the editor walk you through how to handle it. I hope you read it carefully and come away with something!
When you run a Hadoop job, the following error is sometimes reported:
org.apache.hadoop.dfs.SafeModeException: Cannot delete /user/hadoop/input. Name node is in safe mode
This error is quite common (at least it was for me). Let's analyze it by taking the message literally:
Name node is in safe mode
This means the Hadoop NameNode is in safe mode.
So what is Hadoop's safe mode?
When the distributed file system starts, it begins in safe mode. While the file system is in safe mode, its contents may not be modified or deleted until safe mode ends. The main purpose of safe mode is to check the validity of the data blocks on each DataNode at startup, and to replicate or delete blocks as needed according to policy. Safe mode can also be entered at runtime via a command. In practice, if the system has just started and you try to modify or delete files, you will see the error that safe mode does not allow modification; often you only need to wait a short while.
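For example, safe mode can be toggled and inspected from the command line (a minimal sketch, run from the Hadoop installation directory as in the shell sessions below; the ON/OFF wording matches the 0.20-era dfsadmin output):

bin/hadoop dfsadmin -safemode enter
bin/hadoop dfsadmin -safemode get
# prints "Safe mode is ON" while the NameNode is in safe mode, "Safe mode is OFF" otherwise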
Now that the message is clear, the question is how to solve it: can we take Hadoop out of safe mode directly, without waiting? The answer is yes. Just run the following from the Hadoop installation directory:
bin/hadoop dfsadmin -safemode leave
That is, force Hadoop out of safe mode, and the problem is solved.
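To confirm the command took effect, you can query the state again right afterwards (a sketch of typical usage, assuming the same installation directory):

bin/hadoop dfsadmin -safemode leave
bin/hadoop dfsadmin -safemode get
# should now print "Safe mode is OFF"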
The "ctrl+c" operation was previously used during hadoop execution
The "Name node is in safe mode" prompt appears when you use hadoop again:
root@v-jiwan-ubuntu-0:~/hadoop/hadoop-0.20.2# bin/hadoop fs -put conf input
put: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/root/input/conf. Name node is in safe mode.
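If you would rather wait out safe mode than force it off, dfsadmin also offers a blocking wait (a sketch of typical usage; wait is listed with the other -safemode values near the end of this article):

bin/hadoop dfsadmin -safemode wait
bin/hadoop fs -put conf input
# the put runs only after the NameNode has left safe mode on its own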
--
I have been testing Hadoop recently, and a job unexpectedly got stuck partway through the reduce phase. The only way out was Ctrl+C, but that brought its own problem XD: after stopping Hadoop and starting it again, when I went to delete the data in DFS, I found that the name node was in safe mode and there was no way to delete anything! It took a long time to find the answer:
bin/hadoop dfsadmin -safemode leave can remove the safe mode.

Some background on safe mode: the NameNode first enters safe mode when it starts. If the proportion of blocks missing from the DataNodes exceeds 1 - dfs.safemode.threshold.pct, the system stays in safe mode, that is, in a read-only state. dfs.safemode.threshold.pct (default 0.999f) means that at HDFS startup, the NameNode leaves safe mode only when the number of blocks reported by the DataNodes reaches 0.999 times the number of blocks recorded in the metadata; until then it remains read-only. If the value is set above 1, HDFS stays in safe mode permanently.

The following line, excerpted from a NameNode startup log, shows the reported-block ratio of 1.0000 reaching the 0.9990 threshold:

The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 18 seconds.

There are two ways to leave this safe mode:
1. Lower dfs.safemode.threshold.pct, which defaults to 0.999 (see the configuration sketch at the end of this section).
2. Force the NameNode out with the hadoop dfsadmin -safemode leave command.

(Reference: http://bbs.hadoopor.com/viewthread.php?tid=61&extra=page%3D1)

Safe mode is exited when the minimal replication condition is reached, plus an extension time of 30 seconds. The minimal replication condition is when 99.9% of the blocks in the whole filesystem meet their minimum replication level (which defaults to one, and is set by dfs.replication.min).

In other words, the prerequisite for exiting safe mode is that 99.9% of the blocks in the entire file system (the percentage is configurable via dfs.safemode.threshold.pct) reach the minimum replication level (default 1, configurable via dfs.replication.min).

dfs.safemode.threshold.pct (float, default 0.999): the proportion of blocks in the system that must meet the minimum replication level defined by dfs.replication.min before the NameNode will exit safe mode. Setting this value to 0 or less forces the NameNode not to start in safe mode; setting it to more than 1 means the NameNode never exits safe mode.

Users can operate safe mode through dfsadmin -safemode value. The parameter values are as follows:
enter - enter safe mode
leave - force the NameNode to leave safe mode
get - report whether safe mode is on
wait - wait until safe mode ends
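As mentioned in option 1 above, you can lower the threshold instead of forcing the NameNode out. A minimal sketch for conf/hdfs-site.xml, assuming the 0.20-era property name used in this article (newer Hadoop releases renamed it dfs.namenode.safemode.threshold-pct) and an illustrative value of 0.95:

<property>
  <!-- leave safe mode once 95% of blocks have been reported (illustrative value; the default is 0.999) -->
  <name>dfs.safemode.threshold.pct</name>
  <value>0.95</value>
</property>

Restart the NameNode for the change to take effect.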
This is the end of "how to remove Name node is in safe mode from Hadoop". Thank you for reading. If you want to learn more about the industry, you can follow the website; the editor will keep putting out practical articles for you!