This article explains why an HA cluster built on Hadoop 2.6.4 may fail to switch the namenode automatically after the active namenode goes down, and walks through the cause and the fix. The content is easy to follow; I hope it resolves your doubts.
After setting up the HA cluster, I wanted to test its high availability, so I stopped the active namenode:

hadoop-daemon.sh stop namenode

Alternatively, you can kill the namenode process on that node directly.
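For example, a minimal sketch of killing the process (assuming jps, which ships with the JDK, is on the PATH and lists the process as NameNode):

    # Find the NameNode process id
    jps | grep NameNode
    # Suppose jps printed "12345 NameNode"; then:
    kill -9 12345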
However, checking with hdfs haadmin -getServiceState master1, I found that the standby namenode had not automatically switched to active; it only switched after I manually restarted the namenode that had been killed, which defeats the purpose of high availability.
After searching online for a long time, I found that the cause lies in the dfs.ha.fencing.methods parameter in hdfs-site.xml, which controls how the failed namenode is fenced off so that the other namenode can take over its work. With the default value, sshfence, the cluster cannot switch automatically (explained separately below); the log reports that it cannot connect to the namenode that was killed.
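To see this for yourself, inspect the ZKFC log on the standby node. A minimal sketch, assuming the default log directory and the hadoop-daemon.sh file-naming convention (the exact path and file name depend on your installation):

    # ZKFC logs live under $HADOOP_HOME/logs by default; the file name
    # follows the hadoop-<user>-zkfc-<hostname>.log pattern
    tail -n 100 $HADOOP_HOME/logs/hadoop-*-zkfc-*.log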
Set the following in hdfs-site.xml:

    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>shell(/bin/true)</value>
    </property>
After changing the parameter to the value above, the problem was solved: once the active namenode is stopped, the cluster fails over to the standby namenode automatically.
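A quick way to verify the failover, sketched below. master1 is the service id used in the check earlier; master2 is a hypothetical id for the second namenode, so substitute the ids from your own dfs.ha.namenodes setting:

    # On the active namenode's host, stop it
    hadoop-daemon.sh stop namenode
    # Query both namenodes' HA state; the former standby should now report "active"
    hdfs haadmin -getServiceState master1
    hdfs haadmin -getServiceState master2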
Extended reading: the dfs.ha.fencing.methods parameter

At any given time, only one namenode may be in the active state. During a failover, the standby namenode becomes active, and the original active namenode must not remain active, because two simultaneously active namenodes would cause problems. So for failover you need to configure a method that prevents both namenodes from being active at once; the method can be a Java class or a script.
There are two built-in fencing methods: sshfence and shell.
The sshfence method logs in to the active namenode's host over ssh and kills the namenode process, so you must set up passwordless ssh login and make sure the login user has permission to kill the namenode process.
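A minimal sketch of setting up passwordless ssh for sshfence, assuming the daemons run as a hadoop user and using nn2 as a placeholder host name for the other namenode:

    # Run as the user that starts the hadoop daemons, on each namenode host
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    # Copy the public key to the other namenode host (nn2 is a placeholder)
    ssh-copy-id hadoop@nn2
    # Verify that login now works without a password
    ssh hadoop@nn2 hostname

Hadoop also needs to know which private key to use, via the dfs.ha.fencing.ssh.private-key-files property in hdfs-site.xml.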
The shell method runs a shell script or command to prevent the two namenodes from being active at the same time; you have to write the script yourself.
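For illustration, a hypothetical custom fence script; the path, argument, and remote kill command are all assumptions, and a stricter script would verify the old active namenode is really down before exiting:

    #!/bin/bash
    # /opt/scripts/fence-namenode.sh -- hypothetical example, wired up in
    # hdfs-site.xml as, e.g.: shell(/opt/scripts/fence-namenode.sh nn1.example.com)
    # Exit code 0 tells Hadoop that fencing succeeded.
    TARGET_HOST="$1"
    # Best-effort: kill any NameNode JVM on the old active host
    # (the process command line contains namenode.NameNode)
    ssh "$TARGET_HOST" "pkill -9 -f namenode.NameNode"
    # Report success even if the host was unreachable, mirroring the
    # shell(/bin/true) idea described below.
    exit 0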
Note that QJM mode has a fencing capability of its own: it ensures that only one namenode can write edits to the journalnodes, so strictly speaking no fencing method is needed. However, when a failover occurs, the original active namenode may still be serving read requests from clients, so clients may read stale data (the new active namenode's data is kept up to date in real time). It is therefore still recommended to configure a fencing method. If you really do not want one, you can configure a method that always returns success, such as shell(/bin/true). This has no actual fencing effect; it simply makes the fencing step report success, which keeps failover working even when real fencing would fail and thus improves the availability of the system.
That covers everything in "hadoop2.6.4 cannot automatically switch namenode after building an HA cluster". Thank you for reading! I hope the content shared here helps you.