2025-04-01 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article covers some common errors encountered when running Hadoop, together with their causes and fixes. The content is kept simple and clear; I hope it helps you resolve your own doubts as you study "what are the common mistakes in hadoop?"
1. Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
2016-01-05 23:03:15,329 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [192.168.10.31:8485, 192.168.10.32:8485, 192.168.10.33:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.10.31:8485: Call From bdata4/192.168.10.34 to bdata1:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.10.33:8485: Call From bdata4/192.168.10.34 to bdata3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.10.32:8485: Call From bdata4/192.168.10.34 to bdata2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
    at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createNewUniqueEpoch(QuorumJournalManager.java:182)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:436)
    at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624)
    at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
    at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1394)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1151)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1658)
    at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
    at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1536)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1335)
    at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
    at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
2016-01-05 23:03:15,329 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
Cause of the error:
When we execute start-dfs.sh, the default startup order is namenode > datanode > journalnode > zkfc. If the JournalNodes are not started on the same machines as the NameNode, network delay can easily leave the NameNode unable to connect to the JournalNodes, so the election cannot complete. The newly started active NameNode then aborts, leaving only a standby. Although the NameNode does retry while waiting for the JournalNodes to come up, the number of retries is limited; on a slow network the retries can be exhausted before the JournalNodes become reachable, and the startup fails.
A: Start the primary NameNode manually after the JournalNodes are up, so that the network delay spent waiting for them does not count against the retries. Once both NameNodes can reach the JournalNodes and the election completes, the failure no longer occurs.
B: Start the JournalNodes first, then run start-dfs.sh.
C: Raise the NameNode-to-JournalNode retry limit (count or interval) so that a normal startup delay plus network delay is tolerated.
Add the following to hdfs-site.xml. By default the NameNode probes the JournalNodes 10 times, once every 1000 ms, so on a poor network the count needs to be raised; here it is set to 30:

<property>
  <name>ipc.client.connect.max.retries</name>
  <value>30</value>
</property>
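Option B above can be sketched as a small startup script. The host names bdata1-bdata3 are taken from the log excerpt; the $HADOOP_HOME layout, passwordless ssh, and the use of hadoop-daemon.sh are assumptions you should adapt to your own cluster.

```shell
# Start the three JournalNodes first, so they are already listening
# on port 8485 before any NameNode tries to recover its edit-log
# segments and run the election.
for jn in bdata1 bdata2 bdata3; do
  ssh "$jn" "$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode"
done

# With the journal quorum reachable, the normal startup no longer
# races against the JournalNodes coming up.
"$HADOOP_HOME/sbin/start-dfs.sh"
```

Because the JournalNodes are up before start-dfs.sh runs, the NameNode's limited retry budget is never spent waiting for them.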
2. org.apache.hadoop.security.AccessControlException: Permission denied
Modify hdfs-site.xml on the master node by adding the following:

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
This disables the permission check. It solves the error I hit when configuring Eclipse on a Windows machine to connect to the Hadoop server: after setting up the Map/Reduce connection, operations were rejected with Permission denied.
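Note that on Hadoop 2.x and later the property was renamed, so if dfs.permissions has no effect, the equivalent setting is (to the best of my knowledge; verify against your version's hdfs-default.xml):

```xml
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
```

Disabling permission checks is acceptable on a development sandbox, but on a shared cluster it is better to fix the HDFS directory ownership instead.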
3. Running a job reports: [org.apache.hadoop.security.ShellBasedUnixGroupsMapping]-[WARN] got exception trying to get groups for user bdata
Modify hdfs-site.xml on the master node by adding the following:

<property>
  <name>dfs.web.ugi</name>
  <value>bdata,supergroup</value>
</property>
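For what it's worth, dfs.web.ugi has since been deprecated; on newer Hadoop versions the static web-UI user is configured in core-site.xml instead (property name from core-default.xml; the value here mirrors the bdata user above):

```xml
<property>
  <name>hadoop.http.staticuser.user</name>
  <value>bdata</value>
</property>
```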
Those are all the contents of this article on "what are the common mistakes in hadoop?" Thank you for reading! I hope the content shared here has been helpful; if you want to learn more, you are welcome to follow the industry information channel.