2025-03-05 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
After running start-dfs.sh, all processes started successfully:
Master:
65456 Jps
64881 NameNode
65057 DataNode
7380 NodeManager
65276 SecondaryNameNode
Slave:
3607 DataNode
7380 NodeManager
3675 Jps
Under the Hadoop configuration directory, the slaves file is set as follows:
Master
Slave1
Slave2
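Each hostname listed in the slaves file must resolve on every node; a missing /etc/hosts entry is a common reason a DataNode cannot register. A minimal check, assuming the hostnames Master, Slave1, and Slave2 taken from the file above:

```shell
# Check that each hostname from the slaves file resolves locally.
# The hostname list is taken from the slaves file shown above.
for host in Master Slave1 Slave2; do
    if getent hosts "$host" >/dev/null; then
        echo "$host resolves"
    else
        echo "$host does NOT resolve - add it to /etc/hosts"
    fi
done
```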
netstat -anp | grep 9000
tcp  0  0  192.168.1.200:9000   0.0.0.0:*            LISTEN       64881/java
tcp  0  0  192.168.1.200:9000   192.168.1.200:42846  ESTABLISHED  64881/java
tcp  0  0  192.168.1.200:42853  192.168.1.200:9000   TIME_WAIT    -
tcp  0  0  192.168.1.200:42846  192.168.1.200:9000   ESTABLISHED  65057/java
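The netstat output shows the NameNode listening on 192.168.1.200:9000 rather than 127.0.0.1, so the slaves should be able to reach it unless something filters the port. A quick reachability sketch to run from a slave, using bash's /dev/tcp (the hostname master and port 9000 match this cluster's configuration):

```shell
# Test whether the NameNode RPC port is reachable from this machine.
# "master" and 9000 come from the cluster setup described above.
if timeout 2 bash -c 'echo > /dev/tcp/master/9000' 2>/dev/null; then
    echo "port 9000 on master is reachable"
else
    echo "port 9000 on master is NOT reachable (firewall or wrong bind address?)"
fi
```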
Problem description:
On the http://master:50070/ monitoring page, Live Nodes is 1.
Only the master's DataNode shows up; the two slaves have DataNode processes running but cannot connect to the master.
Also, no current directory is generated under dfs/data on the slaves.
View the log as follows:
2016-11-08 13 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.0.100:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
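The retry log matches the symptom that the slaves never register: a DataNode that has successfully registered writes a current directory under its data directory, so its absence confirms registration never happened. A hedged check to run on a slave (the default path below is an assumption; substitute the value of dfs.datanode.data.dir from your hdfs-site.xml):

```shell
# On a slave: a registered DataNode creates dfs/data/current.
# The path below is an assumed example; substitute your configured
# dfs.datanode.data.dir from hdfs-site.xml.
DATA_DIR="${HADOOP_DATA_DIR:-/usr/local/hadoop/dfs/data}"
if [ -d "$DATA_DIR/current" ]; then
    echo "DataNode initialized: $DATA_DIR/current exists"
else
    echo "no current directory under $DATA_DIR - the DataNode never registered"
fi
```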
The cause: on CentOS 7, disabling SELinux and iptables is not enough; the dynamic firewall (firewalld) must also be turned off.
Turn off the firewall (very important):
# systemctl status firewalld.service    # check the status of the firewall
# systemctl stop firewalld.service      # turn off the firewall
# systemctl disable firewalld.service   # permanently turn off the firewall
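The article also says SELinux should be disabled alongside firewalld but gives no commands for it. A companion sketch for CentOS 7, assuming the standard config path and run as root:

```shell
# Disable SELinux immediately (lasts until reboot):
setenforce 0
# Make it permanent (standard CentOS 7 path; edits /etc/selinux/config):
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```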