2025-01-18 Update From: SLTechnology News&Howtos
Dynamically adding a DataNode (hostname: node14.cn)
Shell > hadoop-daemon.sh start datanode
Shell > jps    # check whether the DataNode process is running
The DataNode process exits immediately after starting. Checking the log reveals the following:
2018-04-15 00:08:43 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-04-15 00:08:43 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2018-04-15 00:08:43,673 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-04-15 00:08:43 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-04-15 00:08:43 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2018-04-15 00:08:43,839 WARN org.apache.hadoop.fs.FileSystem: "node11.cn:9000" is a deprecated filesystem name. Use "hdfs://node11.cn:9000/" instead.
2018-04-15 00:08:44 INFO org.apache.hadoop.hdfs.DFSUtil: Http request log for http.requests.namenode is not defined
2018-04-15 00:08:44 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-04-15 00:08:44,374 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2018-04-15 00:08:44,377 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2018-04-15 00:08:44,411 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: node11.cn:9001
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:828)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:887)
        ... 8 more
2018-04-15 00:08:44,414 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2018-04-15 00:08:44 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2018-04-15 00:08:44 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2018-04-15 00:08:44 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: node11.cn:9001
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
        ... (same stack trace as above, entered via NameNode.main(NameNode.java:1512))
Caused by: java.net.BindException: Cannot assign requested address
        ... (same cause as above)
2018-04-15 00:08:44 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-04-15 00:08:44 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node14.cn/192.168.74.114
************************************************************/
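The decisive line is "Caused by: java.net.BindException: Cannot assign requested address": the daemon on node14 is trying to bind an HTTP listener to node11.cn:9001, an address that does not belong to node14, which typically points at configuration copied over from another node. A minimal Python sketch reproduces the same kernel-level failure (192.0.2.1 is a reserved TEST-NET address that no real host owns, used here purely for illustration):

```python
import socket

# A listening socket can only be bound to an IP address assigned to one
# of this machine's own interfaces. Asking the kernel to bind someone
# else's address fails with EADDRNOTAVAIL -- the same "Cannot assign
# requested address" that Hadoop wraps in a java.net.BindException.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.bind(("192.0.2.1", 9001))  # an address this host does not own
except OSError as exc:
    print("bind failed:", exc)
finally:
    sock.close()
```

So despite the "Port in use" wording at the top of the trace, the port is not occupied on node14; the process is being asked to listen on an address that is not local.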
Solution:
Delete the contents of the dfs data directory on the new node, then re-run the start commands:
Shell > rm -rf dfs/
Shell > hadoop-daemon.sh start datanode
Shell > yarn-daemon.sh start nodemanager
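Before restarting the daemons, it can also be worth checking whether anything is actually listening on a suspect port. A small sketch (the `port_in_use` helper is ours for illustration, not a Hadoop tool):

```python
import socket

def port_in_use(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections at host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return sock.connect_ex((host, port)) == 0

# Demo: open a throwaway local listener, then probe it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_in_use("127.0.0.1", port))    # True: the listener is up
listener.close()
```

If the probe reports the port as busy on the machine named in the error, tools such as `jps` can identify the stale daemon holding it; if the port is free, the "Port in use" message is likely masking an address-binding problem as in the log above.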
Refresh the node list on the NameNode:
Shell > hdfs dfsadmin -refreshNodes
Shell > start-balancer.sh
The DataNode has been added successfully.
Redistribute existing data onto the new DataNode host:
Shell > hadoop balancer -threshold 10    # the threshold parameter controls disk utilization balance: the smaller the value, the more evenly disk usage is balanced across nodes.
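The threshold is interpreted as percentage points around the cluster's mean disk utilization: the balancer moves blocks until every DataNode's usage is within that many points of the average. A simplified sketch of the selection rule (the function and sample numbers are illustrative, not Hadoop's actual implementation):

```python
def balancer_targets(node_usage, threshold=10.0):
    """Classify nodes the way the HDFS balancer reasons about them
    (simplified): usage further than `threshold` percentage points from
    the cluster average marks a node as over- or under-utilized."""
    avg = sum(node_usage.values()) / len(node_usage)
    over = [n for n, u in node_usage.items() if u > avg + threshold]
    under = [n for n, u in node_usage.items() if u < avg - threshold]
    return avg, over, under

# A freshly added, empty node14 sits far below the cluster average, so
# the balancer moves blocks onto it until it is within 10 points of avg.
usage = {"node11": 60.0, "node12": 60.0, "node13": 60.0, "node14": 0.0}
print(balancer_targets(usage, threshold=10.0))
```

With these sample numbers the average is 45%, so the three 60% nodes are over-utilized (above 55%) and the empty node14 is under-utilized (below 35%); a smaller threshold would keep the balancer working until the spread is tighter.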