2025-04-04 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
1. Starting YARN reports the error: JAVA_HOME is not set and could not be found.

Set JAVA_HOME in /etc/hadoop/hadoop-env.sh using an absolute path:

export JAVA_HOME=$JAVA_HOME              # wrong: it cannot be set this way
export JAVA_HOME=/usr/java/jdk1.6.0_45   # correct: use the absolute JDK path
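If you are unsure of the absolute path, it can be derived from the location of the `java` binary rather than typed by hand. A minimal sketch, assuming the JDK path from this article; `resolve_java_home` is a hypothetical helper, not part of Hadoop:

```shell
#!/bin/sh
# Sketch: derive an absolute JAVA_HOME from a java binary's path.
# resolve_java_home is a hypothetical helper; the JDK path below is
# the example used in this article.
resolve_java_home() {
    # strip the trailing /bin/java component
    printf '%s\n' "${1%/bin/java}"
}

# On a live system the argument would come from: readlink -f "$(command -v java)"
resolve_java_home /usr/java/jdk1.6.0_45/bin/java
# prints: /usr/java/jdk1.6.0_45
```

The resulting value is what goes after `export JAVA_HOME=` in hadoop-env.sh.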
2. The following warning-related messages appear in the NameNode log when starting Hadoop after installation:

2016-05-20 18:45:11 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2016-05-20 18:45:11 INFO org.mortbay.log: jetty-6.1.26
2016-05-20 18:45:11 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2016-05-20 18:45:11 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2016-05-20 18:45:11 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2016-05-20 18:45:11 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000
2016-05-20 18:45:11 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2016 May 20 18:45
2016-05-20 18:45:11 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2016-05-20 18:45:11 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2016-05-20 18:45:11 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
Cause and solution

The warning arises because the path value of the dfs.namenode.name.dir parameter in hdfs-site.xml lacks the file:// prefix. The entries should look like this:

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///hadoop/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///hadoop/dfs/data</value>
</property>
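A quick way to catch this misconfiguration before starting the NameNode is to check whether each directory value carries the file:// scheme. A minimal sketch; `check_scheme` is a hypothetical helper, not a Hadoop tool:

```shell
#!/bin/sh
# Sketch: flag dfs.*.dir values that lack the file:// scheme prefix.
# check_scheme is a hypothetical helper, not part of Hadoop.
check_scheme() {
    case "$1" in
        file://*) echo "ok" ;;
        *)        echo "missing file:// prefix" ;;
    esac
}

check_scheme "file:///hadoop/dfs/name"   # prints: ok
check_scheme "/hadoop/dfs/data"          # prints: missing file:// prefix
```

On a live cluster the values to check could be pulled from hdfs-site.xml (or `hdfs getconf`) and fed through the helper one at a time.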
3. Every Hadoop command that operates on HDFS reports the following warning:

WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Cause and solution: the warning appears because libhadoop.so is missing from /hadoop/lib/native/ in the Hadoop installation directory, or its build (32/64-bit) does not match the operating system. Download hadoop-native-64-2.6.0 and extract it into /hadoop/lib/native/, replacing the original files.
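To confirm whether the 32/64-bit mismatch is the cause, compare the OS word size with what `file` reports for libhadoop.so. A minimal sketch, assuming this article's /hadoop layout; `classify_bits` is a hypothetical helper:

```shell
#!/bin/sh
# Sketch: classify a shared library's word size from `file` output.
# classify_bits is a hypothetical helper, not part of Hadoop.
classify_bits() {
    case "$1" in
        *64-bit*) echo 64 ;;
        *32-bit*) echo 32 ;;
        *)        echo unknown ;;
    esac
}

# On a live system you would compare:
#   uname -m                               # x86_64 means a 64-bit OS
#   file /hadoop/lib/native/libhadoop.so   # e.g. "ELF 64-bit LSB shared object"
classify_bits "ELF 64-bit LSB shared object, x86-64"
# prints: 64
```

If the OS is 64-bit but the library classifies as 32, replacing the native libraries as described above resolves the warning.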