This article walks through the "Hadoop 2.7.1 distributed installation configuration process". Many people run into difficulties during the actual steps, so this walkthrough shows how to handle the common situations. I hope you read carefully and learn something useful!
Environment overview
VirtualBox 5 (three virtual machines), CentOS 7, Hadoop 2.7.1
1. Basic installation and configuration
First complete the preparation steps for the Hadoop 2.7.1 distributed installation.
Note: as of Hadoop 2.5.1 there is no longer any need to compile a 64-bit libhadoop.so.1.0.0 yourself; earlier versions of Hadoop shipped with a 32-bit library only. If you need to build the 64-bit library for an older release, see the Hadoop 2.4.1 distributed installation notes.
2. Download and extract the Hadoop 2.7.1 archive
Download the Hadoop archive from the official Apache Hadoop website and extract it to the local directory /home/wukong/local/hadoop-2.7.1/. Do this on the machine you intend to use as the namenode, with wget or any other means; for the download and decompression commands, see the Linux common commands article.
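A minimal sketch of the download and extraction, assuming the release is fetched from the Apache archive mirror (the URL is illustrative; verify it against the official download page):

cd /home/wukong/local/
# download the Hadoop 2.7.1 release archive
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
# extract to /home/wukong/local/hadoop-2.7.1/
tar -xzf hadoop-2.7.1.tar.gz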
3. Required configuration
Seven files need to be edited, all located in /home/wukong/local/hadoop-2.7.1/etc/hadoop. They are described below.
hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/opt/jdk1.7.0_79

# Optional. The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=500
export HADOOP_NAMENODE_INIT_HEAPSIZE="100"
yarn-env.sh
# some Java parameters
export JAVA_HOME=/opt/jdk1.7.0_79
if [ "$JAVA_HOME" != "" ]; then
  #echo "run java in $JAVA_HOME"
  JAVA_HOME=$JAVA_HOME
fi
if [ "$JAVA_HOME" = "" ]; then
  echo "Error: JAVA_HOME is not set."
  exit 1
fi
JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx600m  # the default heap max is 1000m; my VM does not have that much memory, so it is reduced
slaves
# List your slave nodes here; if there is more than one, put one hostname per line:
bd02
bd03
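Because start-dfs.sh reaches each slave over SSH, it is worth confirming passwordless SSH from the namenode to every host listed here before continuing. A minimal check (hostnames taken from the slaves file above):

for h in bd02 bd03; do
  # should print the remote hostname without asking for a password
  ssh wukong@$h hostname
done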
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bd01:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/wukong/local/hdp-data/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
  </property>
</configuration>
Note that the hdp-data directory does not exist by default; you need to create it yourself, for example:
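# on each node, path taken from hadoop.tmp.dir above
mkdir -p /home/wukong/local/hdp-data/tmp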
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>bd01:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/wukong/local/hdp-data/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/wukong/a_usr/hdp-data/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
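Like the tmp directory, the name and data directories configured above must exist before formatting. A minimal sketch (paths taken from the config; run each command on the node that uses it):

# on the namenode (bd01)
mkdir -p /home/wukong/local/hdp-data/name
# on each datanode (bd02, bd03)
mkdir -p /home/wukong/a_usr/hdp-data/data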
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>bd01:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>bd01:19888</value>
  </property>
</configuration>
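Note: Hadoop 2.7.1 ships only mapred-site.xml.template by default; if mapred-site.xml does not exist yet, create it from the template before adding the properties above:

cd /home/wukong/local/hadoop-2.7.1/etc/hadoop
cp mapred-site.xml.template mapred-site.xml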
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>bd01:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>bd01:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>bd01:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>bd01:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>bd01:8088</value>
  </property>
</configuration>
4. Copy Hadoop to all nodes
For the remote copy commands, see the Linux common commands article; a sketch follows below.
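A minimal sketch using scp, assuming passwordless SSH and the same directory layout on every node:

# from bd01, copy the configured Hadoop tree to each slave
scp -r /home/wukong/local/hadoop-2.7.1 wukong@bd02:/home/wukong/local/
scp -r /home/wukong/local/hadoop-2.7.1 wukong@bd03:/home/wukong/local/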
5. Format HDFS
[wukong@bd01 hadoop-2.7.1]$ hdfs namenode -format
When the command completes without throwing an exception and you see the following line, the format succeeded:
15/07/31 10:51:09 INFO common.Storage: Storage directory /home/wukong/local/hdp-data/name has been successfully formatted.
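As a quick sanity check, the name directory should now contain the initial metadata (file names are typical for this release and may vary):

ls /home/wukong/local/hdp-data/name/current
# typically: VERSION, fsimage_0000000000000000000, fsimage_...md5, seen_txid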
6. Start DFS
[wukong@bd01 ~]$ start-dfs.sh
Starting namenodes on [bd01]
bd01: starting namenode, logging to /home/wukong/local/hadoop-2.7.1/logs/hadoop-wukong-namenode-bd01.out
bd02: starting datanode, logging to /home/wukong/local/hadoop-2.7.1/logs/hadoop-wukong-datanode-bd02.out
bd03: starting datanode, logging to /home/wukong/local/hadoop-2.7.1/logs/hadoop-wukong-datanode-bd03.out
Starting secondary namenodes [bd01]
bd01: starting secondarynamenode, logging to /home/wukong/local/hadoop-2.7.1/logs/hadoop-wukong-secondarynamenode-bd01.out
[wukong@bd01 ~]$
Use jps and the logs to check whether startup succeeded. jps lists the Java processes running on a machine; normally the master should show NameNode and SecondaryNameNode, and each slave should show a DataNode.
[wukong@bd01 hadoop]$ jps
5224 Jps
5074 SecondaryNameNode
4923 NameNode
[wukong@bd02 ~]$ jps
2307 Jps
2206 DataNode
[wukong@bd03 ~]$ jps
2298 Jps
2198 DataNode
7. Start YARN
[wukong@bd01 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/wukong/local/hadoop-2.7.1/logs/yarn-wukong-resourcemanager-bd01.out
bd03: starting nodemanager, logging to /home/wukong/local/hadoop-2.7.1/logs/yarn-wukong-nodemanager-bd03.out
bd02: starting nodemanager, logging to /home/wukong/local/hadoop-2.7.1/logs/yarn-wukong-nodemanager-bd02.out
[wukong@bd01 ~]$
Verify startup success via jps and the logs:
[wukong@bd01 ~]$ jps
5830 ResourceManager
6106 Jps
5074 SecondaryNameNode
4923 NameNode
[wukong@bd01 ~]$
[wukong@bd02 ~]$ jps
4615 Jps
2206 DataNode
4502 NodeManager
[wukong@bd02 ~]$
[wukong@bd03 ~]$ jps
4608 Jps
4495 NodeManager
2198 DataNode
[wukong@bd03 ~]$
8. Possible problems
8.1. JAVA_HOME cannot be found when starting DFS
[wukong@bd01 ~]$ start-dfs.sh
Starting namenodes on [bd01]
The authenticity of host 'bd01 (192.168.1.21)' can't be established.
ECDSA key fingerprint is af:96:74:e1:41:ec:af:ec:d8:8e:df:cd:99:61:33:0d.
Are you sure you want to continue connecting (yes/no)? yes
bd01: Warning: Permanently added 'bd01,192.168.1.21' (ECDSA) to the list of known hosts.
bd01: Error: JAVA_HOME is not set and could not be found.
bd03: Error: JAVA_HOME is not set and could not be found.
bd02: Error: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [bd01]
bd01: Error: JAVA_HOME is not set and could not be found.
[wukong@bd01 some_log]$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
[wukong@bd01 ~]$
Although java -version works in the interactive shell, start-dfs.sh launches the daemons over non-interactive SSH sessions that do not load the login environment, so JAVA_HOME must be set explicitly in etc/hadoop/hadoop-env.sh on every node (see section 3).
8.2. Configure Hadoop-related environment variables
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
  . ~/.bashrc
fi

# Add the hadoop executable and script directories to PATH
PATH=$PATH:$HOME/local/hadoop-2.7.1/bin:$HOME/local/hadoop-2.7.1/sbin
export PATH
After editing, run source ~/.bash_profile to apply the change to the current shell.
That covers the "Hadoop 2.7.1 distributed installation configuration process". Thank you for reading. If you want to learn more about related topics, keep following this site for more practical articles.