1: Install the packages
http://hadoop.apache.org/releases.html
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
hadoop-2.6.4 (binary)
jdk-8u102-linux-x64.tar.gz
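If the packages are not already on hand, they can be fetched roughly as follows (a sketch, not in the original; the Apache archive path is an assumption, and the Oracle JDK normally has to be downloaded manually from the page above because it sits behind a license agreement):
[root@master ~]# wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.4/hadoop-2.6.4.tar.gz
# jdk-8u102-linux-x64.tar.gz: download it manually from the Oracle page and copy it to /root/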
2: Node information
CentOS 7 x 3
master 10.10.0.115
slave1 10.10.0.116
slave2 10.10.0.117
3: Installation process
3.1 Initialization (abbreviated)
Set the hostname, disable the firewall and SELinux, and so on, on every node.
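Concretely, the abbreviated initialization usually amounts to something like the following on each node (a hedged sketch; the hostnames and IPs come from section 2):
[root@master ~]# hostnamectl set-hostname master       # slave1 / slave2 on the other nodes
[root@master ~]# systemctl stop firewalld; systemctl disable firewalld
[root@master ~]# setenforce 0                          # and set SELINUX=disabled in /etc/selinux/config
[root@master ~]# cat >> /etc/hosts << EOF
10.10.0.115 master
10.10.0.116 slave1
10.10.0.117 slave2
EOF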
3.2 Node mutual trust
[root@master ~]# ssh-keygen
[root@master ~]# cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys
[root@master ~]# scp -r /root/.ssh slave1:/root/
[root@master ~]# scp -r /root/.ssh slave2:/root/
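If the key distribution worked, each slave's hostname should come back without a password prompt (an optional check, not in the original article):
[root@master ~]# for h in slave1 slave2; do ssh $h hostname; done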
3.3 Install the JDK on all nodes
[root@master ~]# tar -zxvf jdk-8u102-linux-x64.tar.gz
[root@master ~]# mkdir /usr/soft
[root@master ~]# mv jdk1.8.0_102 /usr/soft/
[root@master ~]# vim /etc/profile
...
export JAVA_HOME=/usr/soft/jdk1.8.0_102
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
[root@master ~]# source /etc/profile
[root@master ~]# java -version
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
The other nodes are set up the same way.
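Once the JDK is in place on slave1 and slave2, it can be verified from master by calling the binary with its full path, since a non-interactive ssh session does not read /etc/profile (an optional check, not in the original):
[root@master ~]# for h in slave1 slave2; do ssh $h /usr/soft/jdk1.8.0_102/bin/java -version; done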
3.4 Install Hadoop
Install and configure Hadoop on master first, then scp it to the other nodes.
[root@master ~]# tar -zxvf hadoop-2.6.4.tar.gz
[root@master ~]# mv hadoop-2.6.4 /usr/soft/
[root@master ~]# cd /usr/soft/hadoop-2.6.4/
[root@master ~]# mkdir hdfs
[root@master ~]# mkdir hdfs/data
[root@master ~]# mkdir hdfs/name
[root@master ~]# mkdir tmp
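The same directories can be created in a single command with mkdir -p (equivalent shorthand, not in the original):
[root@master ~]# mkdir -p /usr/soft/hadoop-2.6.4/hdfs/{name,data} /usr/soft/hadoop-2.6.4/tmp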
[root@master ~]# cat /usr/soft/hadoop-2.6.4/etc/hadoop/core-site.xml
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://10.10.0.115:9000</value></property>
  <property><name>hadoop.tmp.dir</name><value>file:/usr/soft/hadoop-2.6.4/tmp</value></property>
  <property><name>io.file.buffer.size</name><value>131702</value></property>
</configuration>
[root@master ~]# cat /usr/soft/hadoop-2.6.4/etc/hadoop/hdfs-site.xml
<configuration>
  <property><name>dfs.namenode.name.dir</name><value>file:/usr/soft/hadoop-2.6.4/hdfs/name</value></property>
  <property><name>dfs.datanode.data.dir</name><value>file:/usr/soft/hadoop-2.6.4/hdfs/data</value></property>
  <property><name>dfs.replication</name><value>2</value></property>
  <property><name>dfs.namenode.secondary.http-address</name><value>10.10.0.115:9001</value></property>
  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
</configuration>
[root@master ~]# cp /usr/soft/hadoop-2.6.4/etc/hadoop/mapred-site.xml.template /usr/soft/hadoop-2.6.4/etc/hadoop/mapred-site.xml
[root@master ~]# cat /usr/soft/hadoop-2.6.4/etc/hadoop/mapred-site.xml
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>10.10.0.115:10020</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>10.10.0.115:19888</value></property>
</configuration>
[root@master ~]# cat /usr/soft/hadoop-2.6.4/etc/hadoop/yarn-site.xml
<configuration>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.resourcemanager.address</name><value>10.10.0.115:8032</value></property>
  <property><name>yarn.resourcemanager.scheduler.address</name><value>10.10.0.115:8030</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address</name><value>10.10.0.115:8031</value></property>
  <property><name>yarn.resourcemanager.admin.address</name><value>10.10.0.115:8033</value></property>
  <property><name>yarn.resourcemanager.webapp.address</name><value>10.10.0.115:8088</value></property>
  <property><name>yarn.nodemanager.resource.memory-mb</name><value>768</value></property>
</configuration>
[root@master ~]# cat /usr/soft/hadoop-2.6.4/etc/hadoop/hadoop-env.sh
...
export JAVA_HOME=/usr/soft/jdk1.8.0_102
[root@master ~]# cat /usr/soft/hadoop-2.6.4/etc/hadoop/yarn-env.sh
...
export JAVA_HOME=/usr/soft/jdk1.8.0_102
[root@master ~]# cat /usr/soft/hadoop-2.6.4/etc/hadoop/slaves
10.10.0.116
10.10.0.117
[root@master ~]# scp -r /usr/soft/jdk1.8.0_102 slave1:/usr/soft/
[root@master ~]# scp -r /usr/soft/jdk1.8.0_102 slave2:/usr/soft/
[root@master ~]# scp -r /usr/soft/hadoop-2.6.4 slave1:/usr/soft/
[root@master ~]# scp -r /usr/soft/hadoop-2.6.4 slave2:/usr/soft/
[root@master ~]# /usr/soft/hadoop-2.6.4/bin/hdfs namenode -format
[root@master ~]# /usr/soft/hadoop-2.6.4/sbin/start-all.sh
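After start-all.sh returns, the daemons can be checked with jps; with the configuration above, master should run NameNode, SecondaryNameNode and ResourceManager, while each slave runs DataNode and NodeManager (an optional check, not in the original):
[root@master ~]# jps
[root@master ~]# ssh slave1 /usr/soft/jdk1.8.0_102/bin/jps
[root@master ~]# ssh slave2 /usr/soft/jdk1.8.0_102/bin/jps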
Open http://10.10.0.115:8088/ in a browser (YARN ResourceManager web UI).
Open http://10.10.0.115:50070/ in a browser (HDFS NameNode web UI).
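As a final smoke test, the wordcount example bundled with the Hadoop 2.6.4 distribution can be run against the new cluster (a sketch, not part of the original article):
[root@master ~]# cd /usr/soft/hadoop-2.6.4
[root@master hadoop-2.6.4]# bin/hdfs dfs -mkdir -p /input
[root@master hadoop-2.6.4]# bin/hdfs dfs -put etc/hadoop/core-site.xml /input/
[root@master hadoop-2.6.4]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /input /output
[root@master hadoop-2.6.4]# bin/hdfs dfs -cat /output/part-r-00000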