This article focuses on "CentOS7-64bit compiling Hadoop-2.5.0 and distributed installation steps". The method described here is simple, fast and practical, so let's walk through compiling Hadoop 2.5.0 on 64-bit CentOS 7 and installing it across a small cluster, step by step.
1. System environment description
CentOS 7.0 x64
192.168.1.7  master
192.168.1.8  slave1
192.168.1.9  slave2
192.168.1.10 slave3
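Before starting, it is worth confirming that each machine actually matches this environment; a quick sketch using standard CentOS commands:
# cat /etc/redhat-release    -- should report a CentOS Linux 7.x release
# uname -m                   -- should report x86_64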
2. Preparation before installation

2.1 Turn off the firewall
# systemctl status firewalld.service     -- check the firewall status
# systemctl stop firewalld.service       -- stop the firewall
# systemctl disable firewalld.service    -- disable the firewall permanently

2.2 Check that ssh is installed; install it if it is not
# systemctl status sshd.service          -- check the ssh status
# yum install openssh-server openssh-clients

2.3 Install vim
# yum -y install vim

2.4 Set a static IP address
# vim /etc/sysconfig/network-scripts/ifcfg-eno16777736
BOOTPROTO="static"
ONBOOT="yes"
IPADDR0="192.168.1.7"
PREFIX0="24"
GATEWAY0="192.168.1.1"
DNS1="61.147.37.1"
DNS2="101.226.4.6"
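After saving the file, the network service must pick up the change before the static address is used; one way to apply and verify it (using the interface name from above):
# systemctl restart network          -- re-read the ifcfg files and apply the static address
# ip addr show eno16777736           -- confirm that 192.168.1.7/24 is now assigned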
2.5 Modify the host name
# vim /etc/sysconfig/network
HOSTNAME=master
# vim /etc/hosts
192.168.1.7  master
192.168.1.8  slave1
192.168.1.9  slave2
192.168.1.10 slave3
# hostnamectl set-hostname master    -- on CentOS 7 the old way of changing the host name no longer works; use hostnamectl

2.6 Create a hadoop user
# useradd hadoop    -- create a user named hadoop
# passwd hadoop     -- set a password for the hadoop user

2.7 Configure passwordless ssh login
-- The following operations are performed on master
# su hadoop                                    -- switch to the hadoop user
$ cd ~                                         -- go to the user's home directory
$ ssh-keygen -t rsa -P ''                      -- generate a key pair: /home/hadoop/.ssh/id_rsa and /home/hadoop/.ssh/id_rsa.pub
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    -- append id_rsa.pub to the authorized keys
$ chmod 600 ~/.ssh/authorized_keys             -- fix the permissions
$ su                                           -- switch to the root user
# vim /etc/ssh/sshd_config                     -- edit the ssh configuration file
RSAAuthentication yes                          # enable RSA authentication
PubkeyAuthentication yes                       # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys        # path of the public key file
# su hadoop                                    -- switch back to the hadoop user
$ scp ~/.ssh/id_rsa.pub hadoop@192.168.1.8:~/  -- copy the public key to every slave machine
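Since /etc/ssh/sshd_config was modified, sshd has to be restarted for the new settings to take effect; on CentOS 7:
# systemctl restart sshd.service     -- reload the edited ssh configuration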
-- The following operations are performed on slave1 (repeat them on slave2 and slave3)
# su hadoop                                    -- switch to the hadoop user
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys   -- append the key to the authorization file "authorized_keys"
$ chmod 600 ~/.ssh/authorized_keys             -- fix the permissions
$ su                                           -- switch back to the root user
# vim /etc/ssh/sshd_config                     -- edit the ssh configuration file
RSAAuthentication yes                          # enable RSA authentication
PubkeyAuthentication yes                       # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys        # path of the public key file

3. Install the required software

3.1 Install the JDK
# rpm -ivh jdk-7u67-linux-x64.rpm
Preparing...                ########################################### [100%]
   1:jdk                    ########################################### [100%]
Unpacking JAR files...
        rt.jar...
        jsse.jar...
        charsets.jar...
        tools.jar...
        localedata.jar...
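Before moving on, it does no harm to confirm the JDK installed correctly; a quick check against the install location used in the next step:
# /usr/java/jdk1.7.0_67/bin/java -version    -- should report java version "1.7.0_67"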
# vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
# source /etc/profile    -- make the change take effect

3.2 Install the other required software
# yum install maven svn ncurses-devel gcc* lzo-devel zlib-devel autoconf automake libtool cmake openssl-devel

3.3 Install ant
# tar zxvf apache-ant-1.9.4-bin.tar.gz
# vim /etc/profile
export ANT_HOME=/usr/local/apache-ant-1.9.4
export PATH=$PATH:$ANT_HOME/bin

3.4 Install findbugs
# tar zxvf findbugs-3.0.0.tar.gz
# vim /etc/profile
export FINDBUGS_HOME=/usr/local/findbugs-3.0.0
export PATH=$PATH:$FINDBUGS_HOME/bin

3.5 Install protobuf
# tar zxvf protobuf-2.5.0.tar.gz    -- it must be version 2.5.0, otherwise compiling hadoop will fail
# cd protobuf-2.5.0
# ./configure --prefix=/usr/local
# make && make install

4. Compile the hadoop source code
# tar zxvf hadoop-2.5.0-src.tar.gz
# cd hadoop-2.5.0-src
# mvn package -Pdist,native,docs -DskipTests -Dtar

4.1 Configure the maven central repository (switch to the oschina mirror to speed up downloads)
# vim /usr/share/maven/conf/settings.xml
Add the mirror under <mirrors> and the profile under <profiles>:
<mirror>
  <id>nexus-osc</id>
  <mirrorOf>*</mirrorOf>
  <name>Nexus osc</name>
  <url>http://maven.oschina.net/content/groups/public/</url>
</mirror>
<profile>
  <id>jdk17</id>
  <activation>
    <activeByDefault>true</activeByDefault>
    <jdk>1.7</jdk>
  </activation>
  <properties>
    <maven.compiler.source>1.7</maven.compiler.source>
    <maven.compiler.target>1.7</maven.compiler.target>
    <maven.compiler.compilerVersion>1.7</maven.compiler.compilerVersion>
  </properties>
  <repositories>
    <repository>
      <id>nexus</id>
      <name>local private nexus</name>
      <url>http://maven.oschina.net/content/groups/public/</url>
      <releases><enabled>true</enabled></releases>
      <snapshots><enabled>false</enabled></snapshots>
    </repository>
  </repositories>
  <pluginRepositories>
    <pluginRepository>
      <id>nexus</id>
      <name>local private nexus</name>
      <url>http://maven.oschina.net/content/groups/public/</url>
      <releases><enabled>true</enabled></releases>
      <snapshots><enabled>false</enabled></snapshots>
    </pluginRepository>
  </pluginRepositories>
</profile>

4.2 After the compilation finishes, the build is in /usr/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0
# ./bin/hadoop version
Hadoop 2.5.0
Subversion Unknown -r Unknown
Compiled by root on 2014-09-12T00:47Z
Compiled with protoc 2.5.0
From source with checksum 423dcd5a752eddd8e45ead6fd5ff9a24
This command was run using /usr/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0/share/hadoop/common/hadoop-common-2.5.0.jar
# file lib/native/*
lib/native/libhadoop.a:        current ar archive
lib/native/libhadooppipes.a:   current ar archive
lib/native/libhadoop.so:       symbolic link to `libhadoop.so.1.0.0'
lib/native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x972b31264a1ce87a12cfbcc331c8355e32d0e774, not stripped
lib/native/libhadooputils.a:   current ar archive
lib/native/libhdfs.a:          current ar archive
lib/native/libhdfs.so:         symbolic link to `libhdfs.so.0.0.0'
lib/native/libhdfs.so.0.0.0:   ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x200ccf97f44d838239db3347ad5ade435b472cfa, not stripped

5. Configure hadoop

5.1 Basic operations
# cp -r /usr/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0 /opt/hadoop-2.5.0
# chown -R hadoop:hadoop /opt/hadoop-2.5.0
# vi /etc/profile
export HADOOP_HOME=/opt/hadoop-2.5.0
export PATH=$PATH:$HADOOP_HOME/bin
# su hadoop
$ cd /opt/hadoop-2.5.0
$ mkdir -p dfs/name
$ mkdir -p dfs/data
$ mkdir -p tmp
$ cd etc/hadoop

5.2 Configure all slave nodes
$ vim slaves
slave1
slave2
slave3

5.3 Modify hadoop-env.sh and yarn-env.sh
$ vim hadoop-env.sh / vim yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_67

5.4 Modify core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131702</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/opt/hadoop-2.5.0/tmp</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>
</configuration>
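Once core-site.xml is saved, a quick sanity check (assuming the JAVA_HOME and HADOOP_HOME settings from 3.1 and 5.1 have been sourced) is to ask Hadoop to read a key back, which also catches malformed XML:
$ hdfs getconf -confKey fs.defaultFS    -- should print hdfs://master:9000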
5.5 Modify hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hadoop-2.5.0/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/hadoop-2.5.0/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
5.6 Modify mapred-site.xml
# cp mapred-site.xml.template mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
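Note that the two jobhistory addresses above are only served by the MapReduce JobHistory server, which start-dfs.sh and start-yarn.sh do not start. Once the cluster is up (section 5.9), it can be started separately; a sketch:
$ ./sbin/mr-jobhistory-daemon.sh start historyserver    -- run as the hadoop user from /opt/hadoop-2.5.0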
5.7 Configure yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>768</value>
  </property>
</configuration>
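Every slave needs the same build and configuration before the cluster is formatted and started; one way to copy them over, as a sketch only, assuming root ssh access to the slaves and the same /opt layout and JDK on every node:
# scp -r /opt/hadoop-2.5.0 root@slave1:/opt/               -- repeat for slave2 and slave3
# ssh root@slave1 "chown -R hadoop:hadoop /opt/hadoop-2.5.0"
(the jdk-7u67 RPM and the /etc/profile additions from 3.1 and 5.1 also need to be applied on each slave)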
5.8 Format the namenode
$ ./bin/hdfs namenode -format

5.9 Start hdfs and yarn
$ ./sbin/start-dfs.sh
$ ./sbin/start-yarn.sh

5.10 Check the startup
http://192.168.1.7:8088
http://192.168.1.7:50070

At this point, I believe you have a deeper understanding of "CentOS7-64bit compiling Hadoop-2.5.0 and distributed installation steps"; you might as well try it out in practice.