This article shows how to install Hadoop. The steps are meant to be easy to follow and clearly organized; hopefully they resolve your doubts as you work through the installation.
Tools:
Xshell (an SSH client for Windows)
Installation packages:
hadoop-2.6.0.tar.gz (a 2.4.1 build is also available: http://archive.apache.org/dist/hadoop/core/hadoop-2.4.1/)
- 5/19/2017 start -
https://archive.apache.org/dist/hadoop/common/hadoop-2.5.0/hadoop-2.5.0.tar.gz
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u112-b15/jdk-8u112-linux-x64.tar.gz
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" https://archive.apache.org/dist/hadoop/common/hadoop-2.5.0/hadoop-2.5.0.tar.gz
- 5/19/2017 end -
jdk-7u9-linux-i586.tar.gz
Installation packages for subsequent use (a download sketch follows the list):
hbase-0.94.2.tar.gz
hive-0.9.0.tar.gz
pig-0.10.0.tar.gz
zookeeper-3.4.3.tar.gz
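If you would rather fetch the follow-up packages from the Apache archive in one go, here is a minimal sketch; the exact archive paths are assumptions, so verify them in a browser before relying on the loop:

for url in \
  https://archive.apache.org/dist/hbase/hbase-0.94.2/hbase-0.94.2.tar.gz \
  https://archive.apache.org/dist/hive/hive-0.9.0/hive-0.9.0.tar.gz \
  https://archive.apache.org/dist/pig/pig-0.10.0/pig-0.10.0.tar.gz \
  https://archive.apache.org/dist/zookeeper/zookeeper-3.4.3/zookeeper-3.4.3.tar.gz
do
  wget --no-check-certificate "$url"   # same wget style as the 5/19/2017 note above
done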
Add a user and group:
groupadd hadoop
useradd -g hadoop hadoop
Switch user:
su hadoop
Quit:
exit
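As a quick, purely optional sanity check that the account and group were created:

id hadoop                            # expect uid/gid lines naming the hadoop group
grep hadoop /etc/passwd /etc/group   # the user and the group should both appear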
JDK installation (performed as root)
Plan A: rpm
Plan B: unpack the tarball
mkdir /usr/java
tar -zxvf jdk-7u9-linux-i586.tar.gz -C /usr/java
Create a symlink:
ln -s /usr/java/jdk1.7.0_09 /usr/java/jdk   # point the link at whatever directory the tarball actually extracted
Configure environment variables:
vi /etc/profile and add at the end:
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
Make the variables take effect: source /etc/profile
Check with echo $PATH and java -version
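If everything is wired up, the checks should point at the new JDK; a hedged sketch of what to run (the exact version string depends on the JDK you installed):

source /etc/profile
which java      # expect /usr/java/jdk/bin/java
java -version   # expect a 1.7.0_09 version string if jdk-7u9 was installed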
SSH and passwordless login
Install the SSH client:
yum -y install openssh-clients
=> the virtual machine can be copied at this point
ssh master
Generate a public/private key pair without a passphrase:
ssh-keygen -t rsa
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
(you can push the public key to other machines later: ssh-copy-id 192.168.137.44)
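Putting the whole passwordless setup together, a minimal sketch; the node addresses 192.168.137.3 and 192.168.137.4 are placeholders for your own machines:

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa                # empty passphrase, no prompts
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys    # sshd refuses keys with loose permissions
for host in 192.168.137.3 192.168.137.4; do             # placeholder hosts
  ssh-copy-id "$host"
done
ssh master                                              # should now log in without a password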
Copy the virtual machine
Copy -> full copy
vi /etc/sysconfig/network-scripts/ifcfg-eth0
Modify the settings based on the clone's real MAC (visible in the VM's network settings), e.g.
DEVICE="eth2" HWADDR=... IPADDR=192.168.56.3
Change eth0 to eth2
mv /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth2
Restart the network card
More virtual machines can be cloned the same way (a cleanup sketch for the cloned NIC follows)
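On CentOS 6 style guests the clone keeps the old NIC's MAC in a udev cache, which is why the interface shows up as eth2; a common cleanup sketch (paths assume CentOS 6, adjust for your distro):

rm -f /etc/udev/rules.d/70-persistent-net.rules   # drop the cached MAC-to-name rule
# update HWADDR and IPADDR in the ifcfg file to match the clone, then:
service network restart                           # or simply reboot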
Install hadoop
Download address: http://archive.apache.org/dist/hadoop/core/stable
Unpack:
tar -zxvf hadoop-2.6.0.tar.gz -C /opt/   # it used to go in /usr/local, but /opt is the usual choice now
mv /opt/hadoop-2.6.0 /opt/hadoop         # renaming makes it easier to reference
chown -R hadoop:hadoop /opt/hadoop       # hand the folder over to the hadoop user
su hadoop                                # the remaining configuration is done as the hadoop user
Configuration 0:
vi /etc/profile
export JAVA_HOME=/usr/java/jdk
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
source /etc/profile
Configuration 1:
hadoop-env.sh
export JAVA_HOME=/usr/java/jdk
Configuration 2: vim core-site.xml (a hostname is recommended rather than an IP)
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.137.2:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>
</configuration>
Configuration 3: hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
Configuration 4: mv mapred-site.xml.template mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
Configuration 5: yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
</configuration>
Reminder: hadoop-env.sh must have JAVA_HOME set (see Configuration 1 above).
Initialize HDFS:
hdfs namenode -format
This creates the tmp folder configured in core-site.xml above.
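A quick way to confirm the format worked, assuming the hadoop.tmp.dir configured above (file names vary slightly by version):

ls /opt/hadoop/tmp/dfs/name/current   # expect VERSION plus an initial fsimage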
Start hadoop:
./start-all.sh
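In Hadoop 2.x the start scripts live under sbin, and start-all.sh is deprecated in favor of starting HDFS and YARN separately; a sketch assuming the /opt/hadoop path used above:

cd /opt/hadoop/sbin
./start-dfs.sh    # NameNode, SecondaryNameNode, DataNodes
./start-yarn.sh   # ResourceManager, NodeManagers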
Verification: run the jps command to view the processes; expect the following (a scripted check follows the list):
ResourceManager
NodeManager
NameNode
Jps
SecondaryNameNode
DataNode
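To check the daemons without eyeballing the list, a small scripted sketch using the same process names:

for proc in NameNode SecondaryNameNode DataNode ResourceManager NodeManager; do
  jps | grep -q "$proc" && echo "OK: $proc" || echo "MISSING: $proc"
done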
Inspection: http://192.168.137.2:50070
http://192.168.137.2:50070/dfsnodelist.jsp?whatNodes=LIVE
http://192.168.137.2:50075/browseDirectory.jsp?dir=%2F&go=go&namenodeInfoPort=50070&nnaddr=192.168.137.2%3A9000
http://192.168.137.2:8088
If you cannot access these pages, turn off the firewall: service iptables stop
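Note that service iptables stop only lasts until the next reboot; on CentOS 6 you can keep the firewall off permanently (opening just the needed ports is the safer choice on anything public):

service iptables stop    # stop the firewall now
chkconfig iptables off   # keep it off after reboot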
Error:
Could not get the namenode ID of this node.
The check comes from hadoop-hdfs-2.6.0.jar (hdfs-default.xml), key dfs.ha.namenode.id.
Principle: http://blog.csdn.net/chenpingbupt/article/details/7922004
public static String getNameNodeId(Configuration conf, String nsId) {
    String namenodeId = conf.getTrimmed(DFS_HA_NAMENODE_ID_KEY);
    if (namenodeId != null) {
        return namenodeId;
    }
    String suffixes[] = DFSUtil.getSuffixIDs(conf, DFS_NAMENODE_RPC_ADDRESS_KEY,
        nsId, null, DFSUtil.LOCAL_ADDRESS_MATCHER);
    if (suffixes == null) {
        String msg = "Configuration " + DFS_NAMENODE_RPC_ADDRESS_KEY
            + " must be suffixed with nameservice and namenode ID for HA "
            + "configuration.";
        throw new HadoopIllegalArgumentException(msg);
    }
    return suffixes[1];
}
DFS_HA_NAMENODE_ID_KEY = "dfs.ha.namenode.id"
DFS_NAMENODE_RPC_ADDRESS_KEY = "dfs.namenode.rpc-address"
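Concretely, the code above succeeds when either dfs.ha.namenode.id is set explicitly, or dfs.namenode.rpc-address is suffixed with the nameservice and namenode IDs so the local address can be matched. An illustrative hdfs-site.xml fragment; the nameservice ns1, the IDs nn1/nn2, and the host names are placeholders:

<!-- illustrative HA naming only; ns1, nn1, nn2 and the hosts are placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn1</name>
  <value>master:9000</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>slave1:9000</value>
</property>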
Please make sure iptables is turned off first, then:
0. Check all configuration files on every machine
1. Check whether any configuration file is missing
2. Check whether passwordless SSH login between the machines works (a scripted version of these checks follows)
=> this error is usually caused by a namenode configuration mismatch
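A small sketch that automates those checks across the cluster; the host names and the config path are placeholders for your environment:

CONF=/opt/hadoop/etc/hadoop    # placeholder config path
for host in slave1 slave2; do  # placeholder host names
  ssh -o BatchMode=yes "$host" true && echo "SSH OK: $host" || echo "SSH FAIL: $host"
  for f in core-site.xml hdfs-site.xml yarn-site.xml mapred-site.xml; do
    ssh "$host" "cat $CONF/$f" | diff -q - "$CONF/$f" >/dev/null \
      && echo "MATCH: $host $f" || echo "DIFFERS: $host $f"
  done
done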
The above is all the content of "how to install hadoop". Thank you for reading, and I hope it helped resolve your doubts.