This article walks through the installation steps for Hadoop 2.x. Many people run into questions during the process, so the steps below are organized into a simple, easy-to-follow procedure; I hope it answers your doubts. Follow along and try it on your own cluster.
I. Installation and configuration
1. Create a hadoop user (here I add it to the root group; you could instead create a dedicated hadoop group)
[root@hftclclw0001 ~]# useradd hadoop
[root@hftclclw0001 ~]# usermod -g root hadoop
[root@hftclclw0001 ~]# cat /etc/passwd
...
hadoop:x:50295:0::/home/hadoop:/bin/bash
[root@hftclclw0001 ~]# chmod 644 /etc/sudoers
[root@hftclclw0001 ~]# vi /etc/sudoers
root   ALL=(ALL) ALL
hadoop ALL=(ALL) ALL
...
2. Password-free SSH login
[hadoop@hftclclw0001 hadoop]$ ssh-keygen -t rsa
[hadoop@hftclclw0001 hadoop]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@hftclclw0001 hadoop]$ tree ~/.ssh/
/home/hadoop/.ssh/
├── authorized_keys
├── id_rsa
├── id_rsa.pub
└── known_hosts
0 directories, 4 files

Repeat this on each machine, then copy every machine's public key (id_rsa.pub) into the authorized_keys file of the others. I used scp to copy the key to the other machines and appended it to authorized_keys with cat.
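For example, to push the master's public key to a slave and append it there, something along these lines works (the slave IP is a placeholder and the hadoop user must already exist on the target; ssh-copy-id does the same job in one step where it is available):

[hadoop@hftclclw0001 hadoop]$ scp ~/.ssh/id_rsa.pub hadoop@{slave-1:ip}:/home/hadoop/master.pub
[hadoop@hftclclw0001 hadoop]$ ssh hadoop@{slave-1:ip} "mkdir -p ~/.ssh && cat ~/master.pub >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys && rm ~/master.pub"
[hadoop@hftclclw0001 hadoop]$ ssh hadoop@{slave-1:ip}        => should now log in without a password prompt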
3. Download hadoop-2.x.y.tar.gz
[root@hftclclw0001 hadoop]# pwd
/home/hadoop
[root@hftclclw0001 hadoop]# tar -zxvf hadoop-2.7.1.tar.gz
[root@hftclclw0001 hadoop]# ll
total 546584
drwx------ 11 hadoop root      4096 Oct 20 09:05 hadoop-2.7.1
-rw-------  1 hadoop root 210606807 Oct 20 09:00 hadoop-2.7.1.tar.gz
drwx------ 13 hadoop root      4096 Oct 20 09:22 spark-1.5.1-bin-hadoop2.6
-rw-------  1 hadoop root 280901736 Oct 20 09:19 spark-1.5.1-bin-hadoop2.6.tgz
drwx------ 22 hadoop root      4096 Oct 21 00:07 sqoop-1.99.6-bin-hadoop200
-rw-------  1 hadoop root  68177818 May  5 22:34 sqoop-1.99.6-bin-hadoop200.tar.gz
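The listing above starts from a tarball that is already on the machine; if you still need to download it, the Apache archive is one option (the URL below assumes the usual archive layout for the 2.7.1 release):

[root@hftclclw0001 hadoop]# wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz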
4. Configure hadoop-2.x.y
[hadoop@hftclclw0001 hadoop]$ pwd
/home/hadoop/hadoop-2.7.1/etc/hadoop

[hadoop@hftclclw0001 hadoop]$ vi hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/latest                      => configure JAVA_HOME

[hadoop@hftclclw0001 hadoop]$ vi core-site.xml
hadoop.tmp.dir    /home/hadoop/hadoop-2.7.1/tmp        => needs to be created; by default it lives under /tmp
fs.defaultFS      hdfs://{master:IP}:9000

[hadoop@hftclclw0001 hadoop]$ vi hdfs-site.xml
dfs.http.address  {master:ip}:50070
dfs.replication                                        => I have three machines here: 1 namenode and 2 datanodes

[hadoop@hftclclw0001 hadoop]$ vi mapred-site.xml
mapreduce.framework.name    yarn

[hadoop@hftclclw0001 hadoop]$ vi yarn-env.sh
...
export JAVA_HOME=/usr/java/latest
...

[hadoop@hftclclw0001 hadoop]$ vi yarn-site.xml
yarn.resourcemanager.hostname    {master:ip}           => must be set; the nodemanagers contact the resourcemanager here at startup
yarn.nodemanager.aux-services    mapreduce_shuffle

[hadoop@hftclclw0001 hadoop]$ vi masters               => the node that runs the secondary namenode
{master:ip}

[hadoop@hftclclw0001 hadoop]$ vi slaves                => the nodes that run the datanodes
{slave-1:ip}
{slave-2:ip}
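The properties above are written in shorthand; in the actual files each one sits inside a <property> element. As a minimal sketch of core-site.xml (with {master:IP} still a placeholder and the tmp directory created beforehand):

[hadoop@hftclclw0001 hadoop]$ vi core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://{master:IP}:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-2.7.1/tmp</value>
  </property>
</configuration>

hdfs-site.xml, mapred-site.xml and yarn-site.xml follow the same <property> layout.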
5. Copy to the other machines
[hadoop@hftclclw0001 ~]$ pwd
/home/hadoop
[hadoop@hftclclw0001 ~]$ scp -r hadoop-2.7.1 hadoop@{ip}:/home/hadoop
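With two slaves it is handy to loop over them; this is only a sketch, assuming the same /home/hadoop layout on every machine and the password-free SSH set up in step 2:

[hadoop@hftclclw0001 ~]$ for host in {slave-1:ip} {slave-2:ip}; do scp -r hadoop-2.7.1 hadoop@${host}:/home/hadoop; done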
6. Start
[hadoop@hftclclw0001 hadoop-2.7.1]$ pwd
/home/hadoop/hadoop-2.7.1
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./bin/hadoop namenode -format
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./sbin/start-dfs.sh     => start HDFS; check with jps: the master runs NameNode and SecondaryNameNode, the slaves run DataNode
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./sbin/start-yarn.sh    => start YARN
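To confirm the daemons actually came up, run jps on every node. With this layout the master should show NameNode and SecondaryNameNode (plus ResourceManager once start-yarn.sh has run), and each slave should show DataNode and NodeManager; the loop below is just a sketch using the placeholder IPs from the slaves file:

[hadoop@hftclclw0001 hadoop-2.7.1]$ jps
[hadoop@hftclclw0001 hadoop-2.7.1]$ for host in {slave-1:ip} {slave-2:ip}; do ssh hadoop@${host} jps; done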
7. Verification
a. jps => verify that each daemon process is running
b. netstat => check the listening ports
c. web UI => check the overall state of the cluster (a quick sketch of b and c follows the commands below)
d. You can also work with HDFS directly or submit a MapReduce job:
[hadoop@hftclclw0001 hadoop-2.7.1]$ pwd
/home/hadoop/hadoop-2.7.1
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./bin/hdfs dfs -ls /
...
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./bin/hdfs dfs -mkdir /test
...
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount {in} {out}
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 10 10
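For checks b and c, netstat shows which ports the daemons are listening on, and the web UIs can be opened in a browser. 50070 comes from the dfs.http.address configured earlier; 8088 is assumed here as the usual ResourceManager web port, since it is not set explicitly in this configuration:

[hadoop@hftclclw0001 hadoop-2.7.1]$ netstat -tnlp | grep java        => 9000, 50070, 8088, ... should be listening
http://{master:ip}:50070        => HDFS / NameNode web UI
http://{master:ip}:8088         => YARN / ResourceManager web UI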
II. Troubleshooting
File write permission problem
When an external program writes to HDFS, permission checking is performed by default; with the configuration above, only the hadoop account can write to HDFS. Two properties control this:
dfs.permissions.enabled=true enables permission checking => change it to false
dfs.datanode.data.dir.perm=700 sets the permissions of the datanode's local data directories => change it to 755
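As a sketch, the corresponding hdfs-site.xml entries would look like this (restart HDFS after changing them so the new values take effect):

[hadoop@hftclclw0001 hadoop-2.7.1]$ vi etc/hadoop/hdfs-site.xml
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>755</value>
</property>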
That concludes the walkthrough of the Hadoop 2.x installation steps. I hope it has answered your doubts; theory works best when paired with practice, so go ahead and try it yourself.