This article walks through a single-node installation of Hadoop 0.23.9 on Debian 6 under VMware Workstation 9. The steps are covered in detail and should be a useful reference; if you are interested, read on!
I. Environment preparation
1.1 Debian 6 with SSH installed (select it when prompted during installation). If you are running Debian as a virtual machine under Windows, install VMware first; I use VMware Workstation 9.
1.2 JDK 1.7 and Hadoop 0.23.9; download location: http://mirror.esocc.com/apache/hadoop/common/hadoop-0.23.9/hadoop-0.23.9.tar.gz
II. Installation process
2.1 install sudo for Debian
root@debian:~# apt-get install sudo
2.2 install jdk1.7
First transfer jdk-7u45-linux-i586.tar.gz to /root/ with your SSH client, then run the following command:
root@debian:~# tar -zxvf jdk-7u45-linux-i586.tar.gz -C /usr/java/
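Note that tar -C /usr/java/ will fail if that directory does not exist yet; the two lines below are my own addition, not part of the original steps: create the directory before extracting, then confirm the JDK. The jdk1.7.0_45 directory name comes from this particular archive.
root@debian:~# mkdir -p /usr/java    # create the target directory first, if it is missing
root@debian:~# /usr/java/jdk1.7.0_45/bin/java -version    # should report java version "1.7.0_45"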
2.3 hadoop download & install
root@debian:~# wget http://mirror.esocc.com/apache/hadoop/common/hadoop-0.23.9/hadoop-0.23.9.tar.gz
root@debian:~# tar -zxvf hadoop-0.23.9.tar.gz -C /opt/
root@debian:~# cd /opt/
root@debian:/opt# ln -s hadoop-0.23.9/ hadoop
A symbolic link named hadoop now points to hadoop-0.23.9/, much like a shortcut under Windows.
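To double-check the link, ls -l shows where it points (a quick optional check of mine):
root@debian:/opt# ls -l /opt/hadoop    # should show /opt/hadoop -> hadoop-0.23.9/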
2.4 add hadoop user rights
root@debian:~# groupadd hadoop
root@debian:~# useradd -m -g hadoop hadoop    # -m creates /home/hadoop, needed for the .ssh and .bashrc steps below
root@debian:~# passwd hadoop
root@debian:~# vi /etc/sudoers
Add hadoop user rights to sudoers
Add the following line below root ALL=(ALL) ALL:
hadoop ALL=(ALL:ALL) ALL
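To confirm the new account really has sudo rights, a quick optional check of mine from a hadoop shell (sudo -l lists what the user may run):
root@debian:~# su - hadoop
hadoop@debian:~$ sudo -l    # should list (ALL : ALL) ALL for user hadoop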
2.5 configure SSH login
root@debian:~# su - hadoop
hadoop@debian:~$ ssh-keygen -t rsa -P ""    # an empty passphrase allows passwordless login; you can also supply your own passphrase
hadoop@debian:~$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
hadoop@debian:~$ chmod 600 ~/.ssh/authorized_keys
Test login
hadoop@debian:~$ ssh localhost
If you configured an empty passphrase but are still prompted for a password, check the local sshd configuration file (root permission required):
root@debian:~# vi /etc/ssh/sshd_config
Find the following and remove the comment character "#"
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
Then restart sshd. If you are not using empty-passphrase login, you do not need to restart.
root@debian:~# service ssh restart    # on Debian the OpenSSH init script is named "ssh"
2.6 configure hadoop users
root@debian:~# chown -R hadoop:hadoop /opt/hadoop
root@debian:~# chown -R hadoop:hadoop /opt/hadoop-0.23.9
root@debian:~# su - hadoop
hadoop@debian6-01:~$ vi .bashrc
Add the following section
export JAVA_HOME=/usr/java/jdk1.7.0_45
export JRE_HOME=${JAVA_HOME}/jre
export HADOOP_HOME=/opt/hadoop
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$HADOOP_HOME/bin:$PATH
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
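For the new variables to take effect in the current shell, re-read .bashrc and spot-check them; this verification is my own addition:
hadoop@debian6-01:~$ source ~/.bashrc
hadoop@debian6-01:~$ echo $JAVA_HOME $HADOOP_HOME    # both paths should print
hadoop@debian6-01:~$ hadoop version    # should report Hadoop 0.23.9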
root@debian6-01:~# cd /opt/hadoop/etc/hadoop/
root@debian6-01:/opt/hadoop/etc/hadoop# vi yarn-env.sh
Add the following
export HADOOP_PREFIX=/opt/hadoop
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
root@debian6-01:/opt/hadoop/etc/hadoop# vi core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:12200</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/hadoop-root</value>
  </property>
  <property>
    <name>fs.arionfs.impl</name>
    <value>org.apache.hadoop.fs.pvfs2.Pvfs2FileSystem</value>
    <description>The FileSystem for arionfs.</description>
  </property>
</configuration>
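hadoop.tmp.dir above points at /opt/hadoop/hadoop-root on the local disk. Hadoop will normally create it on first use, but creating it up front with the right owner (an optional step I add here) avoids permission surprises:
root@debian6-01:~# mkdir -p /opt/hadoop/hadoop-root
root@debian6-01:~# chown -R hadoop:hadoop /opt/hadoop/hadoop-root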
root@debian6-01:/opt/hadoop/etc/hadoop# vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/data/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/data/dfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
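The name and data directories referenced above live on the local filesystem; pre-creating them with hadoop as the owner is an optional precaution of mine, not part of the original steps:
root@debian6-01:~# mkdir -p /opt/hadoop/data/dfs/name /opt/hadoop/data/dfs/data
root@debian6-01:~# chown -R hadoop:hadoop /opt/hadoop/data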
root@debian6-01:/opt/hadoop/etc/hadoop# cp mapred-site.xml.template mapred-site.xml
root@debian6-01:/opt/hadoop/etc/hadoop# vi mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.job.tracker</name>
    <value>hdfs://localhost:9001</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1536</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024M</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2560M</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>100</value>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>50</value>
  </property>
  <property>
    <name>mapreduce.system.dir</name>
    <value>file:/opt/hadoop/data/mapred/system</value>
  </property>
  <property>
    <name>mapreduce.local.dir</name>
    <value>file:/opt/hadoop/data/mapred/local</value>
    <final>true</final>
  </property>
</configuration>
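Note that each java.opts heap is deliberately smaller than its container (-Xmx1024M inside 1536 MB for maps, -Xmx2560M inside 3072 MB for reduces) so the JVM fits within what YARN allocates. The file: paths can be pre-created like the DFS ones; this is an optional step of mine:
root@debian6-01:~# mkdir -p /opt/hadoop/data/mapred/system /opt/hadoop/data/mapred/local
root@debian6-01:~# chown -R hadoop:hadoop /opt/hadoop/data/mapred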
root@debian6-01:/opt/hadoop/etc/hadoop# vi yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>user.name</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:54311</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>localhost:54312</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>localhost:54313</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>localhost:54314</value>
  </property>
  <property>
    <name>yarn.web-proxy.address</name>
    <value>localhost:54315</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost</value>
  </property>
</configuration>
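The ResourceManager web UI is bound to localhost:54313 above; once the daemons are running (section 2.7) you can sanity-check it from the shell. This is an extra check of mine and assumes curl is installed:
hadoop@debian6-01:~$ curl -s http://localhost:54313/ | head    # should return the ResourceManager web page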
2.7 start and run the wordcount program
Set up JAVA_HOME
root@debian6-01:~# vi /opt/hadoop/libexec/hadoop-config.sh
Find the comment "# Attempt to set JAVA_HOME if it is not set" and add the export just above the if test:
# Attempt to set JAVA_HOME if it is not set
export JAVA_HOME=/usr/java/jdk1.7.0_45    # <-- add this line
if [[ -z $JAVA_HOME ]]; then
Then save and exit with :wq!
Format namenode
root@debian6-01:/opt/hadoop/lib# hadoop namenode -format
Start
root@debian6-01:~# /opt/hadoop/sbin/start-dfs.sh
root@debian6-01:~# /opt/hadoop/sbin/start-yarn.sh
Check
root@debian6-01:~# jps
6365 SecondaryNameNode
7196 ResourceManager
6066 NameNode
7613 Jps
6188 DataNode
7311 NodeManager
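With all six daemons up, section 2.7's wordcount run can be completed with the examples jar that ships in the release. This is a minimal sketch of mine; the jar path under share/hadoop/mapreduce/ is an assumption for this build, so adjust it to whatever you find there:
hadoop@debian6-01:~$ hadoop fs -mkdir /user
hadoop@debian6-01:~$ hadoop fs -mkdir /user/hadoop
hadoop@debian6-01:~$ hadoop fs -mkdir /user/hadoop/input
hadoop@debian6-01:~$ hadoop fs -put /opt/hadoop/etc/hadoop/*.xml /user/hadoop/input
hadoop@debian6-01:~$ hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.9.jar wordcount /user/hadoop/input /user/hadoop/output
hadoop@debian6-01:~$ hadoop fs -cat /user/hadoop/output/part-r-00000    # prints word counts from the configuration files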
That is all the content of "how to achieve a single-node installation of VM9+Debian6+hadoop0.23.9". Thank you for reading, and I hope it helps!