This article explains how to install Hadoop from Cloudera's rpm repository. It should serve as a practical reference; interested readers can follow along step by step.
Installing Hadoop from Cloudera's rpm repository
The environment is:
192.168.255.132 test01.linuxjcq.com => master
192.168.255.133 test02.linuxjcq.com => slave01
192.168.255.134 test03.linuxjcq.com => slave02
The /etc/hosts file on each host contains the mappings above, and each host has a basic Java environment set up; the Java package used is OpenJDK.
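As a minimal sketch, each node's /etc/hosts would contain entries along these lines (the short aliases after the fully qualified names are an assumption for convenience, not part of the original setup):

# /etc/hosts on every node (illustrative; aliases are assumed)
127.0.0.1       localhost localhost.localdomain
192.168.255.132 test01.linuxjcq.com test01
192.168.255.133 test02.linuxjcq.com test02
192.168.255.134 test03.linuxjcq.com test03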
1. Install the Cloudera repository
wget http://archive.cloudera.com/RedHat/6/x86_64/cdh/cdh4-repository-1.0-1.noarch.rpm -P /usr/local/src
yum localinstall --nogpgcheck /usr/local/src/cdh4-repository-1.0-1.noarch.rpm
rpm --import http://archive.cloudera.com/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
2. Install the Hadoop packages
yum install -y hadoop-0.20 hadoop-0.20-namenode hadoop-0.20-secondarynamenode hadoop-0.20-datanode hadoop-0.20-jobtracker hadoop-0.20-tasktracker hadoop-0.20-source
Hadoop is split into separate packages by function:
Source: hadoop-0.20-source
Base: hadoop-0.20
Namenode: hadoop-0.20-namenode
Secondarynamenode: hadoop-0.20-secondarynamenode
Datanode: hadoop-0.20-datanode
Jobtracker: hadoop-0.20-jobtracker
Tasktracker: hadoop-0.20-tasktracker
Installing these packages also creates two users and one group by default:
the hdfs user, used to operate the HDFS file system;
the mapred user, used to run MapReduce jobs.
Both users belong to the hadoop group; no hadoop user is created.
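To confirm the accounts were created, check them with id; the numeric ids below are illustrative and will differ on your system, only the hadoop group membership matters:

id hdfs
# e.g. uid=101(hdfs) gid=102(hadoop) groups=102(hadoop)    (illustrative)
id mapred
# e.g. uid=102(mapred) gid=102(hadoop) groups=102(hadoop)  (illustrative)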
Steps 1 and 2 above must be performed on every node.
3. Configure the master node
a. Create configuration
Cloudera configurations can be managed with the alternatives tool.
cp -r /etc/hadoop-0.20/conf.empty /etc/hadoop-0.20/conf.my_cluster
Copy the empty configuration template into a new configuration directory.
alternatives --display hadoop-0.20-conf
alternatives --install /etc/hadoop-0.20/conf hadoop-0.20-conf /etc/hadoop-0.20/conf.my_cluster 50
View the current configuration alternatives and install the new one.
alternatives --display hadoop-0.20-conf
hadoop-0.20-conf - status is auto.
 link currently points to /etc/hadoop-0.20/conf.my_cluster
/etc/hadoop-0.20/conf.empty - priority 10
/etc/hadoop-0.20/conf.my_cluster - priority 50
Current `best' version is /etc/hadoop-0.20/conf.my_cluster.
Confirm that the new configuration is installed
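If you ever need to pin a particular configuration instead of relying on priorities, the alternatives tool supports that directly; a small sketch using the directories above:

# pin a specific configuration (switches to manual mode)
alternatives --set hadoop-0.20-conf /etc/hadoop-0.20/conf.my_cluster
# return to automatic highest-priority selection
alternatives --auto hadoop-0.20-conf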
b. Set the Java home directory
vim hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64
JAVA_HOME is the Java home directory; OpenJDK can be used.
c. Set up core-site.xml
vim core-site.xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://test01.linuxjcq.com:9000/</value>
</property>
This URI is used to access the HDFS file system.
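For reference, the property sits inside the file's <configuration> element; a minimal complete core-site.xml, assuming nothing else needs to be set, would look like:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- minimal core-site.xml sketch; only fs.default.name comes from this article -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://test01.linuxjcq.com:9000/</value>
  </property>
</configuration>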
d. Set up hdfs-site.xml
vim /etc/hadoop/hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/data/hadoop/hdfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/data/hadoop/hdfs/data</value>
</property>
e. Set up mapred-site.xml
vim /etc/hadoop/mapred-site.xml
<property>
  <name>mapred.system.dir</name>
  <value>/mapred/system</value>
</property>
<property>
  <name>mapred.local.dir</name>
  <value>/data/hadoop/mapred/local</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>test01.linuxjcq.com:9001</value>
</property>
f. Set up the secondarynamenode and datanodes
Secondarynamenode:
vim /etc/hadoop/masters
test02.linuxjcq.com
Datanodes:
vim /etc/hadoop/slaves
test02.linuxjcq.com
test03.linuxjcq.com
g. Create the appropriate directories
Create dfs.name.dir and dfs.data.dir:
mkdir -p /data/hadoop/hdfs/{name,data}
Create mapred.local.dir:
mkdir -p /data/hadoop/mapred/local
Change the owner of dfs.name.dir and dfs.data.dir to hdfs, the group to hadoop, and the directory permissions to 0700:
chown -R hdfs:hadoop /data/hadoop/hdfs/{name,data}
chmod -R 0700 /data/hadoop/hdfs/{name,data}
Change the owner of mapred.local.dir to mapred, the group to hadoop, and the permissions to 0755:
chown -R mapred:hadoop /data/hadoop/mapred/local
chmod -R 0755 /data/hadoop/mapred/local
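As a quick sanity check (the output layout below is illustrative; what matters is the owner, group, and mode columns):

ls -ld /data/hadoop/hdfs/name /data/hadoop/hdfs/data /data/hadoop/mapred/local
# expected, roughly:
# drwx------  hdfs   hadoop  /data/hadoop/hdfs/name
# drwx------  hdfs   hadoop  /data/hadoop/hdfs/data
# drwxr-xr-x  mapred hadoop  /data/hadoop/mapred/local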
4. Configure the secondarynamenode and datanode nodes
Repeat steps a through f from step 3 on each of them.
5. Format the namenode on the master node
sudo -u hdfs hadoop namenode -format
6. Start the nodes
Start the namenode on the master:
service hadoop-0.20-namenode start
Start the secondarynamenode:
service hadoop-0.20-secondarynamenode start
Start each datanode:
service hadoop-0.20-datanode start
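Once the daemons are running, one way to confirm that both datanodes have registered with the namenode (dfsadmin ships with hadoop-0.20; the exact report wording may vary by release):

sudo -u hdfs hadoop dfsadmin -report
# look for a line like "Datanodes available: 2" (illustrative)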
7. Create the / tmp directory and mapred.system.dir of hdfs
Sudo-u hdfs hadoop fs-mkdir / mapred/system
Sudo-u hdfs hadoop fs-chown mapred:hadoop / mapred/system
Sudo-u hdfs hadoop fs-chmod 700 / mapred/system
Mapred.system.dir needs to be created before jobtracker starts
Sudo-u hdfs hadoop dfs-mkdir / tmp
Sudo-u hdfs hadoop dfs-chmod-R 1777 / tmp
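To verify both directories exist with the intended ownership and modes (output shape is illustrative):

sudo -u hdfs hadoop fs -ls /
# expect entries for /mapred and /tmp, with /tmp showing the sticky bit, e.g. drwxrwxrwt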
8. Start MapReduce
Execute on each datanode:
service hadoop-0.20-tasktracker start
Start the jobtracker on the namenode node:
service hadoop-0.20-jobtracker start
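A simple check that MapReduce is up (hadoop job -list is standard in 0.20; port 50030 is the default jobtracker web UI port and may have been changed locally):

hadoop job -list
# should report 0 jobs currently running on a fresh cluster (illustrative)
# the jobtracker web UI is at http://test01.linuxjcq.com:50030/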
9. Set services to start at boot
Namenode node: only the namenode and jobtracker need to start; the other services are disabled.
chkconfig hadoop-0.20-namenode on
chkconfig hadoop-0.20-jobtracker on
chkconfig hadoop-0.20-secondarynamenode off
chkconfig hadoop-0.20-tasktracker off
chkconfig hadoop-0.20-datanode off
Datanode nodes: the datanode and tasktracker need to start.
chkconfig hadoop-0.20-namenode off
chkconfig hadoop-0.20-jobtracker off
chkconfig hadoop-0.20-secondarynamenode off
chkconfig hadoop-0.20-tasktracker on
chkconfig hadoop-0.20-datanode on
Secondarynamenode node: only the secondarynamenode needs to start.
chkconfig hadoop-0.20-secondarynamenode on
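To review the result on any node, listing all registered hadoop services at once:

chkconfig --list | grep hadoop-0.20
# each line shows the on/off state per runlevel; confirm it matches the node's role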
Note:
These Hadoop packages run each daemon as an independent service, so ssh between nodes is not required; you can still configure ssh and manage all of the services at once with start-all.sh and stop-all.sh.