How to build Hadoop Environment on SUSE

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

The editor shares here how to build a Hadoop environment on SUSE. I hope you will get something out of this article; let's work through it together.

[Environment:]

Dependency version mismatches are a frequent source of trouble, and I was careless here: assuming Java would not be a problem, I used the OpenJDK 1.6 that was originally installed through YaST. The result was predictable: many problems, repeated debugging, and repeated rounds of Google and Baidu. In the end, at a friend's suggestion, I switched JDK versions and the problems went away, so I am posting my environment here.

Java environment: java version "1.7.0_51"

Java(TM) SE Runtime Environment (build 1.7.0_51-b13)

Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

System: openSUSE 11.2 (x86_64)

Hadoop version: Hadoop-1.1.2.tar.gz

[Step1:] create the hadoop user and group

Group: hadoop

User: hadoop -> /home/hadoop

Add sudo permission: vi /etc/sudoers and add the line hadoop ALL=(ALL:ALL) ALL
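The group, user, and sudoers entry above can be created from a root shell roughly as follows. This is a minimal sketch: useradd/groupadd flags vary slightly between distributions, and editing /etc/sudoers through visudo is safer than appending to it directly.

```shell
# Run as root. Create the hadoop group and a hadoop user whose
# home directory is /home/hadoop, then grant it sudo rights.
groupadd hadoop
useradd -m -d /home/hadoop -g hadoop -s /bin/bash hadoop
passwd hadoop                              # set a login password interactively
# Append the sudoers entry (visudo is the safer way to do this):
echo 'hadoop ALL=(ALL:ALL) ALL' >> /etc/sudoers
```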

[Step2:] install hadoop

After unpacking the tarball with tar xf, my directory structure looked like this (for reference):

/home/hadoop/hadoop-home/ [bin | conf]
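The unpacking itself is a couple of commands; a sketch assuming the tarball sits in the hadoop user's home directory and the unpacked tree is renamed to match the hadoop-home layout above:

```shell
# As the hadoop user: unpack the release and rename it to match
# the directory layout used in this article.
cd /home/hadoop
tar xf hadoop-1.1.2.tar.gz
mv hadoop-1.1.2 hadoop-home
ls hadoop-home        # bin/ and conf/ should be among the entries
```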

[Step3:] set up SSH (so that starting hadoop does not require a password)

Install ssh (installation steps omitted here).

ssh-keygen -t rsa -P "" [press Enter through all the prompts]

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Try ssh localhost [verify that no password is required]

[Step4:] install java

For the version, see [Environment].

[Step5:] configure conf/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_17xxx # [jdk install directory]

export HADOOP_INSTALL=/home/hadoop/hadoop-home

export PATH=$PATH:$HADOOP_INSTALL/bin # [the directory containing the hadoop scripts]

[Step6:] use stand-alone mode

hadoop version

mkdir input

man find > input/test.txt

hadoop jar hadoop-examples-1.1.2.jar wordcount input output

[Step7:] pseudo-distributed mode (namenode, datanode, jobtracker/tasktracker and the other daemons all running on a single machine)

conf/ [core-site.xml, hdfs-site.xml, mapred-site.xml]

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/data1,/usr/local/hadoop/data2</value>
  </property>
</configuration>

mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

[Step8:] start

Formatting: hadoop namenode -format

cd bin

sh start-all.sh

hadoop@linux-peterguo:~/hadoop-home/bin> sh start-all.sh
starting namenode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-namenode-linux-peterguo.out
localhost: starting datanode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-datanode-linux-peterguo.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-secondarynamenode-linux-peterguo.out
starting jobtracker, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-jobtracker-linux-peterguo.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-tasktracker-linux-peterguo.out

Run jps to check that all five java processes have started: jobtracker / tasktracker / namenode / datanode / secondarynamenode
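The jps check can be scripted; a small sketch that assumes jps (from the JDK) is on PATH and uses the Hadoop 1.x daemon names listed above:

```shell
# Report which of the five Hadoop 1.x daemons appear in jps output.
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  if jps | grep -q "$d"; then
    echo "$d running"
  else
    echo "$d MISSING"
  fi
done
```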

You can check whether the services are healthy through the following web interfaces, which Hadoop provides for monitoring the status of the cluster:

http://localhost:50030/ - Hadoop management interface (JobTracker)

http://localhost:50060/ - Hadoop Task Tracker status

http://localhost:50070/ - Hadoop DFS status

[Step9:] manipulate dfs data files

hadoop dfs -mkdir input

hadoop dfs -copyFromLocal input/test.txt input

hadoop dfs -ls input

[Step10:] run mr on dfs

hadoop jar hadoop-examples-1.1.2.jar wordcount input output

hadoop dfs -cat output/*

[Step11:] close

stop-all.sh

After reading this article, I believe you have gained some understanding of how to build a Hadoop environment on SUSE. If you want to learn more, you are welcome to follow the industry information channel. Thank you for reading!
