

Installation steps for Hadoop

2025-04-06 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 Report --

This article mainly explains the installation steps of Hadoop. The method introduced here is simple, fast, and practical, so interested readers may wish to take a look. Let the editor walk you through the installation steps of Hadoop.

Environment:

Ubuntu 14.04

Hadoop 2.4.0, Hive 0.13.1

I. Stand-alone mode

1. Install a brand new Ubuntu system and update it with sudo apt-get update and sudo apt-get upgrade. This is a personal habit; you don't have to do it.

2. Create a hadoop user group and a hadoop account:

sudo addgroup hadoop

sudo adduser --ingroup hadoop hadoop

3. Edit the /etc/sudoers file to give the hadoop account the same permissions as root, by adding the line: hadoop ALL=(ALL:ALL) ALL
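
A safer way to edit this file (a general tip, not from the original) is visudo, which validates the syntax before saving:

sudo visudo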

4. Log in as hadoop user: su hadoop

5. Confirm that openssh-server is installed; if it is not, install and start it:

sudo apt-get install openssh-server

sudo /etc/init.d/ssh start

6. Set up passwordless login by generating a private/public key pair:

ssh-keygen -t rsa -P ""

7. Append the public key to authorized_keys, which stores the public keys of every client allowed to log in via ssh as the current user:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

8. Run ssh localhost to confirm that you can log in without a password.
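
If ssh still asks for a password, overly open permissions on the key files are a common cause; tightening them (a general ssh tip, not from the original) usually fixes it:

chmod 700 ~/.ssh

chmod 600 ~/.ssh/authorized_keys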

9. Install the Java environment (preferably the Sun JDK, not OpenJDK).

a. Download the latest version of the JDK and extract it.

b. Create the installation directory: sudo mkdir /usr/lib/jvm

c. Move the extracted JDK into the installation directory, e.g. sudo mv jdk1.7/ /usr/lib/jvm/java-7-sun

d. Edit ~/.bashrc to configure the Java environment:

export JAVA_HOME=/usr/lib/jvm/java-7-sun

export JRE_HOME=${JAVA_HOME}/jre

export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib

export PATH=${JAVA_HOME}/bin:$PATH

e. Run source ~/.bashrc to make the configuration take effect, and env to check the result.

f. Configure the default Java programs:

sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/java-7-sun/bin/java 300

sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/java-7-sun/bin/javac 300

sudo update-alternatives --config java

sudo update-alternatives --config javac

g. Run java -version and javac -version to check that the Java version number is correct.

(PS: I originally installed Java 1.8, but ran into trouble later when compiling Hadoop, so I switched decisively to version 1.7 and everything worked fine.)

10. Download the latest version of Hadoop from http://mirror.bit.edu.cn/apache/hadoop/common/ and extract it.

11. Create the Hadoop installation directory with sudo mkdir /usr/local/hadoop, move the extracted files into it with sudo mv ./hadoop-2.4.0/* /usr/local/hadoop, and set the installation directory permissions with sudo chmod 774 /usr/local/hadoop.

12. Configure the Hadoop environment variables: vi ~/.bashrc
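
The article does not show what is added to ~/.bashrc in this step; a minimal sketch, assuming the install path from step 11 (the variable names are the conventional ones, not taken from the original):

export HADOOP_HOME=/usr/local/hadoop

export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH

Run source ~/.bashrc afterwards, as with the Java configuration above.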

14. Test

Installation in stand-alone mode is complete. Verify it by running the WordCount example that ships with Hadoop.

Create an input folder under the /usr/local/hadoop path:

mkdir input

Copy README.txt into input:

cp README.txt input

Execute WordCount:

bin/hadoop jar share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.4.0-sources.jar org.apache.hadoop.examples.WordCount input output
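
In stand-alone mode WordCount writes its results to the local output directory; to inspect the word counts (a usage note added here, not part of the original):

cat output/*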

At this point, the stand-alone configuration is complete; it really is quite simple.

II. Pseudo-distributed mode

15. Set the *-site.xml files

Three files need to be configured here: core-site.xml, hdfs-site.xml, and mapred-site.xml, all in the /usr/local/hadoop/conf directory.

core-site.xml: configuration items for Hadoop Core, such as the commonly used HDFS and MapReduce I/O settings.

hdfs-site.xml: configuration items for the HDFS daemons, namely the namenode, secondary namenode, and datanodes.

mapred-site.xml: configuration items for the MapReduce daemons, namely the jobtracker and tasktrackers.
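
Note (an addition, not in the original): in Hadoop 2.4.0 the configuration files normally live under /usr/local/hadoop/etc/hadoop rather than conf/, and mapred-site.xml usually has to be created from the template that ships with the release:

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml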

First create a few new folders under the hadoop directory:

~/hadoop$ mkdir tmp

~/hadoop$ mkdir hdfs

~/hadoop$ mkdir hdfs/name

~/hadoop$ mkdir hdfs/data

Edit the configuration files.

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/hdfs/data</value>
  </property>
</configuration>
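
mapred-site.xml is listed above but its contents are not shown in the original; a minimal sketch matching the jobtracker setup described earlier (the host and port are assumptions, adjust to your setup):

<configuration>
  <property>
    <!-- hypothetical value; point this at your jobtracker host and port -->
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>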

16. Format HDFS: /usr/local/hadoop/bin/hadoop namenode -format

17. Start Hadoop with /usr/local/hadoop/sbin/start-all.sh (in older versions the startup scripts are in the /usr/local/hadoop/bin directory), then run jps; if the Hadoop daemons (typically NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager) are all listed, the startup succeeded.


18. Run /usr/local/hadoop/bin/hadoop dfsadmin -report to check the running status, or open http://ip:50070 to view the status in the web UI.

19. Run the test example

First create the input directory in dfs:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -mkdir input

Copy the files from conf to input in dfs:

hadoop@ubuntu:/usr/local/hadoop$ hadoop dfs -copyFromLocal conf/* input

Run WordCount in pseudo-distributed mode:

hadoop@ubuntu:/usr/local/hadoop$ hadoop jar hadoop-examples-1.0.2.jar wordcount input output
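
In pseudo-distributed mode the output lives in HDFS rather than on the local disk; to view it (a usage note added here, not part of the original):

hadoop@ubuntu:/usr/local/hadoop$ hadoop dfs -cat output/*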

PS: on 64-bit systems this will report "WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable". This is because the Hadoop website provides 32-bit builds, so the Hadoop source has to be recompiled on the 64-bit machine. I still got the error after recompiling; after turning on debug mode I found that the lib path was wrong, and cp /usr/local/hadoop/lib/native/* /usr/local/hadoop/lib/ solved the problem.

At this point, I believe everyone has a deeper understanding of the installation steps of Hadoop. Why not try it out in practice? Follow us for more related content and keep learning!
