Study notes-- hadoop

2025-01-20 Update From: SLTechnology News&Howtos



Hadoop

Required packages: hadoop-1.2.1.tar.gz and jdk-6u32-linux-x64.bin

useradd -u 900 hadoop

mv jdk1.6.0_32 /home/hadoop

mv hadoop-1.2.1.tar.gz /home/hadoop

chown -R hadoop.hadoop /home/hadoop

su - hadoop

ln -s jdk1.6.0_32 java

tar zxf hadoop-1.2.1.tar.gz

ln -s hadoop-1.2.1 hadoop

Change the environment variable:

vim ~/hadoop/conf/hadoop-env.sh    # set JAVA_HOME=/home/hadoop/java

cd ~/hadoop

mkdir input

cp conf/*.xml input

bin/hadoop jar hadoop-examples-1.2.1.jar grep input output 'dfs[a-z.]+'

Set up passwordless login:

ssh-keygen

ssh-copy-id 172.25.60.1

Ensure that the master can log in to all slave nodes without a password.
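The key setup and distribution can be scripted; a minimal sketch, assuming RSA keys and the node IPs used in these notes (KEYDIR is a temporary stand-in for ~/.ssh, and the ssh-copy-id calls are echoed as a dry run rather than executed):

```shell
# Sketch: generate a passphrase-less key and show how it would be pushed
# to each node. KEYDIR and the HOSTS list are illustrative stand-ins.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$KEYDIR/id_rsa"   # no passphrase -> no prompt on login

HOSTS="172.25.60.1 172.25.60.2 172.25.60.4"
for h in $HOSTS; do
    # On a real cluster this pushes the public key; echoed here as a dry run.
    echo ssh-copy-id -i "$KEYDIR/id_rsa.pub" "$h"
done
```

After pushing the key, `ssh <slave-ip>` from the master should open a shell with no password prompt.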

cd ~/hadoop/conf

vim slaves     # -> 172.25.60.1

vim masters    # -> 172.25.60.1

vim core-site.xml, adding the following to the configuration:

<property>
  <name>fs.default.name</name>
  <value>hdfs://172.25.60.1:9000</value>
</property>

vim hdfs-site.xml, adding the following to the configuration:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

vim mapred-site.xml, adding the following to the configuration:

<property>
  <name>mapred.job.tracker</name>
  <value>172.25.60.1:9001</value>
</property>
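These three property blocks can also be written non-interactively instead of editing by hand; a sketch using heredocs (CONF is a temporary stand-in for ~/hadoop/conf, and the values mirror the ones above):

```shell
# Sketch: generate the three 1.x config files with heredocs.
CONF=$(mktemp -d)   # stand-in for ~/hadoop/conf in this sketch

cat > "$CONF/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://172.25.60.1:9000</value>
  </property>
</configuration>
EOF

cat > "$CONF/hdfs-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

cat > "$CONF/mapred-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>172.25.60.1:9001</value>
  </property>
</configuration>
EOF

ls "$CONF"   # the three generated files
```

Scripting the config this way makes it easy to push identical files to every node.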

Format a new distributed file system:

$ bin/hadoop namenode -format

Start the Hadoop daemons:

$ bin/start-all.sh

View the hadoop processes on each node:

$ jps

The Hadoop daemons write their logs to the ${HADOOP_HOME}/logs directory.
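When a daemon fails to come up, scanning those logs for ERROR/FATAL lines is usually the fastest diagnosis; a sketch (LOGDIR and the log line below are fabricated stand-ins for ${HADOOP_HOME}/logs and a real daemon log):

```shell
# Sketch: scan daemon logs for ERROR/FATAL lines after startup.
# LOGDIR stands in for ${HADOOP_HOME}/logs; the sample line is fabricated.
LOGDIR=$(mktemp -d)
echo "2016-05-01 10:00:00,000 ERROR namenode.NameNode: example failure" \
    > "$LOGDIR/hadoop-hadoop-namenode-server1.log"

grep -h -E "ERROR|FATAL" "$LOGDIR"/*.log
```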

Browse the web interfaces of the NameNode and the JobTracker; their addresses are:

NameNode - http://172.25.60.1:50070/

JobTracker - http://172.25.60.1:50030/

Copy the input file to the distributed file system:

$ bin/hadoop fs -put conf input

Run the sample program provided with the distribution:

$ bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'

View the output file:

Copy the output file from the distributed file system to the local file system to view:

$ bin/hadoop fs -get output output

$ cat output/*

Or

View the output file on the distributed file system:

$ bin/hadoop fs -cat output/*

When all operations are complete, stop the daemons:

$ bin/stop-all.sh

Fully distributed (three nodes: server1, server2, server4):

Install rpcbind and nfs-utils on all three nodes and start the rpcbind and nfs services.

vim /etc/exports

/home/hadoop *(rw,all_squash,anonuid=900,anongid=900)

On server2 and server4, add the user: useradd -u 900 hadoop

mount 172.25.60.1:/home/hadoop/ /home/hadoop/

On server1, make an initial ssh connection to each slave: ssh 172.25.60.2, ssh 172.25.60.4

On the master: cd ~/hadoop/conf

vim slaves

172.25.60.2

172.25.60.4

vim hdfs-site.xml, changing dfs.replication from 1 to 2

(delete tmp -> format -> bin/start-dfs.sh -> bin/hadoop fs -put conf/ input -> bin/start-mapred.sh ->

bin/hadoop jar hadoop-examples-1.2.1.jar grep input output 'dfs[a-z.]+')
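The parenthesized sequence can be collected into a small script; a dry-run sketch (HADOOP="echo" only prints each step; the tmp path is the assumed default hadoop.tmp.dir for user hadoop, so clear HADOOP and run from ~/hadoop to execute for real):

```shell
# Sketch of the redeploy sequence after raising dfs.replication to 2.
# HADOOP="echo" makes this a dry run that only prints the steps.
set -e
HADOOP="echo"

$HADOOP rm -rf /tmp/hadoop-hadoop   # delete old tmp data (default hadoop.tmp.dir; path assumed)
$HADOOP bin/hadoop namenode -format
$HADOOP bin/start-dfs.sh
$HADOOP bin/hadoop fs -put conf/ input
$HADOOP bin/start-mapred.sh
$HADOOP bin/hadoop jar hadoop-examples-1.2.1.jar grep input output 'dfs[a-z.]+'
```

Running the steps in this fixed order matters: the old tmp data must be gone before the reformat, and HDFS must be up before files are put into it.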

bin/hadoop dfsadmin -report    # view the running status of the nodes

bin/hadoop fs -ls              # view the output files

Add a node online:

Add the user: useradd -u 900 hadoop

mount 172.25.60.1:/home/hadoop /home/hadoop

su - hadoop

vim slaves, adding the new node -> 172.25.60.5

bin/hadoop-daemon.sh start datanode

bin/hadoop-daemon.sh start tasktracker

Remove a node online:

Migrate the data off the node first:

On the master: vim hdfs-site.xml (dfs.hosts.exclude is an HDFS property), adding:

<property>
  <name>dfs.hosts.exclude</name>
  <value>/home/hadoop/hadoop/conf/hostexclude</value>
</property>

vim /home/hadoop/hadoop/conf/hostexclude -> 172.25.60.4

bin/hadoop dfsadmin -refreshNodes    # refresh the node list
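The node is safe to stop only once `bin/hadoop dfsadmin -report` shows it as Decommissioned; a parsing sketch over a fabricated report excerpt (on a real cluster, pipe the actual command output in):

```shell
# Sketch: check a node's decommission state by parsing the dfsadmin report.
# REPORT is a fabricated excerpt standing in for `bin/hadoop dfsadmin -report`.
REPORT="Name: 172.25.60.4:50010
Decommission Status : Decommissioned
Configured Capacity: 0 (0 KB)"

echo "$REPORT" | grep -A1 "^Name: 172.25.60.4" | grep -q "Decommissioned" \
    && echo "172.25.60.4 decommissioned"
```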

Recycle Bin function:

vim core-site.xml, adding the following:

<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>
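fs.trash.interval is measured in minutes, so 1440 keeps deleted files for one day:

```shell
# fs.trash.interval is in minutes; one day of trash retention:
echo $((60 * 24))   # -> 1440
```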

Lab: bin/hadoop fs -rm input/hadoop-env.sh

bin/hadoop fs -ls input    # check whether it was deleted

bin/hadoop fs -ls          # a new .Trash directory appears

bin/hadoop fs -ls .Trash/Current/user/hadoop/input

Move the file back to its original directory to restore it:

bin/hadoop fs -mv .Trash/Current/user/hadoop/input/hadoop-env.sh input

Optimization:

Upgrade Hadoop to version 2.6:

Delete the previous links, extract hadoop-2.6.4.tar.gz and jdk-7u79-linux-x64.tar.gz into the hadoop home directory, and change the ownership to hadoop.hadoop. Then, as the hadoop user, recreate the java and hadoop links and enter hadoop/etc/hadoop/.

vim hadoop-env.sh    # export JAVA_HOME=/home/hadoop/java

cd ~/hadoop/etc/hadoop

vim core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://172.25.60.1:9000</value>
</property>

vim hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

vim yarn-env.sh

# some Java parameters
export JAVA_HOME=/home/hadoop/java

cp mapred-site.xml.template mapred-site.xml

vim mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

vim yarn-site.xml

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

vim slaves

172.25.60.4

172.25.60.5

bin/hdfs namenode -format

sbin/start-dfs.sh

bin/hdfs dfs -mkdir /user

bin/hdfs dfs -mkdir /user/hadoop

bin/hdfs dfs -put etc/hadoop input

sbin/start-yarn.sh

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar grep input output 'dfs[a-z.]+'

Visit http://172.25.60.1:50070 and http://172.25.60.1:8088

# replace the libraries under lib/native with 64-bit versions (without this, a WARN is printed at startup)

mv hadoop-native-64-2.6.0.tar /home/hadoop/hadoop/lib/native

tar xf hadoop-native-64-2.6.0.tar

# specify node directory
