How to build a pseudo-distributed HDFS cluster

2025-04-04 Update From: SLTechnology News&Howtos

This article explains how to build a pseudo-distributed HDFS cluster. The walkthrough is detailed and should be a useful reference for interested readers.

1. Pre-installation preparation

1.1 View the virtual machine IP

[root@localhost ~]# ifconfig

The IP of hadoop01 is 192.168.88.155.

1.2 Modify the IP mapping

[root@localhost ~]# vi /etc/hosts

Add the following record, then save and exit:

192.168.88.155 hadoop01
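For repeated runs, the hosts edit above can be made idempotent. A minimal sketch; the helper name add_host_mapping and the optional file argument are illustrative, not from the article:

```shell
# Sketch: append an IP/hostname mapping only if the hostname is not
# already present in the hosts file (defaults to /etc/hosts).
add_host_mapping() {
    ip="$1"; host="$2"; file="${3:-/etc/hosts}"
    # Match the hostname as the last whitespace-separated field of a line.
    grep -q "[[:space:]]$host\$" "$file" 2>/dev/null || echo "$ip $host" >> "$file"
}

# Usage matching the article's mapping:
# add_host_mapping 192.168.88.155 hadoop01
```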

1.3 Turn off the firewall

Check the firewall status: service iptables status

Stop the firewall: service iptables stop

Check whether it starts at boot: chkconfig --list iptables

Disable start at boot: chkconfig iptables off

1.4 Install JDK 1.7 (see the separate article)

Check whether it was installed successfully:

[root@localhost ~]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) Client VM (build 24.79-b02, mixed mode)

The above output indicates that the installation succeeded.

2. Configure Hadoop

2.1 download the hadoop package

[root@localhost ~]# wget http://apache.fayea.com/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

2.2 Decompress

[root@localhost ~]# tar -zxvf hadoop-1.2.1.tar.gz -C /cloud/

2.3 Configure Hadoop pseudo-distributed mode (4 files to modify)

The first: hadoop-env.sh

[root@localhost cloud]# cd /cloud/hadoop-1.2.1/conf/

Add the following line:

export JAVA_HOME=/usr/lib/java/java-7-sun

The second: core-site.xml

vim core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/cloud/hadoop-1.2.1/tmp</value>
  </property>
</configuration>

The third: hdfs-site.xml

vim hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

The fourth: mapred-site.xml

vim mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop01:9001</value>
  </property>
</configuration>
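For repeatable setups, the three config files above can be written from a script with heredocs. A sketch under the article's hostname and property values; CONF_DIR defaults to ./conf here so the sketch can run without root, while in the article the directory would be /cloud/hadoop-1.2.1/conf:

```shell
# Sketch: generate the three pseudo-distributed config files.
CONF_DIR=${CONF_DIR:-./conf}
mkdir -p "$CONF_DIR"

# core-site.xml: NameNode address and working directory.
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/cloud/hadoop-1.2.1/tmp</value>
  </property>
</configuration>
EOF

# hdfs-site.xml: single replica, since this is a one-node cluster.
cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

# mapred-site.xml: JobTracker address.
cat > "$CONF_DIR/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop01:9001</value>
  </property>
</configuration>
EOF
```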

2.4 Add Hadoop to the environment variables

vim /etc/profile

export JAVA_HOME=/usr/lib/java/java-7-sun
export HADOOP_HOME=/cloud/hadoop-1.2.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

source /etc/profile
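A quick sanity check that the PATH edit took effect after sourcing /etc/profile. The helper path_has is illustrative, not part of the article:

```shell
# Sketch: check whether a directory is already on PATH.
path_has() {
    case ":$PATH:" in
        *":$1:"*) return 0 ;;  # directory found on PATH
        *)        return 1 ;;  # not found
    esac
}

# After `source /etc/profile` you would expect:
# path_has "$HADOOP_HOME/bin" && echo "hadoop is on PATH"
```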

2.5 Format HDFS

[root@localhost conf]# cd ..
[root@localhost hadoop-1.2.1]# cd bin
[root@localhost bin]# ./hadoop namenode -format

2.6 Start Hadoop

[root@localhost bin]# sh start-all.sh

2.7 Verify that the cluster started successfully

2.7.1 Using jps

[root@localhost bin]# jps

12152 JobTracker

13835 Jps

11952 DataNode

12298 TaskTracker

11815 NameNode

12080 SecondaryNameNode
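The jps check above can be scripted. A sketch that reads saved jps output and reports any of the five Hadoop 1.x daemons that are missing; the helper name check_daemons is illustrative:

```shell
# Sketch: verify all five Hadoop 1.x daemons appear in jps output.
# $1 is a file containing the output of `jps` (lines like "11815 NameNode").
check_daemons() {
    missing=0
    for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
        # Anchor the match so "NameNode" does not also match "SecondaryNameNode".
        if ! grep -q "[[:space:]]$d\$" "$1"; then
            echo "MISSING: $d"
            missing=1
        fi
    done
    [ "$missing" -eq 0 ] && echo "all daemons running"
}

# Live usage:  jps > /tmp/jps.out && check_daemons /tmp/jps.out
```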

2.7.2 Using netstat

[root@localhost bin]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address                Foreign Address   State    PID/Program name
tcp        0      0 127.0.0.1:631                0.0.0.0:*         LISTEN   3857/cupsd
tcp        0      0 127.0.0.1:25                 0.0.0.0:*         LISTEN   3964/sendmail
tcp        0      0 :::50020                     :::*              LISTEN   11952/java
tcp        0      0 :::46087                     :::*              LISTEN   12080/java
tcp        0      0 ::ffff:192.168.88.155:9000   :::*              LISTEN   11815/java
tcp        0      0 ::ffff:192.168.88.155:9001   :::*              LISTEN   12152/java
tcp        0      0 :::50090                     :::*              LISTEN   12080/java
tcp        0      0 :::50060                     :::*              LISTEN   12298/java
tcp        0      0 :::50030                     :::*              LISTEN   12152/java
tcp        0      0 :::42256                     :::*              LISTEN   12152/java
tcp        0      0 ::ffff:127.0.0.1:42194       :::*              LISTEN   12298/java
tcp        0      0 :::50070                     :::*              LISTEN   11815/java
tcp        0      0 :::48758                     :::*              LISTEN   11815/java
tcp        0      0 :::22                        :::*              LISTEN   3848/sshd
tcp        0      0 :::50010                     :::*              LISTEN   11952/java
tcp        0      0 :::50075                     :::*              LISTEN   11952/java
tcp        0      0 :::51163                     :::*              LISTEN   11952/java

It can also be verified in a browser:

http://192.168.88.155:50070 (HDFS management interface)
http://192.168.88.155:50030 (MapReduce management interface)

To access the cluster by hostname from Windows, add the Linux hostname/IP mapping to the hosts file in this directory:

C:\Windows\System32\drivers\etc

3. Configure passwordless SSH login

Generate the SSH key pair:

cd ~/.ssh
ssh-keygen -t rsa (press Enter four times)

After this command runs, two files are generated: id_rsa (private key) and id_rsa.pub (public key).

Copy the public key to the machine you want to log in to without a password:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

or

ssh-copy-id -i localhost
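The key-generation and copy steps above can be wrapped in one idempotent function for the local-login case. A sketch; the function name and the keydir parameter are illustrative (keydir would normally be ~/.ssh), and -N '' creates a key with an empty passphrase:

```shell
# Sketch: idempotent local passwordless-SSH setup.
setup_passwordless_ssh() {
    keydir="$1"
    mkdir -p "$keydir"
    chmod 700 "$keydir"
    # Generate the key pair only if it does not exist yet.
    [ -f "$keydir/id_rsa" ] || ssh-keygen -t rsa -N '' -f "$keydir/id_rsa" -q
    # Append the public key to authorized_keys, skipping duplicates.
    touch "$keydir/authorized_keys"
    grep -qxF "$(cat "$keydir/id_rsa.pub")" "$keydir/authorized_keys" \
        || cat "$keydir/id_rsa.pub" >> "$keydir/authorized_keys"
    chmod 600 "$keydir/authorized_keys"
}

# Usage: setup_passwordless_ssh ~/.ssh && ssh hadoop01
```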

Supplement: modify the sshd_config file

# vi /etc/ssh/sshd_config    // enable the following options

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

Then fix the permissions and restart sshd:

chmod 600 ~/.ssh/authorized_keys
service sshd restart
ssh username@serverhost

That is the whole of the article "How to build a pseudo-distributed HDFS cluster". Thank you for reading! We hope the content was helpful; for more related knowledge, please follow the industry information channel.
