
How to Deploy Hadoop in Pseudo-Distributed Mode


This article explains in detail how to deploy Hadoop in pseudo-distributed mode. It is intended as a practical reference; I hope you get something out of it.

Deployment modes:

1. Standalone mode (Standalone Mode): a single Java process

2. Pseudo-distributed mode (Pseudo-Distributed Mode): for development and learning; multiple Java processes on one machine

3. Cluster mode (Cluster Mode): for production; multiple machines and multiple Java processes

Pseudo-distributed deployment: HDFS

1. Create the hadoop service user

[root@hadoop02 software]# useradd hadoop

[root@hadoop02 software]# id hadoop

uid=515(hadoop) gid=515(hadoop) groups=515(hadoop)

[root@hadoop02 software]# vi /etc/sudoers

hadoop ALL=(root) NOPASSWD:ALL
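To confirm that the hadoop user can really use sudo without a password, a quick check such as the following can be run (an added check, not part of the original transcript):

su - hadoop -c 'sudo -n true && echo sudo-ok'   # prints sudo-ok if the NOPASSWD rule works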

2. Deploy Java

Oracle JDK 1.8 (avoid OpenJDK if possible)

[root@hadoop02 jdk1.8.0_45]# which java

/usr/java/jdk1.8.0_45/bin/java

[root@hadoop02 jdk1.8.0_45]#
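The transcript assumes the JDK is already installed under /usr/java/jdk1.8.0_45. If JAVA_HOME is not yet set system-wide, a minimal sketch (path taken from the transcript above) is:

export JAVA_HOME=/usr/java/jdk1.8.0_45   # add these two lines to /etc/profile to make them permanent
export PATH=$JAVA_HOME/bin:$PATH
java -version                            # should report 1.8.0_45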

3. Make sure the ssh service is running

[root@hadoop02 ~]# service sshd status

openssh-daemon (pid 1386) is running...

[root@hadoop02 ~]#
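If sshd were not running, it could be started and enabled at boot roughly as follows (a hypothetical recovery step for a SysV-init system like this host, not part of the original transcript):

service sshd start      # start the ssh daemon now
chkconfig sshd on       # start it automatically at boot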

4. Decompress hadoop

[root@hadoop02 software]# tar -xzvf hadoop-2.8.1.tar.gz

chown -R hadoop:hadoop folder        --> changes the folder and the folders/files inside it
chown -R hadoop:hadoop symlink       --> changes only the symlink folder itself, not the contents it points to
chown -R hadoop:hadoop symlink/*     --> does not change the symlink folder itself, only the contents
chown -R hadoop:hadoop hadoop-2.8.1  --> changes the original folder

[root@hadoop02 software]# ln -s hadoop-2.8.1 hadoop

[root@hadoop02 software]# cd hadoop

[root@hadoop02 hadoop]# rm -f *.txt

[root@hadoop02 hadoop]# ll

total 28
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 bin
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 etc
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 include
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 lib
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 libexec
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 sbin
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 share

[root@hadoop02 hadoop]#

bin: commands
etc: configuration files
sbin: scripts used to start and stop the Hadoop processes

5. Switch to the hadoop user and configure

[root@hadoop02 hadoop]# su - hadoop

[hadoop@hadoop02 ~]$ ll

total 0

[hadoop@hadoop02 ~]$ cd /opt/software/hadoop

[hadoop@hadoop02 hadoop]$ ll

total 28
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 bin
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 etc
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 include
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 lib
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 libexec
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 sbin
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 share

[hadoop@hadoop02 hadoop]$ cd etc/hadoop

hadoop-env.sh: Hadoop environment configuration
core-site.xml: Hadoop core configuration file
hdfs-site.xml: HDFS service --> will start a process
mapred-site.xml: configuration file for MapReduce; only needed when running MapReduce (jar) jobs
yarn-site.xml: YARN service --> will start a process
slaves: machine names of the cluster

[hadoop@hadoop02 hadoop]$ vi core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

[hadoop@hadoop02 hadoop]$ vi hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
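As a quick sanity check (an added step, not in the original transcript), hdfs getconf prints the values Hadoop actually picked up from these files; run it from the Hadoop home directory /opt/software/hadoop:

bin/hdfs getconf -confKey fs.defaultFS      # should print hdfs://localhost:9000
bin/hdfs getconf -confKey dfs.replication   # should print 1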

6. Configure passwordless ssh (the trust relationship) for the hadoop user

[hadoop@hadoop02 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

Generating public/private rsa key pair.
Created directory '/home/hadoop/.ssh'.
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
5b:07:ff:e5:82:85:f3:41:32:f3:80:05:c9:57:0f:e9 hadoop@rzdatahadoop002

The key's randomart image is:

[randomart image omitted]

[hadoop@hadoop02 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[hadoop@hadoop02 ~]$ chmod 0600 ~/.ssh/authorized_keys
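To verify that passwordless login now works, a check along these lines can be run (added here, not part of the original transcript):

ssh -o StrictHostKeyChecking=no localhost date   # should print the date without prompting for a password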

7. Format the NameNode

[hadoop@hadoop002 hadoop]$ bin/hdfs namenode -format

17-12-13 22:22:04 INFO common.Storage: Storage directory / tmp/hadoop-hadoop/dfs/name has been successfully formatted.

17-12-13 22:22:04 INFO namenode.FSImageFormatProtobuf: Saving image file / tmp/hadoop-hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression

17-12-13 22:22:04 INFO namenode.FSImageFormatProtobuf: Image file / tmp/hadoop-hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.

17-12-13 22:22:04 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

17-12-13 22:22:04 INFO util.ExitUtil: Exiting with status 0

17-12-13 22:22:04 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at rzdatahadoop002/192.168.137.201
************************************************************/

Storage directory: /tmp/hadoop-hadoop/dfs/name

1. Which configuration item determines the default storage path?

2. What does hadoop-hadoop mean?

core-site.xml

hadoop.tmp.dir: /tmp/hadoop-${user.name}

hdfs-site.xml

dfs.namenode.name.dir: file://${hadoop.tmp.dir}/dfs/name

So the path comes from hadoop.tmp.dir, and the second "hadoop" in /tmp/hadoop-hadoop is ${user.name}, i.e. the hadoop user running the service.
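The substitution can be seen directly from the shell; this small illustration (added here, not part of the original transcript) assumes it is run as the hadoop user after the format step above:

echo "/tmp/hadoop-$(whoami)/dfs/name"     # expands to /tmp/hadoop-hadoop/dfs/name
ls /tmp/hadoop-hadoop/dfs/name/current    # contains the fsimage file written by the format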

8. Start the HDFS service

[hadoop@hadoop02 sbin]$ ./start-dfs.sh

Starting namenodes on [localhost]

The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 9a:ea:f5:06:bf:de:ca:82:66:51:81:fe:bf:8a:62:36.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: Error: JAVA_HOME is not set and could not be found.
localhost: Error: JAVA_HOME is not set and could not be found.

Starting secondary namenodes [0.0.0.0]

The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 9a:ea:f5:06:bf:de:ca:82:66:51:81:fe:bf:8a:62:36.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: Error: JAVA_HOME is not set and could not be found.

[hadoop@hadoop02 sbin]$ ps -ef | grep hadoop

root     11292 11085  0 21:59 pts/1    00:00:00 su - hadoop
hadoop   11293 11292  0 21:59 pts/1    00:00:00 -bash
hadoop   11822 11293  0 22:34 pts/1    00:00:00 ps -ef
hadoop   11823 11293  0 22:34 pts/1    00:00:00 grep hadoop

[hadoop@rzdatahadoop002 sbin]$ echo $JAVA_HOME

/usr/java/jdk1.8.0_45

The JAVA_HOME variable is set in the shell, yet the HDFS service still cannot start: start-dfs.sh launches the daemons over ssh, which does not inherit the interactive shell's environment, so JAVA_HOME must be set explicitly in hadoop-env.sh.

[hadoop@hadoop02 sbin]$ vi ../etc/hadoop/hadoop-env.sh

# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_45

[hadoop@hadoop02 sbin]$ ./start-dfs.sh

Starting namenodes on [localhost]

localhost: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-rzdatahadoop002.out
localhost: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-rzdatahadoop002.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-rzdatahadoop002.out

namenode (name node): localhost
datanode (data node): localhost
secondary namenode (secondary name node): 0.0.0.0

Web UI: http://localhost:50070/

Default port: 50070
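Once the daemons are up, jps (shipped with the JDK) is a quick way to confirm which Hadoop processes are running; this check is added here and is not part of the original transcript:

jps   # expect NameNode, DataNode and SecondaryNameNode (plus Jps itself) for a pseudo-distributed HDFS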

9. Use the commands (hadoop, hdfs)

[hadoop@hadoop02 bin]$ ./hdfs dfs -mkdir /user

[hadoop@hadoop02 bin]$ ./hdfs dfs -mkdir /user/hadoop

[hadoop@hadoop02 bin]$ echo "123456" > rz.log

[hadoop@hadoop02 bin]$ ./hadoop fs -put rz.log hdfs://localhost:9000/

[hadoop@hadoop02 bin]$

[hadoop@hadoop02 bin]$ ./hadoop fs -ls hdfs://localhost:9000/

Found 2 items
-rw-r--r--   1 hadoop supergroup          7 2017-12-13 22:56 hdfs://localhost:9000/rz.log
drwxr-xr-x   - hadoop supergroup          0 2017-12-13 22:55 hdfs://localhost:9000/user

[hadoop@hadoop02 bin]$ ./hadoop fs -ls /

Found 2 items
-rw-r--r--   1 hadoop supergroup          7 2017-12-13 22:56 hdfs://localhost:9000/rz.log
drwxr-xr-x   - hadoop supergroup          0 2017-12-13 22:55 hdfs://localhost:9000/user
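As a further usage example (added here, not part of the original transcript), the uploaded file can be read back:

./hadoop fs -cat hdfs://localhost:9000/rz.log   # prints 123456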

10. Change hdfs://localhost:9000 to hdfs://192.168.137.201:9000

[hadoop@hadoop02 bin]$ ../sbin/stop-dfs.sh

[hadoop@hadoop02 bin]$ vi ../etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.137.201:9000</value>
  </property>
</configuration>

[hadoop@hadoop02 bin]$ ./hdfs namenode -format

[hadoop@hadoop02 bin]$ ../sbin/start-dfs.sh

Starting namenodes on [hadoop002]

rzdatahadoop002: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-rzdatahadoop002.out
localhost: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-rzdatahadoop002.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-rzdatahadoop002.out

[hadoop@hadoop02 bin]$ netstat -nlp | grep 9000

(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)

tcp        0      0 192.168.137.201:9000        0.0.0.0:*        LISTEN      14974/java

[hadoop@hadoop02 bin]$
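With the new address in place, the same file operations work against the IP-based URI; a quick check (added here, not in the original transcript) is:

./hadoop fs -ls hdfs://192.168.137.201:9000/   # lists the root of the freshly formatted filesystem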

11. Modify the HDFS services to start with the hostname hadoop02

namenode: hadoop02
datanode: localhost
secondary namenode: 0.0.0.0

For the datanode, modify slaves:

[hadoop@hadoop002 hadoop]$ vi slaves

hadoop02

For the secondary namenode, modify hdfs-site.xml:

[hadoop@hadoop02 hadoop]$ vi hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>rzdatahadoop002:50090</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>rzdatahadoop002:50091</value>
  </property>
</configuration>

"hdfs-site.xml" 35L, 1173C written

[hadoop@hadoop02 hadoop]$ cd ../../sbin

[hadoop@hadoop02 sbin]$ ./stop-dfs.sh

[hadoop@hadoop02 sbin]$ ./start-dfs.sh

Starting namenodes on [hadoop02]

hadoop02: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-rzdatahadoop002.out
hadoop02: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-rzdatahadoop002.out

Starting secondary namenodes [rzdatahadoop002]

hadoop02: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-rzdatahadoop002.out
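To confirm the secondary namenode is listening on the port configured above, a check in the same style as the earlier netstat (added here, not part of the original transcript) is:

netstat -nlp | grep 50090   # the secondarynamenode http port set in hdfs-site.xml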

Addendum:

Suppose a service's data directory sits on disk A (500 GB) with only 10 GB left: /a/dfs/data.

A new disk B (2 TB) is added.

1. On disk A: mv /a/dfs /b/

2. On disk B: ln -s /b/dfs /a

3. Check (and if needed fix) the user and group ownership of the folders on disk A and disk B. A sketch follows below.
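A minimal shell sketch of that migration, assuming /a and /b are the mount points used above; stopping HDFS first is an added precaution not stated in the original:

sbin/stop-dfs.sh                 # stop HDFS before moving the data directory
mv /a/dfs /b/                    # move the data from the full disk A to disk B
ln -s /b/dfs /a                  # keep the old path /a/dfs working via a symlink
chown -R hadoop:hadoop /b/dfs    # ownership on the new location
chown -h hadoop:hadoop /a/dfs    # ownership of the symlink itself
sbin/start-dfs.sh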

This concludes the article on how to deploy Hadoop in pseudo-distributed mode. I hope the content above was helpful; if you found the article useful, please share it so more people can see it.
