How to build pseudo-distribution of hadoop 2

2025-02-28 Update From: SLTechnology News&Howtos


This article walks through building a Hadoop 2 pseudo-distributed cluster step by step; I hope you learn something from it.

Single-node pseudo-distributed setup

1. Download Hadoop

2. Install the JDK and set environment variables in /etc/profile

export JAVA_HOME=/usr/local/java/jdk1.7.0_79
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

[root@iZ281cu2lqjZ etc]# source /etc/profile

3. Create the users and group

groupadd hadoop
useradd -g hadoop yarn
useradd -g hadoop hdfs
useradd -g hadoop mapred

4. Create data and log directories

mkdir -p /var/data/hadoop/hdfs/nn
mkdir -p /var/data/hadoop/hdfs/snn
mkdir -p /var/data/hadoop/hdfs/dn
chown -R hdfs:hadoop /var/data/hadoop/hdfs

mkdir -p /var/log/hadoop/yarn
chown -R yarn:hadoop /var/log/hadoop/yarn

Go to the hadoop installation directory (/opt/yarn/hadoop-2.7.1, as seen in the logs below) and create the logs directory:

mkdir logs
chmod g+w logs
chown -R yarn:hadoop .
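The layout above can be rehearsed without root privileges by building it under a scratch prefix first. A minimal sketch; the PREFIX variable is purely illustrative, on the real host the directories go straight under /var as shown above:

```shell
# Rehearse the HDFS data/log layout under a scratch prefix (no root needed).
# PREFIX is illustrative; on the real host drop it so paths land under /var.
PREFIX="$(mktemp -d)"
mkdir -p "$PREFIX/var/data/hadoop/hdfs/nn" \
         "$PREFIX/var/data/hadoop/hdfs/snn" \
         "$PREFIX/var/data/hadoop/hdfs/dn" \
         "$PREFIX/var/log/hadoop/yarn"
# List what was created, relative to the prefix
( cd "$PREFIX" && find var -type d | sort )
```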

5. Configure core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>hdfs</value>
  </property>
</configuration>

fs.default.name specifies the NameNode hostname and request port; hadoop.http.staticuser.user sets the default user name for the HDFS web UI.

6. Configure hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>  <!-- default is 3 -->
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/var/data/hadoop/hdfs/nn</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>file:/var/data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>fs.checkpoint.edit.dir</name>
    <value>file:/var/data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/var/data/hadoop/hdfs/dn</value>
  </property>
</configuration>

7. Configure mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

mapreduce.framework.name selects the execution framework for MapReduce; here it is set to yarn.

8. Configure yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

This mainly configures the shuffle service, which is not enabled by default.

9. Resize the Java heap

In hadoop-env.sh:

HADOOP_HEAPSIZE=500

In yarn-env.sh:

YARN_HEAPSIZE=500

Note: the value must be a plain number of megabytes; a quoted value with a stray letter, such as "500s", is invalid.
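A quick way to see why the value must be a bare number: a sketch of a numeric check (the check_heap helper is hypothetical, added here only for illustration) that rejects a value like 500s.

```shell
# HADOOP_HEAPSIZE / YARN_HEAPSIZE must be a plain number of megabytes.
# check_heap is an illustrative helper, not part of Hadoop.
check_heap() {
  case "$1" in
    ''|*[!0-9]*) echo "invalid" ;;
    *)           echo "ok" ;;
  esac
}
check_heap 500    # the corrected value
check_heap 500s   # the stray-letter typo
```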

10. Format HDFS

Switch to the hdfs user, go to the hadoop bin directory, and execute:

./hdfs namenode -format

(The command was initially mistyped as "hdfss"; re-running it with the correct name succeeds.)

11. Start the HDFS daemons

[hdfs@localhost sbin]$ ./hadoop-daemon.sh start namenode
starting namenode, logging to /opt/yarn/hadoop-2.7.1/logs/hadoop-hdfs-namenode-localhost.out

[hdfs@localhost sbin]$ ./hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /opt/yarn/hadoop-2.7.1/logs/hadoop-hdfs-secondarynamenode-localhost.out

[hdfs@localhost sbin]$ ./hadoop-daemon.sh start datanode
starting datanode, logging to /opt/yarn/hadoop-2.7.1/logs/hadoop-hdfs-datanode-localhost.out

Use jps to check the processes:

[hdfs@localhost sbin]$ jps

3915 SecondaryNameNode

3969 DataNode

3833 NameNode

4047 Jps
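The jps output above can be checked mechanically; a sketch that verifies all three HDFS daemons are present in a captured listing (the sample text is the output shown above):

```shell
# Given captured jps output, check that the three HDFS daemons are present.
jps_out='3915 SecondaryNameNode
3969 DataNode
3833 NameNode
4047 Jps'
missing=0
for d in NameNode SecondaryNameNode DataNode; do
  # -w matches whole words, so "NameNode" does not match "SecondaryNameNode"
  printf '%s\n' "$jps_out" | grep -qw "$d" || { echo "missing: $d"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all HDFS daemons up"
```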

12. Start YARN

[yarn@localhost sbin]$ ./yarn-daemon.sh start nodemanager
/opt/yarn/hadoop-2.7.1/etc/hadoop/yarn-env.sh: line 121: unexpected EOF while looking for matching `"'
/opt/yarn/hadoop-2.7.1/etc/hadoop/yarn-env.sh: line 124: syntax error: unexpected end of file
starting nodemanager, logging to /opt/yarn/hadoop-2.7.1/logs/yarn-yarn-nodemanager-localhost.out
/opt/yarn/hadoop-2.7.1/etc/hadoop/yarn-env.sh: line 121: unexpected EOF while looking for matching `"'
/opt/yarn/hadoop-2.7.1/etc/hadoop/yarn-env.sh: line 124: syntax error: unexpected end of file

These warnings point to an unmatched double quote left in yarn-env.sh (compare the quoted heap-size value in step 9); remove the stray quote so the script parses cleanly.

[yarn@localhost sbin]$ jps

4132 ResourceManager

4567 Jps

4456 NodeManager

13. Verification

Visit http://ip:50070 (the HDFS NameNode web UI) and http://ip:8088 (the YARN ResourceManager web UI).

Finally, you can run one of the examples shipped in the hadoop package to verify the cluster. Those are the simple pseudo-distributed installation steps.
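A common smoke test is the bundled pi estimator. This sketch only prints the command line to run on the cluster (as the hdfs or yarn user); the jar path and version are assumed from the hadoop-2.7.1 install seen in the logs above:

```shell
# Not executed here: the command line for the bundled pi example.
# Install path and jar version assumed from the logs above.
HADOOP_HOME=/opt/yarn/hadoop-2.7.1
cmd="$HADOOP_HOME/bin/yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 2 5"
echo "$cmd"
```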

Problem: if the web UI cannot be accessed, YARN did not start successfully.

Go to sbin and start YARN with ./start-yarn.sh. It reports:

localhost: Error: JAVA_HOME is not set and could not be found.

You need to set JAVA_HOME to an absolute path in hadoop-env.sh. Restart YARN to resolve the issue.
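The fix can be rehearsed on a throwaway copy of the file; a sketch using GNU sed (the one-line demo file stands in for the real hadoop-env.sh, and the JDK path is the one used earlier):

```shell
# Demonstrate the JAVA_HOME fix on a throwaway stand-in for hadoop-env.sh:
# replace the relative ${JAVA_HOME} reference with an absolute path.
demo=$(mktemp)
printf 'export JAVA_HOME=${JAVA_HOME}\n' > "$demo"
# GNU sed; on BSD/macOS use: sed -i '' ...
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/java/jdk1.7.0_79|' "$demo"
cat "$demo"
```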

Configure HBase:

Modify hbase-env.sh, uncommenting and setting:

export JAVA_HOME=/usr/local/java/jdk1.7.0_79

Modify hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

Start HBase:

[root@iZ281cu2lqjZ bin]# ./start-hbase.sh
root@localhost's password:
localhost: starting zookeeper, logging to /usr/local/hbase/hbase-1.1.4/bin/../logs/hbase-root-zookeeper-iZ281cu2lqjZ.out
starting master, logging to /usr/local/hbase/hbase-1.1.4/bin/../logs/hbase-root-master-iZ281cu2lqjZ.out
starting regionserver, logging to /usr/local/hbase/hbase-1.1.4/bin/../logs/hbase-root-1-regionserver-iZ281cu2lqjZ.out

[root@iZ281cu2lqjZ bin]# jps

1597 DataNode

3180 ResourceManager

3463 NodeManager

1462 NameNode

8680 HRegionServer

1543 SecondaryNameNode

8536 HQuorumPeer

8597 HMaster

8729 Jps

Complete

That completes the walkthrough of building a Hadoop 2 pseudo-distributed cluster. Thank you for reading!
