
What are the steps to install a Hadoop cluster under Linux

2025-04-04 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

What are the steps to install a Hadoop cluster under Linux? This article walks through the process in detail, in the hope of giving anyone facing the same task a simple, workable approach.

1. Create a hadoop directory under /usr, copy the installation package into it, and extract the files.
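Step 1 can be sketched as the two commands below. The tarball name hadoop-2.6.0.tar.gz is an assumption (use the archive you actually downloaded); echo keeps the sketch side-effect free, so drop it to really run the commands.

```shell
# Step 1 sketch: create /usr/hadoop and unpack the installation package there.
# The tarball name is an assumption -- substitute the one you downloaded.
cmd1="sudo mkdir -p /usr/hadoop"
cmd2="sudo tar -zxvf hadoop-2.6.0.tar.gz -C /usr/hadoop"
echo "$cmd1"
echo "$cmd2"
```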

2. Open /etc/profile (vim /etc/profile) and append the following to the configuration:

# hadoop
export HADOOP_HOME=/usr/hadoop/hadoop-2.6.0
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib
export PATH=$PATH:$HADOOP_HOME/bin

3. Apply the changes: source /etc/profile
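Steps 2 and 3 can be sketched as one shell session. A temporary file stands in for /etc/profile here so the sketch is safe to run as-is.

```shell
# Append the Hadoop variables (the same three lines as in step 2) to a profile
# file, then source it. A temp file stands in for /etc/profile.
profile=$(mktemp)
cat >> "$profile" <<'EOF'
# hadoop
export HADOOP_HOME=/usr/hadoop/hadoop-2.6.0
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib
export PATH=$PATH:$HADOOP_HOME/bin
EOF
. "$profile"            # equivalent of: source /etc/profile
echo "$HADOOP_HOME"     # prints /usr/hadoop/hadoop-2.6.0
```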

4. Change into the Hadoop configuration directory: cd /usr/hadoop/hadoop-2.6.0/etc/hadoop

5. Edit the configuration files.

(1) Open hadoop-env.sh (vim hadoop-env.sh) and add the location of the Java JDK:

export JAVA_HOME=/usr/java/jdk1.8.0_181
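Setting JAVA_HOME can also be done non-interactively with sed, sketched here against a temporary stand-in file (the JDK path is the one used above; adjust it to your own install).

```shell
# Replace the JAVA_HOME line in a stand-in for hadoop-env.sh. The first line
# mimics the default line the shipped file contains.
f=$(mktemp)
echo 'export JAVA_HOME=${JAVA_HOME}' > "$f"
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.8.0_181|' "$f"
cat "$f"
```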

(2) Open core-site.xml (vim core-site.xml). Here z1 is the IP address or hostname mapping of the master node; change it to your own.

<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/root/hadoop/tmp</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://z1:9000</value>
</property>
<property>
  <name>fs.trash.interval</name>
  <value>10080</value>
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>4096</value>
</property>

(3) Hadoop ships without a mapred-site.xml, so copy it from the template and then open it:

cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml

(Here z1 is the IP address or hostname mapping of the master node; change it to your own.)

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapred.job.ubertask.enable</name>
  <value>true</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>z1:9001</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>CMaster:10020</value>
</property>

(4) Open yarn-site.xml:

vim yarn-site.xml

(Here z1 is the IP address or hostname mapping of the master node; change it to your own.)

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>z1</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>z1:8032</value>
  <description>The address of the applications manager interface in the RM.</description>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>z1:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>z1:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.https.address</name>
  <value>z1:8090</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>z1:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>z1:8033</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2024</value>
  <description>Available memory per node (in MB); the default is 8192 MB.</description>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

(5) Open hdfs-site.xml:

vim hdfs-site.xml

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/hadoop/hadoop-2.6.0/hadoopDesk/namenodeDatas</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/hadoop/hadoop-2.6.0/hadoopDatas/namenodeDatas</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
</property>

6. Open the slaves file to add the master node and slave nodes:

vim slaves

Add your own master and slave node hostnames (mine are z1, z2, and z3).
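The slaves file is just one hostname per line. A sketch, with a temporary file standing in for etc/hadoop/slaves and the hostnames from this walkthrough:

```shell
# Write the worker list, one hostname per line, as the slaves file expects.
slaves=$(mktemp)
cat > "$slaves" <<'EOF'
z1
z2
z3
EOF
wc -l < "$slaves"    # 3 entries
```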

7. Copy the files to the other virtual machines:

scp -r /etc/profile root@z2:/etc/profile   # distribute the environment variable file to the z2 node
scp -r /etc/profile root@z3:/etc/profile   # distribute the environment variable file to the z3 node
scp -r /usr/hadoop root@z2:/usr/           # distribute the hadoop directory to the z2 node
scp -r /usr/hadoop root@z3:/usr/           # distribute the hadoop directory to the z3 node
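The four scp commands of step 7 can be generated with a loop. In this sketch echo prints the commands instead of executing them; drop it to actually copy.

```shell
# Distribute /etc/profile and the Hadoop tree to each slave node.
# echo only prints the commands; remove it to really run scp.
for node in z2 z3; do
  echo scp -r /etc/profile "root@${node}:/etc/profile"
  echo scp -r /usr/hadoop "root@${node}:/usr/"
done | tee /tmp/scp_plan.txt
```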

Apply the environment variables on each of the two slave nodes:

source /etc/profile

8. Format Hadoop (on the master node only).

First run jps to check whether Hadoop is already running, then format the NameNode:

hadoop namenode -format

When you see Exiting with status 0, the format is successful.
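A quick way to confirm this programmatically is to grep the format output for that status line. In the sketch below a sample log line stands in for the real output of hadoop namenode -format:

```shell
# In practice: hadoop namenode -format 2>&1 | tee format.log, then grep the log.
sample_log='INFO util.ExitUtil: Exiting with status 0'
if echo "$sample_log" | grep -q 'Exiting with status 0'; then
  echo "namenode format succeeded"
fi
```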

9. Go back to the Hadoop directory and start Hadoop (on the master node only):

cd /usr/hadoop/hadoop-2.6.0
sbin/start-all.sh

Output of jps on the master node: (screenshot omitted)

Output of jps on a slave node: (screenshot omitted)
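The original screenshots of the jps output are not reproduced; for a Hadoop 2.x cluster started with start-all.sh, the daemon lists are typically the ones below (PIDs aside; SecondaryNameNode placement can vary with your configuration).

```shell
# Expected jps daemons after start-all.sh on Hadoop 2.x (typical layout).
master="NameNode SecondaryNameNode ResourceManager"
slave="DataNode NodeManager"
echo "master: $master"
echo "slave:  $slave"
```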

That covers the steps for installing a Hadoop cluster under Linux. I hope the above is of some help; if you still have questions, follow the industry information channel for more related knowledge.

