
Hadoop Cluster Building-suse linux 11

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

It's been a long time since I wrote anything here; I hardly feel like a tech person anymore.

Time to post something from the backlog!

Hadoop cluster building

Three machines, all running SUSE Linux 11

Planning

IP            hostname     Hadoop roles
10.96.91.93   namenode93   NameNode, SecondaryNameNode, ResourceManager, DataNode, NodeManager
10.96.91.129  datanode129  DataNode, NodeManager
10.96.91.130  datanode130  DataNode, NodeManager

Create a hadoop user

useradd -u 501 -g users -d /home/hadoop -s /bin/bash hadoop

mkdir /home/hadoop

chown -R hadoop:users /home/hadoop

passwd hadoop (sets the password)

To keep things easy to remember, I set the password the same as the user name.

Modify hostname

File location: /etc/HOSTNAME

vim /etc/HOSTNAME

Then apply the new name:

/etc/rc.d/boot.localnet start

Modify the hosts file. All three machines need this change!

File location: /etc/hosts

10.96.91.93 namenode93

10.96.91.129 datanode129

10.96.91.130 datanode130

Configure ssh password-free login

ssh-keygen -t rsa

Then, in the ~/.ssh directory:

cat id_rsa.pub >> authorized_keys

Send your local public key to the target machines:

ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode129

ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode130

ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@namenode93
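The key-generation step above is interactive. A minimal non-interactive sketch of the same setup, assuming standard OpenSSH tools and run as the hadoop user on each node (the KEY variable is just a helper introduced here):

```shell
# Generate the key pair without prompts (skip if one already exists),
# then authorize it for the local machine too, and print its fingerprint.
KEY="$HOME/.ssh/id_rsa"
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$KEY" ] || ssh-keygen -t rsa -N "" -f "$KEY" -q
cat "$KEY.pub" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
ssh-keygen -l -f "$KEY.pub"
```

Authorizing the local key on the same machine matters because start-dfs.sh also uses ssh to reach localhost.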

Configure the environment

File location: /etc/profile

export JAVA_HOME=/home/hadoop/jdk1.8.0_191

export JRE_HOME=/home/hadoop/jdk1.8.0_191/jre

export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH

export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

export HADOOP_HOME=/home/hadoop/hadoop-2.9.1

export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

The most important part: the Hadoop configuration files

Create a hdfs folder

The folders don't have to be created exactly this way, but they must match the configuration files. After creation the layout is:

/ home/hadoop/hadoop-2.9.1/hdfs

/ home/hadoop/hadoop-2.9.1/hdfs/tmp

/ home/hadoop/hadoop-2.9.1/hdfs/name

/ home/hadoop/hadoop-2.9.1/hdfs/data
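The four directories above can be created in one command. A sketch, where HADOOP_BASE is a helper variable introduced here (for the hadoop user it resolves to the /home/hadoop/hadoop-2.9.1 path used in this article):

```shell
# Create the HDFS working directories in one step, then list them.
HADOOP_BASE="${HADOOP_BASE:-$HOME/hadoop-2.9.1}"
mkdir -p "$HADOOP_BASE/hdfs/tmp" "$HADOOP_BASE/hdfs/name" "$HADOOP_BASE/hdfs/data"
ls -d "$HADOOP_BASE"/hdfs/*
```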

Enter the configuration directory

cd /home/hadoop/hadoop-2.9.1/etc/hadoop

Configure core-site.xml

Add inside the <configuration> tag:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/hadoop-2.9.1/hdfs/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode93:9000</value>
</property>

Note: the value of hadoop.tmp.dir must match the /home/hadoop/hadoop-2.9.1/hdfs/tmp path we created earlier.

Configure the hadoop-env.sh file

Set JAVA_HOME to the local JDK path:

export JAVA_HOME=/home/hadoop/jdk1.8.0_191

Configure yarn-env.sh

Change JAVA_HOME to the local JDK path:

export JAVA_HOME=/home/hadoop/jdk1.8.0_191

Configure hdfs-site.xml

Add inside the <configuration> tag:

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hadoop/hadoop-2.9.1/hdfs/name</value>
  <final>true</final>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hadoop/hadoop-2.9.1/hdfs/data</value>
  <final>true</final>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>namenode93:9001</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

Note: the values of dfs.namenode.name.dir and dfs.datanode.data.dir must match the hdfs/name and hdfs/data paths created earlier.

Configure mapred-site.xml

Copy the mapred-site.xml.template file to mapred-site.xml:

cp mapred-site.xml.template mapred-site.xml

Then add inside the <configuration> tag:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

Configure yarn-site.xml

Add inside the <configuration> tag:

<property>
  <name>yarn.resourcemanager.address</name>
  <value>namenode93:18040</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>namenode93:18030</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>namenode93:18088</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>namenode93:18025</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>namenode93:18141</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

Configure the slaves file

Delete the original localhost and replace it with:

namenode93

datanode129

datanode130
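The slaves file can also be rewritten in one go with a here-document. A sketch, where SLAVES is a helper variable introduced here (for the hadoop user it resolves to the /home/hadoop/hadoop-2.9.1/etc/hadoop/slaves path used in this article):

```shell
# Overwrite the slaves file with the three cluster hostnames, then show it.
SLAVES="${HADOOP_HOME:-$HOME/hadoop-2.9.1}/etc/hadoop/slaves"
mkdir -p "$(dirname "$SLAVES")"
cat > "$SLAVES" <<'EOF'
namenode93
datanode129
datanode130
EOF
cat "$SLAVES"
```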

Transfer the hadoop-2.9.1 folder to the other two machines:

scp -r hadoop-2.9.1 hadoop@datanode129:~/

scp -r hadoop-2.9.1 hadoop@datanode130:~/

Note: hadoop is the user name of the virtual machine

Initialize hadoop on the namenode machine

hdfs namenode -format

Start Hadoop

start-dfs.sh

start-yarn.sh

Or:

start-all.sh

Stop Hadoop

stop-yarn.sh

stop-dfs.sh

Or:

stop-all.sh

View command

You can check which Hadoop roles are running on the current machine:

jps

The result of jps query is as follows

hadoop@namenode93:~> jps

15314 SecondaryNameNode

15484 ResourceManager

14956 NameNode

15116 DataNode

15612 NodeManager

16781 Jps

The 129 and 130 machines run as DataNodes. Hadoop's configuration lets you assign the various roles to machines flexibly; see the configuration section of the official documentation. The jps output on one of them looks like this:

hadoop@datanode130:~> jps

10233 NodeManager

10365 Jps

10110 DataNode

At this point, the Hadoop cluster built by the three machines is completed.

Test the Hadoop cluster with the bundled example jobs

Use the command:

hadoop jar /home/hadoop/hadoop-2.9.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar pi 10 10

This estimates pi: pi is the name of the example, the first 10 is the number of Map tasks, and the second 10 is the number of sample points generated per map.

At the end you get the result:

Job Finished in 32.014 seconds

Estimated value of Pi is 3.20000000000000000000
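For intuition about where that number comes from: the estimate is based on random sampling, and the same Monte Carlo idea can be reproduced locally in a few lines of awk. This is only an illustration, not the Hadoop implementation (which uses a quasi-random point sequence, so its output differs):

```shell
# Monte Carlo pi: throw n random points into the unit square and count
# how many land inside the quarter circle; 4 * inside/n approximates pi.
awk 'BEGIN {
  srand(42); n = 100000; inside = 0
  for (i = 0; i < n; i++) {
    x = rand(); y = rand()
    if (x * x + y * y <= 1) inside++
  }
  printf "Estimated value of Pi is %f\n", 4 * inside / n
}'
```

With more Map tasks and more samples per map, the Hadoop job's estimate converges toward pi in the same way this local one does as n grows.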

OK, the cluster works. There are also word-count examples online; if you are interested, give them a try.
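The word-count example just tallies word frequencies. What it computes can be mimicked locally with ordinary shell tools (illustration only; the real job runs as a MapReduce jar over HDFS input, and the sample file here is made up):

```shell
# Build a tiny input file, then count word occurrences the Unix way:
# split into one word per line, sort so duplicates are adjacent, count.
printf 'hello hadoop\nhello cluster\n' > /tmp/words.txt
tr ' ' '\n' < /tmp/words.txt | sort | uniq -c | sort -rn
```

The most frequent word ("hello", count 2) comes out on top, which is exactly the per-word count the MapReduce word-count job emits.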

HDFS management interface

http://10.96.91.93:50070/

Yarn management interface

http://10.96.91.93:18088/

Other SUSE commands

View the version:

lsb_release -d

Under SUSE 11, stop the firewall:

service SuSEfirewall2_setup stop

service SuSEfirewall2_init stop

Disable the firewall at boot:

chkconfig SuSEfirewall2_setup off

chkconfig SuSEfirewall2_init off

Check listening ports:

netstat -ntpl

References

CentOS 6 Hadoop environment setup: a good online walkthrough, especially comprehensive on the Hadoop HA installation part.

Hadoop configuration parameters explained: for reference only; configuration details differ between Hadoop versions, so check the configuration section of the docs for your version.

Apache Hadoop 2.9.1 project documentation
