Install distributed Hadoop 3.1.1 under centos

2025-04-07 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

I) Installation environment

Centos 7

JDK 1.8.0_181

VMware 12 pro

Hadoop 3.1.1

II) Preparation of the installation environment

Distributed system preparation

In the following example, I set up three separate virtual machines, with HW-Machine as the master and the other two as slaves:

Java environment configuration (all 3 virtual machines need this)

For obtaining the JDK installation package and configuring the Java environment, see my earlier post "Installation and configuration of the Java environment under CentOS".

Static IP settings (all 3 virtual machines need this; also modify the /etc/hosts and /etc/hostname files)

Because hostnames or host IPs are written into the distributed system's configuration, every host in the Hadoop cluster needs a static IP. For details, see my earlier post "Setting a static IP for CentOS in VMware".

My settings here are as follows:
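The original screenshot is not included here. As an illustration only, an /etc/hosts entry set on all three nodes might look like the following; the 192.168.1.x addresses are made-up examples, and the hostnames master/slave1/slave2 follow the names used later in this post:

```
# /etc/hosts (same on all three nodes; substitute your own static IPs)
192.168.1.100   master
192.168.1.101   slave1
192.168.1.102   slave2
```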

SSH password-free login configuration (all 3 virtual machines need this)

SSH is used for communication between Hadoop's master and slaves, so password-free SSH login must be set up among the cluster hosts. For details, see my earlier post "Setting up password-free SSH login on CentOS".
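The linked post is not reproduced here; as a minimal sketch, the usual sequence is to generate a passphrase-less key pair and push the public key to each slave. The /tmp/ssh-demo path below is a demo location only; in practice the key lives under ~/.ssh:

```shell
# Sketch of password-free SSH setup, assuming the root account on all nodes.
# 1) Generate an RSA key pair with no passphrase (demo path for illustration).
mkdir -p /tmp/ssh-demo
ssh-keygen -t rsa -N "" -f /tmp/ssh-demo/id_rsa -q
# 2) Copy the public key to each slave (hostnames as set in /etc/hosts):
#      ssh-copy-id -i /tmp/ssh-demo/id_rsa.pub root@slave1
#      ssh-copy-id -i /tmp/ssh-demo/id_rsa.pub root@slave2
# 3) Verify: "ssh root@slave1" should log in without a password prompt.
ls /tmp/ssh-demo
```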

III) Hadoop installation configuration

Note:

a. Hadoop requires the same user account on every host in the cluster. Here I use the root account directly, so all operations below are performed as root.

b. For the configuration file changes below, unless otherwise noted, modify the files on the master only.

Hadoop download

Download Hadoop from an Apache mirror, e.g. http://mirrors.hust.edu.cn/apache/; this experiment uses hadoop-3.1.1.tar.gz.

Extract the downloaded package: tar -zxvf hadoop-3.1.1.tar.gz -C <destination>. For example, I extracted it to the /usr/local/ directory (all 3 systems need the package downloaded and extracted).

Configuration file modification

3.1 Modify the core-site.xml file (in the /xxx/etc/hadoop/ directory); all three systems need this change.
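The original screenshot of this file is missing. A minimal core-site.xml consistent with this setup might look like the following; the hostname master matches this post, while port 9000 and the tmp directory are common choices, not values confirmed by the original:

```xml
<!-- Example core-site.xml (illustrative values) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-3.1.1/tmp</value>
  </property>
</configuration>
```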

3.2 Modify the hadoop-env.sh file (in the /xxx/etc/hadoop/ directory); all three systems need this change.

Alternatively, modify it as follows (note that JAVA_HOME and HADOOP_HOME must match your own environment). Adding the extra content shown in the red box lets you skip steps 3.7, 3.8 and 3.9:
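The "red box" screenshot is not included. Lines to that effect, appended to hadoop-env.sh, would plausibly look like the following; the two install paths are assumptions based on the versions named in this post, so substitute your own:

```shell
# Additions to /xxx/etc/hadoop/hadoop-env.sh (all three nodes).
# JAVA_HOME and HADOOP_HOME below are example paths -- use your own.
export JAVA_HOME=/usr/local/jdk1.8.0_181
export HADOOP_HOME=/usr/local/hadoop-3.1.1
# Declaring the daemon users here is what makes steps 3.7-3.9 unnecessary.
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```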

3.3 Modify the hdfs-site.xml file (in the /xxx/etc/hadoop/ directory); only the master needs this change.

Alternatively, use the following settings (note that the name/data directory paths and the namenode address must match the master's actual configuration):
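The referenced settings screenshot is missing; a plausible hdfs-site.xml for this two-slave cluster is sketched below. The directory paths are assumptions, and 9870 is the Hadoop 3.x default NameNode web UI port:

```xml
<!-- Example hdfs-site.xml (illustrative values) -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value><!-- two DataNodes: slave1 and slave2 -->
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop-3.1.1/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop-3.1.1/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>master:9870</value>
  </property>
</configuration>
```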

3.4 Modify the mapred-site.xml file (in the /xxx/etc/hadoop/ directory); only the master needs this change.

Note: this file needs only this one property set; the others take their defaults.
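The single property in question is almost certainly the execution framework; the standard setting for running MapReduce on YARN is:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```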

3.5 Modify the workers file (in the /xxx/etc/hadoop/ directory); only the master needs this change.

Note: listing only slave1 and slave2 here keeps the master from acting as a DataNode.
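The workers file simply lists the slave hostnames, one per line; with the setup described above it contains:

```
slave1
slave2
```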

3.6 Modify the yarn-site.xml file (in the /xxx/etc/hadoop/ directory); only the master needs this change.

Note: it is also enough to set just the yarn.resourcemanager.hostname and yarn.nodemanager.aux-services properties here.
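For those two properties, the conventional values for a cluster like this one would be:

```xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```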

3.7 Modify the start-dfs.sh and stop-dfs.sh files (in the /xxx/sbin/ directory); add the following variables at the top of both files:

HDFS_DATANODE_USER=root

HADOOP_SECURE_DN_USER=hdfs

HDFS_NAMENODE_USER=root

HDFS_SECONDARYNAMENODE_USER=root

3.8 Modify the start-yarn.sh and stop-yarn.sh files (in the /xxx/sbin/ directory); add the following variables at the top of both files:

YARN_RESOURCEMANAGER_USER=root

HADOOP_SECURE_DN_USER=yarn

YARN_NODEMANAGER_USER=root

3.9 Modify the start-all.sh and stop-all.sh files (in the /xxx/sbin/ directory); add the following variables at the top of both files:

HDFS_DATANODE_USER=root

HDFS_DATANODE_SECURE_USER=hdfs

HDFS_NAMENODE_USER=root

HDFS_SECONDARYNAMENODE_USER=root

YARN_RESOURCEMANAGER_USER=root

HADOOP_SECURE_DN_USER=yarn

YARN_NODEMANAGER_USER=root

4. Initialize the Hadoop system: change to the /xxx/bin directory

Run the command: ./hdfs namenode -format

If no error occurs and the log ends with "Exiting with status 0", formatting succeeded; "Exiting with status 1" means failure.

5. Start Hadoop and verify: change to the /xxx/sbin directory

Run the startup command: ./start-all.sh

Verify with the jps command; if you see the following services, startup succeeded:
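The screenshot of the jps output is missing. On the master of a Hadoop 3 cluster configured as above (master not serving as a DataNode), jps typically shows something like the following, while the slaves show DataNode and NodeManager instead:

```
NameNode
SecondaryNameNode
ResourceManager
Jps
```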

Alternatively, open a browser at http://master:9870 to verify (Hadoop 3.x moved the NameNode web UI from port 50070 to 9870); you should see the cluster overview page:

At this point, the Hadoop installation verification is complete!

Note:

For Hadoop 3.1.1 installation and configuration, you can also refer to: https://blog.csdn.net/qq_41684957/article/details/81946128
