
How to install a hadoop2.2.0 cluster on CentOS 6.4 (32/64-bit)


The editor would like to share with you how to install a hadoop2.2.0 cluster on CentOS 6.4 (32/64-bit). I hope you will get something out of this article. Let's go through it together.

1. Prepare the environment

Install VMware 10 and create three CentOS 6.4 virtual machines under it.

1) install the Chinese input method:

1. Root permission is required, so log in as root or use su root

2. yum install "@Chinese Support"

2) install ssh or vsftp

Use chkconfig --list to see whether the vsftpd service is installed

Use the yum command to directly install: yum install vsftpd

View and manage ftp services:

Start the ftp service: service vsftpd start

View ftp service status: service vsftpd status

Restart the ftp service: service vsftpd restart

Turn off the ftp service: service vsftpd stop
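
If you also want vsftpd to come back automatically after a reboot, the standard SysV tooling on CentOS 6 applies (a small extra step, not part of the original notes):

chkconfig vsftpd on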

3) jdk installation

Reference http://my.oschina.net/kt431128/blog/269262

2. Modify the host name

I installed one virtual machine and then created the other two by cloning it (VM > Manage > Clone in VMware). One resulting problem is that all three machines have the same hostname, which is obviously not what I want, so the hostnames of the other two need to be changed.

[root@slaver2 sysconfig]# vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=slaver
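
Editing /etc/sysconfig/network only takes effect after a reboot; on CentOS 6 the new name can also be applied to the running system immediately (hostname shown matches the example above):

hostname slaver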

3. Configure /etc/hosts. The configuration is the same on all three servers.

vi /etc/hosts

192.168.21.128 master

192.168.21.131 slaver

192.168.21.130 slaver2
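
A quick sanity check, assuming the addresses above, is to ping each name from every machine:

ping -c 1 master

ping -c 1 slaver

ping -c 1 slaver2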

4. Create a user. (I first did everything as the root user and later hit a "Browse the filesystem" error; after checking the documentation, a dedicated new user is recommended.)

useradd hadoop

passwd hadoop

Enter the password and confirm

5. Set up passwordless ssh login

Reference: http://my.oschina.net/kt431128/blog/269266
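
In case the reference link is no longer reachable, a minimal sketch of passwordless ssh for the hadoop user, assuming the hostnames configured above (run on master as the hadoop user):

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

ssh-copy-id hadoop@master

ssh-copy-id hadoop@slaver

ssh-copy-id hadoop@slaver2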

6. Download Hadoop and configure the environment

http://mirror.esocc.com/apache/hadoop/common/hadoop-2.2.0/

hadoop-2.2.0.tar.gz (104M, 07-Oct-2013)

Configuration of hadoop environment variables:

vi /etc/profile

Add at the bottom of the file

export HADOOP_HOME=/usr/zkt/hadoop2.2.0/hadoop-2.2.0

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export HADOOP_LOG_DIR=/usr/zkt/hadoop2.2.0/hadoop-2.2.0/logs

export YARN_LOG_DIR=$HADOOP_LOG_DIR

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

Note: the last two lines (HADOOP_COMMON_LIB_NATIVE_DIR and HADOOP_OPTS) are the extra configuration needed on a 64-bit operating system.
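
After saving /etc/profile, reload it in the current shell and confirm the variables took effect:

source /etc/profile

echo $HADOOP_HOME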

Another solution seen online:

The following warning is reported when starting with ./sbin/start-dfs.sh or ./sbin/start-all.sh:

Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.

....

Java: ssh: Could not resolve hostname Java: Name or service not known

HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known

64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known

....

This error occurs on 64-bit operating systems because the native libraries shipped with the official Hadoop download (such as lib/native/libhadoop.so.1.0.0) are compiled for 32-bit, and they trigger the warnings above when run on a 64-bit system.

One workaround is to recompile Hadoop on a 64-bit system; another is to add the following two lines to hadoop-env.sh and yarn-env.sh:

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

Note: /usr/zkt/hadoop2.2.0/hadoop-2.2.0 is the custom path where the downloaded Hadoop archive was unpacked.

7. Modify Hadoop's configuration files under hadoop-2.2.0/etc/hadoop

1. Modify hadoop-env.sh and yarn-env.sh to set the Java environment Hadoop needs to run

# The java implementation to use.

export JAVA_HOME=/usr/java/jdk1.7.0_55

2. Modify core-site.xml to define the default file system

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/zkt/hadoop2.2.0/tmp</value>
  </property>
</configuration>

3. Modify hdfs-site.xml to define the name node and data node directories

<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/zkt/hadoop2.2.0/hdf/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/zkt/hadoop2.2.0/hdf/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

4. Modify mapred-site.xml to configure MapReduce applications

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

5. Modify the yarn-site.xml file

This file mainly covers:

1. Configurations for ResourceManager and NodeManager

2. Configurations for ResourceManager

3. Configurations for NodeManager

4. Configurations for the History Server (needs to be moved elsewhere)

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>
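
One caveat to add beyond the original notes: besides the shuffle handler class above, YARN also needs yarn.nodemanager.aux-services itself to be set. In the 2.2.0 GA release its value (and the matching class property key) uses an underscore, i.e. mapreduce_shuffle and yarn.nodemanager.aux-services.mapreduce_shuffle.class; the dotted form shown above belongs to earlier 2.x alphas. A sketch for 2.2.0:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>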

8. Create the extra folders referenced by the configuration files in step 7

data, tmp, name, logs: mkdir -p /usr/zkt/hadoop2.2.0/hdf/data, etc. (note that the flag is -p, not -r); see the sketch below.
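
A minimal sketch that creates every directory assumed by the configuration above in one pass:

mkdir -p /usr/zkt/hadoop2.2.0/tmp

mkdir -p /usr/zkt/hadoop2.2.0/hdf/data

mkdir -p /usr/zkt/hadoop2.2.0/hdf/name

mkdir -p /usr/zkt/hadoop2.2.0/hadoop-2.2.0/logs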

9. It is critical to give these folders the right ownership; otherwise Hadoop will have no permission to create or write files in them at runtime.

su - root

chown -R hadoop:hadoop /usr/zkt/hadoop2.2.0 (see the chown man page if the command is unfamiliar)

Or switch to the hadoop user and grant permissions via chmod -R 777 data

10. Copy the configured hadoop to the slaver and slaver2 hosts respectively

scp -r /usr/zkt/hadoop2.2.0/hadoop-2.2.0 hadoop@slaver:/usr/zkt/hadoop2.2.0/

scp -r /usr/zkt/hadoop2.2.0/hadoop-2.2.0 hadoop@slaver2:/usr/zkt/hadoop2.2.0/
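
scp cannot create the parent directory itself, so if /usr/zkt/hadoop2.2.0 does not yet exist on the slaves, create it there first; a sketch assuming root access on the slaves:

ssh root@slaver "mkdir -p /usr/zkt/hadoop2.2.0 && chown -R hadoop:hadoop /usr/zkt/hadoop2.2.0"

ssh root@slaver2 "mkdir -p /usr/zkt/hadoop2.2.0 && chown -R hadoop:hadoop /usr/zkt/hadoop2.2.0"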

11. Initialize the Hadoop namenode

If the Hadoop environment variables are configured correctly, you can run it directly:

hdfs namenode -format

If you hit "hadoop: command not found":

echo $PATH

This shows that the PATH being picked up is /home/hadoop/bin rather than the environment variables we configured. Copy the bin and sbin folders from the hadoop-2.2.0 package to /home/hadoop/, run echo $PATH again, and it works.
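
A less invasive alternative, assuming /etc/profile was edited as in step 6, is simply to reload it and check where the shell now finds the binary:

source /etc/profile

which hadoop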

12. Turn off the firewall. The firewalls of all three servers need to be turned off.

View iptables status:

service iptables status

Control whether iptables starts at boot:

Enable: chkconfig iptables on

Disable: chkconfig iptables off

Start or stop the iptables service itself:

Start: service iptables start

Stop: service iptables stop

13. Start hadoop

start-all.sh

Stop Hadoop

stop-all.sh

14. View the started node processes

jps

15. View the service information after startup

The master should be running the ResourceManager process, and the slaves should be running NodeManager processes.
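
With this configuration, the process names reported by jps would typically be something like the following (PIDs omitted; SecondaryNameNode placement assumes the defaults):

On master: NameNode, SecondaryNameNode, ResourceManager

On slaver and slaver2: DataNode, NodeManager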

Check the cluster status: ./bin/hdfs dfsadmin -report

View the file block composition: ./bin/hdfs fsck / -files -blocks

View the status of each node: http://master:50070
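
The YARN side has its own web UI at the ResourceManager address configured in yarn-site.xml above:

http://master:8088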

After reading this article, I believe you have a certain understanding of how to install a hadoop2.2.0 cluster on CentOS 6.4 (32/64-bit). Thank you for reading!
