Hadoop 2.4 Installation and Configuration

2025-02-24 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/03 Report

I. Compile Hadoop

For the hadoop 2.4 compilation method, refer to: How to compile hadoop 2.4.0 on a Linux 64-bit operating system (CentOS 6.6).

II. Prepare the tools

1. Prepare two machines for deployment: test1 as the Master and test2 as the Slave.

Edit /etc/hosts on both machines (vim /etc/hosts) and add the following entries:

192.168.1.100 test1
192.168.1.200 test2
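The entries above can be added idempotently with a small helper, so re-running the setup does not duplicate lines. This is a sketch: hosts.local is a stand-in path for testing; point HOSTS_FILE at /etc/hosts when running as root.

```shell
# Idempotently add cluster name mappings. HOSTS_FILE defaults to a scratch
# file here; set HOSTS_FILE=/etc/hosts (as root) for a real install.
HOSTS_FILE=${HOSTS_FILE:-./hosts.local}
add_host() {
  # append "ip name" only if the hostname is not already present as a word
  grep -qw "$2" "$HOSTS_FILE" 2>/dev/null || echo "$1 $2" >> "$HOSTS_FILE"
}
add_host 192.168.1.100 test1
add_host 192.168.1.200 test2
```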

2. Add a hadoop user

groupadd hadoop
useradd hadoop -g hadoop
passwd hadoop
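A quick sanity check that the account and its primary group were created (purely a verification sketch; it only reads account information):

```shell
# Verify the dedicated hadoop account exists and report its primary group.
if id hadoop >/dev/null 2>&1; then
  hadoop_user_status="exists (primary group: $(id -gn hadoop))"
else
  hadoop_user_status="missing"
fi
echo "hadoop user: $hadoop_user_status"
```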

3. Set up password-less login from Master to Slave

Generate a public/private key pair on test1 and copy the public key to test2:

su hadoop
ssh-keygen -t rsa -P ''
scp ~/.ssh/id_rsa.pub hadoop@test2:~/.ssh/id_rsa.pub

On test2, append the id_rsa.pub copied from test1 to the .ssh/authorized_keys file:

cat id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
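sshd silently refuses key authentication when ~/.ssh or authorized_keys is group- or world-readable, which is a common reason the scp step above "works" but login still prompts for a password. This sketch normalizes the permissions; SSH_DIR is parameterized so it can be exercised on a scratch directory.

```shell
# Normalize .ssh permissions; sshd rejects key auth on overly permissive files.
SSH_DIR=${SSH_DIR:-$HOME/.ssh}
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"                    # directory must be owner-only
touch "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"    # key file must be owner read/write only
```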

4. Increase the open file handle limit:

ulimit -n 65535

Then edit /etc/security/limits.conf (vim /etc/security/limits.conf) and add the following:

* hard nofile 65535
* soft nofile 65535

III. Configure Hadoop

Unpack the compiled hadoop 2.4.0 package and modify the configuration files in the etc/hadoop directory.

1. Modify hadoop-env.sh

export JAVA_HOME=/opt/jdk

2. Modify core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://HDP125:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/${user.name}/tmp</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>
</configuration>

Description:

fs.default.name: sets the IP and port of the NameNode

hadoop.tmp.dir: sets the Hadoop temporary directory (default /tmp; data there is lost when the machine restarts!)

fs.trash.interval: enables the Hadoop trash feature (the value is the number of minutes before deleted files are purged)

3. Modify hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/${user.name}/dfs_name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/${user.name}/dfs_data</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
</configuration>

Description:

dfs.replication: the number of file replicas

dfs.namenode.name.dir: sets the local file system location where the NameNode stores metadata (fsimage)

dfs.datanode.data.dir: sets the local file system location where the DataNode stores data blocks

dfs.support.append: sets whether HDFS supports appending data at the end of a file

dfs.datanode.max.xcievers: sets the maximum number of transfer threads (xcievers) a DataNode can create

4. Modify yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.timeline-service.hostname</name>
    <value>master</value>
  </property>
</configuration>

5. Modify mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

6. Modify slaves

test1
test2
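The slaves file is just a newline-separated host list, so it can be generated from the shell. In this sketch SLAVES_FILE is a stand-in path; in a real install it is etc/hadoop/slaves under the Hadoop directory.

```shell
# Write the slaves file, one hostname per line. The path here is an
# assumption for testing; use etc/hadoop/slaves in the Hadoop tree.
SLAVES_FILE=${SLAVES_FILE:-./slaves}
printf '%s\n' test1 test2 > "$SLAVES_FILE"
```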

7. Copy the configured Hadoop directory to the other node

scp -r /opt/hadoop hadoop@test2:/opt/
chown -R hadoop:hadoop /opt/hadoop

IV. Start and stop Hadoop

1. The first time you use a distributed file system, you need to format it:

./bin/hadoop namenode -format
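Formatting is destructive: re-running it against a NameNode directory that already holds data wipes the HDFS metadata. A guard like the following only proceeds when the metadata directory is empty; NAME_DIR mirrors the dfs.namenode.name.dir value configured above, and the hadoop command itself is only echoed here as a sketch.

```shell
# Only (re)format when the NameNode metadata directory is empty; formatting an
# in-use directory destroys HDFS metadata. NAME_DIR mirrors dfs.namenode.name.dir.
NAME_DIR=${NAME_DIR:-/home/hadoop/dfs_name}
if [ -d "$NAME_DIR" ] && [ -n "$(ls -A "$NAME_DIR" 2>/dev/null)" ]; then
  format_decision="skip"    # directory has contents: do not format
else
  format_decision="format"  # safe: would run ./bin/hadoop namenode -format
fi
echo "namenode format decision: $format_decision"
```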

2. Start Hadoop:

sbin/start-all.sh

3. Stop Hadoop:

sbin/stop-all.sh

4. Hadoop's built-in web management interface:

http://test1:50070
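Once the daemons are up, reachability of the web UI can be checked from the shell. This sketch assumes curl is installed and that the test1 hostname resolves as configured earlier; port 50070 is the NameNode web UI port in Hadoop 2.x.

```shell
# Probe the NameNode web UI (port 50070 in Hadoop 2.x) and report the result.
NN_URL=${NN_URL:-http://test1:50070}
if curl -s -o /dev/null --connect-timeout 5 "$NN_URL" 2>/dev/null; then
  nn_ui_status="reachable"
else
  nn_ui_status="not reachable"
fi
echo "NameNode web UI ($NN_URL): $nn_ui_status"
```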
