
How to configure Hadoop2.4.1 pseudo-distribution


This article introduces how to configure Hadoop 2.4.1 in pseudo-distributed mode. It is intended as a practical reference; I hope you find it useful.

Experimental environment:

VMware Workstation 10.0

CentOS 6.5

Hadoop 2.4.1

JDK 1.7.0

I. Linux system environment preparation

Use WinSCP or similar software to upload the JDK and Hadoop installation packages to the CentOS virtual machine.
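Alternatively, the packages can be copied from the command line with scp; a minimal sketch, assuming the virtual machine IP configured below and hypothetical package file names:

$ scp jdk-7u65-linux-x64.tar.gz hadoop-2.4.1.tar.gz hadoop@192.168.18.101:/home/hadoop/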

User Settings

If you do the following steps as a normal user, first add that user to the /etc/sudoers file so that it can execute sudo commands.

Switch to root:
$ su -
Add the normal user to sudoers:
# vim /etc/sudoers
On line 99 of the file, add the normal user hadoop:
hadoop ALL=(ALL) ALL

Configure IP

The virtual machine uses NAT mode.

Modify the IP address with the graphical interface, or edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file directly:
IP: 192.168.18.101
NETMASK: 255.255.255.0
GATEWAY: 192.168.18.1
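A sketch of what the relevant ifcfg-eth0 entries might look like with these values (DEVICE and the other hardware-specific fields depend on your virtual machine):

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.18.101
NETMASK=255.255.255.0
GATEWAY=192.168.18.1

After editing the file, restart the network service with service network restart so the new address takes effect.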

The following steps are not strictly necessary, but configuring them now makes things more convenient later.

Modify the hostname to hadoop001 (takes effect after a restart):
[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop001

View the hostname:
[root@localhost ~]# hostname

Configure the hostname-to-IP mapping. For testing convenience, you can also add the same mapping on the Windows host in C:\Windows\System32\drivers\etc\hosts.
[root@localhost ~]# vim /etc/hosts
Add: 192.168.18.101 hadoop001

Turn off the firewall

If the firewall is not turned off, other hosts cannot reach services such as HTTP. Of course, if you are able to, it is better to configure proper firewall rules instead.

Turn off the firewall:
[root@localhost ~]# service iptables stop
View the firewall status:
[root@localhost ~]# service iptables status
iptables: Firewall is not running.
Set the firewall not to start on boot:
[root@localhost ~]# chkconfig iptables off
After the configuration is complete, restart CentOS:
# reboot

II. Install JDK and Hadoop

Extract the JDK and Hadoop installation packages (*.tar.gz).
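A minimal sketch, assuming the packages sit in the hadoop user's home directory and are extracted to ~/apps (the JDK file name is an assumption; the apps directory matches the paths used later in this article):

$ mkdir -p ~/apps
$ tar -zxvf jdk-7u65-linux-x64.tar.gz -C ~/apps
$ tar -zxvf hadoop-2.4.1.tar.gz -C ~/apps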

Configure environment variables
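A sketch of the lines to append to /etc/profile, assuming the JDK was extracted to /home/hadoop/apps/jdk1.7.0_65 (the exact JDK directory name is an assumption) and Hadoop to /home/hadoop/apps/hadoop-2.4.1:

export JAVA_HOME=/home/hadoop/apps/jdk1.7.0_65
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.4.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin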

Reload the environment variable configuration file:
# source /etc/profile
Test the command:
$ java -version

III. Hadoop configuration

1. etc/hadoop/hadoop-env.sh

Modify HADOOP_HOME/etc/hadoop/hadoop-env.sh

On line 27, replace ${JAVA_HOME} with the absolute path of the JDK.

Hadoop's startup shell scripts sometimes fail to resolve JAVA_HOME from the environment, so it is safer to replace it with the absolute JDK path.
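For example, line 27 might end up looking like this (the JDK directory name is an assumption matching the environment variables above):

export JAVA_HOME=/home/hadoop/apps/jdk1.7.0_65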

2. etc/hadoop/core-site.xml

fs.defaultFS = hdfs://hadoop001:9000
The name of the default file system, in the form hdfs://<hostname or IP>:<port>. The default port number is 8020 (see the NameNode source code).

hadoop.tmp.dir = /home/hadoop/apps/hadoop-2.4.1/tmp
Specifies where the name table (fsimage) is stored on the local file system. If the value is a comma-separated list of directories, a copy is saved in each directory as a redundant backup. Note: if the value does not start with /, it is treated as a path relative to HADOOP_HOME.

3. etc/hadoop/hdfs-site.xml

dfs.replication = 1
The number of block copies (replication factor). For a pseudo-distributed setup the value is 1, overriding the default of 3 configured in hdfs-default.xml.
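For reference, a minimal sketch of the two files with the properties described above (the tmp path matches this setup):

core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop001:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/apps/hadoop-2.4.1/tmp</value>
  </property>
</configuration>

hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>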

4. etc/hadoop/mapred-site.xml

Rename the mapred-site.xml.template file to mapred-site.xml.

mapreduce.framework.name = yarn
Specifies that MapReduce runs on YARN.

5. etc/hadoop/yarn-site.xml

yarn.resourcemanager.hostname = hadoop001
yarn.nodemanager.aux-services = mapreduce_shuffle
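Again, a minimal sketch of the two files with these properties:

mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop001</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>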

IV. Start Hadoop

SSH password-free login

Set up password-less SSH login to nodes such as the NameNode. Since this is a pseudo-distributed experimental environment, having to enter a password every time HDFS and YARN are started or stopped is inconvenient.

Test whether you can log in without a password through SSH.

$ ssh localhost

If you cannot ssh to localhost without a passphrase, execute the following commands:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
# Test again with ssh localhost; if it still asks for a password, change the permissions of the key files:
# chmod 600 ~/.ssh/*

Format HDFS

Why do I need formatting?

Just as a new hard drive has to be formatted with the file system you choose before it can be used (for example, NTFS under Windows), HDFS must also be formatted before first use. The difference is that HDFS is a logical file system rather than a physical disk.

$ hdfs namenode -format

(The older form, hadoop namenode -format, is a Hadoop 1.x command and is deprecated.)

After the formatting is successful, Hadoop automatically creates the dfs/name directory under the hadoop.tmp.dir directory configured in the core-site.xml file.

The formatting output is printed to the console; check it for a message saying the storage directory has been successfully formatted.

Check dfs/name: in the initial state it contains only fsimage files.
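For example (the path matches the hadoop.tmp.dir configured above):

$ ls /home/hadoop/apps/hadoop-2.4.1/tmp/dfs/name/current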

Start the Hadoop service

In Hadoop 1.x, HDFS and MapReduce are started together with start-all.sh:
$ bin/start-all.sh
In Hadoop 2.x, start the HDFS service:
$ start-dfs.sh
Start YARN:
$ start-yarn.sh

Verify that the startup succeeded

1) Use jps to view the Java processes (a sample listing follows below).

2) Browse the web interfaces for the NameNode and MapReduce.

NameNode Web UI - http://localhost:50070/

MapReduce Web UI - http://localhost:8088/
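For step 1), a healthy pseudo-distributed setup typically shows the following daemons in jps (the process IDs here are illustrative):

$ jps
2481 NameNode
2597 DataNode
2756 SecondaryNameNode
2901 ResourceManager
3012 NodeManager
3220 Jps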

Thank you for reading this article carefully. I hope "How to configure Hadoop2.4.1 pseudo-distribution" has been helpful to you.
