
An Analysis of a Hadoop 2 NameNode Federation Experiment


This article walks through a NameNode federation experiment on Hadoop 2. The procedure is simple, fast, and practical; interested readers may wish to follow along step by step.

The Hadoop version used in the experiment is 2.5.2, and the hardware environment is five virtual machines running CentOS 6.6. The virtual machines' IPs and hostnames are listed below (a sketch of the matching /etc/hosts entries follows the list):

192.168.63.171 node1.zhch

192.168.63.172 node2.zhch

192.168.63.173 node3.zhch

192.168.63.174 node4.zhch

192.168.63.175 node5.zhch
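
For these hostnames to resolve, every node needs matching name resolution. A minimal sketch, assuming plain /etc/hosts entries on each machine rather than DNS:

# append to /etc/hosts on every node (assumed setup; adjust if you use DNS)
192.168.63.171 node1.zhch
192.168.63.172 node2.zhch
192.168.63.173 node3.zhch
192.168.63.174 node4.zhch
192.168.63.175 node5.zhch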

Passwordless SSH, firewall settings, and JDK installation will not be discussed here. The role assignment of the virtual machines is: node1 and node2 are the NameNode nodes; node3, node4, and node5 are the DataNode nodes.

The steps are basically the same as building an ordinary Hadoop cluster; the only real difference is in hdfs-site.xml, which must declare the federated nameservices (ns1 and ns2 below) along with the RPC and HTTP addresses of the NameNode serving each one.

I. Configure Hadoop

## Unpack
[yyl@node1 program]$ tar -zxf hadoop-2.5.2.tar.gz
## Create the name, data and tmp directories
[yyl@node1 program]$ mkdir hadoop-2.5.2/name
[yyl@node1 program]$ mkdir hadoop-2.5.2/data
[yyl@node1 program]$ mkdir hadoop-2.5.2/tmp
## Configure hadoop-env.sh
[yyl@node1 program]$ cd hadoop-2.5.2/etc/hadoop/
[yyl@node1 hadoop]$ vim hadoop-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80
## Configure yarn-env.sh
[yyl@node1 hadoop]$ vim yarn-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80
## Configure slaves
[yyl@node1 hadoop]$ vim slaves
node3.zhch
node4.zhch
node5.zhch
## Configure core-site.xml
[yyl@node1 hadoop]$ vim core-site.xml
<property><name>fs.defaultFS</name><value>hdfs://node1.zhch:9000</value></property>
<property><name>io.file.buffer.size</name><value>131072</value></property>
<property><name>hadoop.tmp.dir</name><value>file:/home/yyl/program/hadoop-2.5.2/tmp</value></property>
<property><name>hadoop.proxyuser.hduser.hosts</name><value>*</value></property>
<property><name>hadoop.proxyuser.hduser.groups</name><value>*</value></property>
## Configure hdfs-site.xml
[yyl@node1 hadoop]$ vim hdfs-site.xml
<property><name>dfs.namenode.name.dir</name><value>file:/home/yyl/program/hadoop-2.5.2/name</value></property>
<property><name>dfs.datanode.data.dir</name><value>file:/home/yyl/program/hadoop-2.5.2/data</value></property>
<property><name>dfs.replication</name><value>1</value></property>
<property><name>dfs.webhdfs.enabled</name><value>true</value></property>
<property><name>dfs.permissions</name><value>false</value></property>
<property><name>dfs.nameservices</name><value>ns1,ns2</value></property>
<property><name>dfs.namenode.rpc-address.ns1</name><value>node1.zhch:9000</value></property>
<property><name>dfs.namenode.http-address.ns1</name><value>node1.zhch:50070</value></property>
<property><name>dfs.namenode.rpc-address.ns2</name><value>node2.zhch:9000</value></property>
<property><name>dfs.namenode.http-address.ns2</name><value>node2.zhch:50070</value></property>
## Configure mapred-site.xml
[yyl@node1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[yyl@node1 hadoop]$ vim mapred-site.xml
<property><name>mapreduce.framework.name</name><value>yarn</value></property>
<property><name>mapreduce.jobhistory.address</name><value>node1.zhch:10020</value></property>
<property><name>mapreduce.jobhistory.webapp.address</name><value>node1.zhch:19888</value></property>
## Configure yarn-site.xml
[yyl@node1 hadoop]$ vim yarn-site.xml
<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
<property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
<property><name>yarn.resourcemanager.address</name><value>node1.zhch:8032</value></property>
<property><name>yarn.resourcemanager.scheduler.address</name><value>node1.zhch:8030</value></property>
<property><name>yarn.resourcemanager.resource-tracker.address</name><value>node1.zhch:8031</value></property>
<property><name>yarn.resourcemanager.admin.address</name><value>node1.zhch:8033</value></property>
<property><name>yarn.resourcemanager.webapp.address</name><value>node1.zhch:8088</value></property>
## Distribute to each node
[yyl@node1 hadoop]$ cd /home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node2.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node3.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node4.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node5.zhch:/home/yyl/program/
## Set the Hadoop environment variables on each node
[yyl@node1 ~]$ vim .bash_profile
export HADOOP_PREFIX=/home/yyl/program/hadoop-2.5.2
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
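
Before formatting anything, it is worth confirming that the federated configuration is actually visible to the Hadoop tools. A minimal sanity check, assuming the environment variables above have been sourced, uses the stock hdfs getconf command:

## Confirm the configuration declares both nameservices and both NameNodes
[yyl@node1 ~]$ hdfs getconf -confKey dfs.nameservices   # expect: ns1,ns2
[yyl@node1 ~]$ hdfs getconf -namenodes                  # expect: node1.zhch node2.zhch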

II. Format and Start the NameNodes

## Format the NameNode on node1
[yyl@node1 ~]$ hdfs namenode -format -clusterId C1
## Format the NameNode on node2 (the same cluster ID puts both NameNodes in one federation)
[yyl@node2 ~]$ hdfs namenode -format -clusterId C1
## Start the NameNode on node1
[yyl@node1 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node1.zhch.out
[yyl@node1 ~]$ jps
1177 NameNode
1240 Jps
## Start the NameNode on node2
[yyl@node2 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node2.zhch.out
[yyl@node2 ~]$ jps
1508 Jps
1445 NameNode
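
Whether the shared cluster ID took effect can be verified from each NameNode's storage directory: the VERSION file written by the format step records it. A minimal check, assuming the dfs.namenode.name.dir path configured above:

## Both NameNodes should report the same cluster ID
[yyl@node1 ~]$ grep clusterID /home/yyl/program/hadoop-2.5.2/name/current/VERSION   # expect: clusterID=C1
[yyl@node2 ~]$ grep clusterID /home/yyl/program/hadoop-2.5.2/name/current/VERSION   # expect: clusterID=C1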

III. Check HDFS Federation

http://node1.zhch:50070/

http://node2.zhch:50070/
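
Each web UI should report its own nameservice under the same cluster ID. The same check can also be run from the shell; a sketch that relies on the generic -fs option to point dfsadmin at each NameNode in turn:

## Ask each NameNode for its own report; each serves an independent namespace
[yyl@node1 ~]$ hdfs dfsadmin -fs hdfs://node1.zhch:9000 -report
[yyl@node1 ~]$ hdfs dfsadmin -fs hdfs://node2.zhch:9000 -report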

IV. Start the DataNodes and YARN

[yyl@node1 ~]$ hadoop-daemons.sh start datanode
node4.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node4.zhch.out
node5.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node5.zhch.out
node3.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node3.zhch.out
[yyl@node1 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-resourcemanager-node1.zhch.out
node5.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node5.zhch.out
node3.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node3.zhch.out
node4.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node4.zhch.out
[yyl@node1 ~]$ jps
1402 Jps
1177 NameNode
1333 ResourceManager
[yyl@node2 ~]$ jps
1445 NameNode
1539 Jps
[yyl@node3 ~]$ jps
1214 NodeManager
1166 DataNode
1256 Jps
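
With both NameNodes and all DataNodes up, the effect of federation can be observed directly: the two nameservices hold independent namespaces backed by the same pool of DataNodes. A minimal sketch, with the directory names chosen purely for illustration:

## Create a directory in each namespace, then list both
[yyl@node1 ~]$ hdfs dfs -mkdir hdfs://node1.zhch:9000/ns1-test
[yyl@node1 ~]$ hdfs dfs -mkdir hdfs://node2.zhch:9000/ns2-test
[yyl@node1 ~]$ hdfs dfs -ls hdfs://node1.zhch:9000/   # shows only ns1-test
[yyl@node1 ~]$ hdfs dfs -ls hdfs://node2.zhch:9000/   # shows only ns2-test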

On subsequent startups there is no need to repeat the formatting steps above; the whole cluster can be started directly with the following commands:

sh $HADOOP_HOME/sbin/start-dfs.sh

sh $HADOOP_HOME/sbin/start-yarn.sh
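
Shutting the cluster down mirrors startup; a minimal sketch using the matching stop scripts from the same sbin directory:

## Stop YARN first, then HDFS (the reverse of the startup order)
sh $HADOOP_HOME/sbin/stop-yarn.sh
sh $HADOOP_HOME/sbin/stop-dfs.sh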

At this point, I believe you have a deeper understanding of the Hadoop 2 NameNode federation experiment. You might as well try it out in practice yourself.
