This article walks through an example of configuring NameNode HA in Hadoop 2. It is fairly detailed and should be a useful reference; interested readers are encouraged to follow it through to the end.
The Hadoop version used in this experiment is 2.5.2, and the hardware environment is five virtual machines running CentOS 6.6. The virtual machines' IP addresses and hostnames are:
192.168.63.171 node1.zhch
192.168.63.172 node2.zhch
192.168.63.173 node3.zhch
192.168.63.174 node4.zhch
192.168.63.175 node5.zhch
Passwordless SSH, firewall settings, and JDK installation are not covered here. The roles are assigned as follows: node1 is the primary (active) namenode, node2 is the standby namenode, and node3, node4, and node5 are datanodes. ZooKeeper and journalnode are also deployed on node1, node2, and node3.
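For reference, this hostname-to-IP mapping would normally go into /etc/hosts on every node (a minimal sketch under the assumption that no DNS server is used; this step is not shown in the original article):

192.168.63.171 node1.zhch
192.168.63.172 node2.zhch
192.168.63.173 node3.zhch
192.168.63.174 node4.zhch
192.168.63.175 node5.zhch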
1. Set up the ZooKeeper cluster
Setting up the ZooKeeper cluster follows the same steps as in the Storm 0.9.4 installation, so they are not repeated here; only starting the servers and checking their status is shown.
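As a reminder of what that setup looks like, a minimal zoo.cfg sketch for this three-node ensemble is shown below (the dataDir path is an assumption; client port 2181 matches the ha.zookeeper.quorum setting later in this article, and each node needs a matching myid file):

# conf/zoo.cfg (sketch, assumed paths)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/yyl/program/zookeeper-3.4.6/data
clientPort=2181
server.1=node1.zhch:2888:3888
server.2=node2.zhch:2888:3888
server.3=node3.zhch:2888:3888
# on each node: echo N > dataDir/myid  (N = 1 on node1, 2 on node2, 3 on node3)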
[yyl@node1 ~]$ zkServer.sh start
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[yyl@node1 ~]$ zkServer.sh status
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower

[yyl@node2 ~]$ zkServer.sh start
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[yyl@node2 ~]$ zkServer.sh status
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader

[yyl@node3 ~]$ zkServer.sh start
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[yyl@node3 ~]$ zkServer.sh status
JMX enabled by default
Using config: /home/yyl/program/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
2. Configure the Hadoop environment
## Decompress
[yyl@node1 program]$ tar -zxf hadoop-2.5.2.tar.gz

## Create working directories
[yyl@node1 program]$ mkdir hadoop-2.5.2/name
[yyl@node1 program]$ mkdir hadoop-2.5.2/data
[yyl@node1 program]$ mkdir hadoop-2.5.2/journal
[yyl@node1 program]$ mkdir hadoop-2.5.2/tmp

## Configure hadoop-env.sh
[yyl@node1 program]$ cd hadoop-2.5.2/etc/hadoop/
[yyl@node1 hadoop]$ vim hadoop-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80

## Configure yarn-env.sh
[yyl@node1 hadoop]$ vim yarn-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80

## Configure slaves
[yyl@node1 hadoop]$ vim slaves
node3.zhch
node4.zhch
node5.zhch

## Configure core-site.xml
[yyl@node1 hadoop]$ vim core-site.xml
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
  <property><name>io.file.buffer.size</name><value>131072</value></property>
  <property><name>hadoop.tmp.dir</name><value>file:/home/yyl/program/hadoop-2.5.2/tmp</value></property>
  <property><name>hadoop.proxyuser.hadoop.hosts</name><value>*</value></property>
  <property><name>hadoop.proxyuser.hadoop.groups</name><value>*</value></property>
  <property><name>ha.zookeeper.quorum</name><value>node1.zhch:2181,node2.zhch:2181,node3.zhch:2181</value></property>
  <property><name>ha.zookeeper.session-timeout.ms</name><value>1000</value></property>
</configuration>

## Configure hdfs-site.xml
[yyl@node1 hadoop]$ vim hdfs-site.xml
<configuration>
  <property><name>dfs.namenode.name.dir</name><value>file:/home/yyl/program/hadoop-2.5.2/name</value></property>
  <property><name>dfs.datanode.data.dir</name><value>file:/home/yyl/program/hadoop-2.5.2/data</value></property>
  <property><name>dfs.replication</name><value>1</value></property>
  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
  <property><name>dfs.permissions</name><value>false</value></property>
  <property><name>dfs.permissions.enabled</name><value>false</value></property>
  <property><name>dfs.nameservices</name><value>mycluster</value></property>
  <property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>node1.zhch:9000</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>node2.zhch:9000</value></property>
  <property><name>dfs.namenode.servicerpc-address.mycluster.nn1</name><value>node1.zhch:53310</value></property>
  <property><name>dfs.namenode.servicerpc-address.mycluster.nn2</name><value>node2.zhch:53310</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn1</name><value>node1.zhch:50070</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn2</name><value>node2.zhch:50070</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://node1.zhch:8485;node2.zhch:8485;node3.zhch:8485/mycluster</value></property>
  <property><name>dfs.client.failover.proxy.provider.mycluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/home/yyl/.ssh/id_rsa</value></property>
  <property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/home/yyl/program/hadoop-2.5.2/journal</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>ha.failover-controller.cli-check.rpc-timeout.ms</name><value>60000</value></property>
  <property><name>ipc.client.connect.timeout</name><value>60000</value></property>
  <property><name>dfs.image.transfer.bandwidthPerSec</name><value>4194304</value></property>
</configuration>

## Configure mapred-site.xml
[yyl@node1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[yyl@node1 hadoop]$ vim mapred-site.xml
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>node1.zhch:10020,node2.zhch:10020</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>node1.zhch:19888,node2.zhch:19888</value></property>
</configuration>

## Configure yarn-site.xml
[yyl@node1 hadoop]$ vim yarn-site.xml
<configuration>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.resourcemanager.address</name><value>node1.zhch:8032</value></property>
  <property><name>yarn.resourcemanager.scheduler.address</name><value>node1.zhch:8030</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address</name><value>node1.zhch:8031</value></property>
  <property><name>yarn.resourcemanager.admin.address</name><value>node1.zhch:8033</value></property>
  <property><name>yarn.resourcemanager.webapp.address</name><value>node1.zhch:8088</value></property>
</configuration>

## Distribute to each node
[yyl@node1 hadoop]$ cd /home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node2.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node3.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node4.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node5.zhch:/home/yyl/program/

## Set the Hadoop environment variables on each node
[yyl@node1 ~]$ vim .bash_profile
export HADOOP_PREFIX=/home/yyl/program/hadoop-2.5.2
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
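After editing .bash_profile, a quick check that is not part of the original article is to reload the profile and confirm that the hadoop command resolves to the new installation:

[yyl@node1 ~]$ source ~/.bash_profile
[yyl@node1 ~]$ which hadoop
[yyl@node1 ~]$ hadoop version    # should report Hadoop 2.5.2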
3. Create a znode
Start each ZooKeeper server, then execute the following command on one of the namenode nodes to create the znode in ZooKeeper:
[yyl@node1 ~]$ hdfs zkfc -formatZK

## Verify that the znode was created successfully:
[yyl@node3 ~]$ zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[hadoop-ha, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha
[mycluster]
[zk: localhost:2181(CONNECTED) 2]
4. Start journalnode
Run the command on node1.zhch, node2.zhch, node3.zhch: hadoop-daemon.sh start journalnode
[yyl@node1 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-journalnode-node1.zhch.out
[yyl@node1 ~]$ jps
1126 QuorumPeerMain
1349 JournalNode
1395 Jps

[yyl@node2 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-journalnode-node2.zhch.out
[yyl@node2 ~]$ jps
1524 JournalNode
1570 Jps
1376 QuorumPeerMain

[yyl@node3 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-journalnode-node3.zhch.out
[yyl@node3 ~]$ jps
1289 JournalNode
1126 QuorumPeerMain
1335 Jps
5. Format and start the NameNodes
## On the primary namenode node, format the namenode and journalnode directories
[yyl@node1 ~]$ hadoop namenode -format

## Start the primary namenode
[yyl@node1 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node1.zhch.out
[yyl@node1 ~]$ jps
1478 NameNode
1561 Jps
1126 QuorumPeerMain
1349 JournalNode

## Synchronize metadata on the standby namenode node
[yyl@node2 ~]$ hdfs namenode -bootstrapStandby

## Start the standby namenode
[yyl@node2 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node2.zhch.out
[yyl@node2 ~]$ jps
1524 JournalNode
1626 NameNode
1709 Jps
1376 QuorumPeerMain

## On both namenode nodes, start ZKFC to enable automatic failover
[yyl@node1 ~]$ hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-zkfc-node1.zhch.out
[yyl@node1 ~]$ jps
1624 DFSZKFailoverController
1478 NameNode
1682 Jps
1126 QuorumPeerMain
1349 JournalNode

[yyl@node2 ~]$ hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-zkfc-node2.zhch.out
[yyl@node2 ~]$ jps
1524 JournalNode
1746 DFSZKFailoverController
1626 NameNode
1800 Jps
1376 QuorumPeerMain
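As an optional sanity check (not part of the original steps), hdfs getconf -namenodes should now list both HA namenodes defined in hdfs-site.xml:

[yyl@node1 ~]$ hdfs getconf -namenodes
node1.zhch node2.zhch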
6. Start DataNode and YARN
[yyl@node1 ~]$ hadoop-daemons.sh start datanode
node4.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node4.zhch.out
node3.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node3.zhch.out
node5.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node5.zhch.out
[yyl@node1 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-resourcemanager-node1.zhch.out
node4.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node4.zhch.out
node3.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node3.zhch.out
node5.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node5.zhch.out
[yyl@node1 ~]$ jps
1763 ResourceManager
1624 DFSZKFailoverController
1478 NameNode
1126 QuorumPeerMain
1349 JournalNode
2028 Jps

[yyl@node3 ~]$ jps
1289 JournalNode
1462 NodeManager
1367 DataNode
1126 QuorumPeerMain
1559 Jps
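With all daemons running, a simple smoke test (added here for illustration, not in the original article) is to write and read a file through the mycluster nameservice; because fs.defaultFS points at hdfs://mycluster, the client talks to whichever namenode is currently active:

[yyl@node1 ~]$ hdfs dfs -mkdir -p /tmp/hatest
[yyl@node1 ~]$ hdfs dfs -put $HADOOP_CONF_DIR/core-site.xml /tmp/hatest/
[yyl@node1 ~]$ hdfs dfs -ls /tmp/hatest
[yyl@node1 ~]$ hdfs dfs -cat /tmp/hatest/core-site.xml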
On subsequent startups, provided the ZooKeeper cluster is already running, all processes and services can be started directly with the following commands:
start-dfs.sh
start-yarn.sh
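For completeness, the corresponding shutdown sequence (a sketch, not spelled out in the original article) is the reverse order: stop YARN and HDFS first, then the ZooKeeper servers on node1, node2, and node3:

[yyl@node1 ~]$ stop-yarn.sh
[yyl@node1 ~]$ stop-dfs.sh
[yyl@node1 ~]$ zkServer.sh stop    # repeat on node2 and node3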
You can check the namenode status through the web UI:
http://node1.zhch:50070
http://node2.zhch:50070
You can also check it from the command line:
[yyl@node1 ~]$ hdfs haadmin -getServiceState nn1
active
[yyl@node1 ~]$ hdfs haadmin -getServiceState nn2
standby
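The same information can be cross-checked in ZooKeeper (an extra check, not in the original article): the ZKFC of the active namenode holds an ephemeral lock znode under /hadoop-ha/mycluster, and its data identifies the active node:

[yyl@node3 ~]$ zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha/mycluster
[ActiveBreadCrumb, ActiveStandbyElectorLock]
[zk: localhost:2181(CONNECTED) 1] get /hadoop-ha/mycluster/ActiveStandbyElectorLock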
7. Testing
Use the jps command to find the namenode process ID on the primary namenode machine, kill that process with kill -9, and then check whether the other namenode switches from standby to active:
[yyl@node1 ~]$ jps
1763 ResourceManager
1624 DFSZKFailoverController
1478 NameNode
2128 Jps
1126 QuorumPeerMain
1349 JournalNode
[yyl@node1 ~]$ kill -9 1478
[yyl@node1 ~]$ hdfs haadmin -getServiceState nn2
active
[yyl@node1 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node1.zhch.out
[yyl@node1 ~]$ hdfs haadmin -getServiceState nn1
standby

That is all of the content of "Example Analysis of Hadoop 2 NameNode HA". Thank you for reading! I hope what has been shared here is helpful to you.