I. Overview
This experiment uses VMware virtual machines, and the Linux version is CentOS 7.
Because the five machines required for the experiment share mostly the same configuration, one machine is configured first, then cloned four times, and each clone receives its machine-specific modifications.
Some of these steps have already been covered in earlier posts, so they are only mentioned here; see the previous blog posts for the detailed configuration.
II. Experimental environment
1. Disable SELinux and the firewall (a command sketch follows this list)
2. Software packages: hadoop-2.7.4.tar.gz, zookeeper-3.4.10.tar.gz, jdk-8u131-linux-x64.tar.gz
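Neither service is touched again later in this walkthrough, so the following is a minimal sketch of how they are usually disabled on CentOS 7 (setenforce changes the running system, while the edit to /etc/selinux/config persists across reboots):
[root@hadoop1 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@hadoop1 ~]# setenforce 0                                                       # permissive for the running system
[root@hadoop1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persistent after reboot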
III. Host planning
IP              Host      Processes
192.168.100.11  hadoop1   NameNode, ResourceManager, DFSZKFailoverController
192.168.100.12  hadoop2   NameNode, ResourceManager, DFSZKFailoverController
192.168.100.13  hadoop3   DataNode, NodeManager, JournalNode, QuorumPeerMain
192.168.100.14  hadoop4   DataNode, NodeManager, JournalNode, QuorumPeerMain
192.168.100.15  hadoop5   DataNode, NodeManager, JournalNode, QuorumPeerMain
IV. Environment preparation
1. Set the IP address: 192.168.100.11
2. Set the hostname: hadoop1
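The original post does not show the commands for steps 1 and 2. On CentOS 7 they could be done roughly as follows; the connection name ens33 is an assumption and may differ per machine:
[root@hadoop1 ~]# hostnamectl set-hostname hadoop1
[root@hadoop1 ~]# nmcli connection modify ens33 ipv4.method manual ipv4.addresses 192.168.100.11/24   # ens33 is assumed
[root@hadoop1 ~]# nmcli connection up ens33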
3. Set the mapping between IP and hostname
[root@hadoop1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.11 hadoop1
192.168.100.12 hadoop2
192.168.100.13 hadoop3
192.168.100.14 hadoop4
192.168.100.15 hadoop5
4. Configure the ssh distribution script
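The distribution script itself is not included in the original post. A minimal sketch, assuming the five hostnames from the host plan and a hypothetical script name /root/ssh_dis.sh, could look like this (it is actually run in step VI.4, once the clones exist):
[root@hadoop1 ~]# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
[root@hadoop1 ~]# cat /root/ssh_dis.sh
#!/bin/bash
# Push the local root public key to every node in the cluster (prompts for each node's password once).
for host in hadoop1 hadoop2 hadoop3 hadoop4 hadoop5; do
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host
done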
5. Decompress jdk
[root@hadoop1 ~]# tar -zxf jdk-8u131-linux-x64.tar.gz
[root@hadoop1 ~]# cp -r jdk1.8.0_131/ /usr/local/jdk
6. Decompress hadoop
[root@hadoop1 ~]# tar -zxf hadoop-2.7.4.tar.gz
[root@hadoop1 ~]# cp -r hadoop-2.7.4 /usr/local/hadoop
7. Decompress zookeeper
[root@hadoop1 ~]# tar -zxf zookeeper-3.4.10.tar.gz
[root@hadoop1 ~]# cp -r zookeeper-3.4.10 /usr/local/hadoop/zookeeper
[root@hadoop1 ~]# cd /usr/local/hadoop/zookeeper/conf/
[root@hadoop1 conf]# cp zoo_sample.cfg zoo.cfg
[root@hadoop1 conf]# vim zoo.cfg
# modify dataDir
dataDir=/usr/local/hadoop/zookeeper/data
# add the following three lines
server.1=hadoop3:2888:3888
server.2=hadoop4:2888:3888
server.3=hadoop5:2888:3888
[root@hadoop1 conf]# cd ..
[root@hadoop1 zookeeper]# mkdir data
# zookeeper is prepared here on hadoop1 only so that the clones inherit it; it will not run on hadoop1, and the per-node myid files are created later on hadoop3-5.
8. Configure environment variables
[root@hadoop1 ~]# tail -4 /etc/profile
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export ZOOKEEPER_HOME=/usr/local/hadoop/zookeeper
export PATH=.:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
[root@hadoop1 ~]# source /etc/profile
9. Test that the environment variables work
[root@hadoop1 ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[root@hadoop1 ~]# hadoop version
Hadoop 2.7.4
Subversion Unknown -r Unknown
Compiled by root on 2017-08-28T09:30Z
Compiled with protoc 2.5.0
From source with checksum 50b0468318b4ce9bd24dc467b7ce1148
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.4.jar
V. Configure hadoop
1.core-site.xml
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://master/</value></property>
  <property><name>hadoop.tmp.dir</name><value>/usr/local/hadoop/tmp</value></property>
  <property><name>ha.zookeeper.quorum</name><value>hadoop3:2181,hadoop4:2181,hadoop5:2181</value></property>
</configuration>
2.hdfs-site.xml
<configuration>
  <property><name>dfs.namenode.name.dir</name><value>/usr/local/hadoop/dfs/name</value></property>
  <property><name>dfs.datanode.data.dir</name><value>/usr/local/hadoop/dfs/data</value></property>
  <property><name>dfs.replication</name><value>2</value></property>
  <property><name>dfs.nameservices</name><value>master</value></property>
  <property><name>dfs.ha.namenodes.master</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.master.nn1</name><value>hadoop1:9000</value></property>
  <property><name>dfs.namenode.rpc-address.master.nn2</name><value>hadoop2:9000</value></property>
  <property><name>dfs.namenode.http-address.master.nn1</name><value>hadoop1:50070</value></property>
  <property><name>dfs.namenode.http-address.master.nn2</name><value>hadoop2:50070</value></property>
  <property><name>dfs.journalnode.http-address</name><value>0.0.0.0:8480</value></property>
  <property><name>dfs.journalnode.rpc-address</name><value>0.0.0.0:8485</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://hadoop3:8485;hadoop4:8485;hadoop5:8485/master</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/usr/local/hadoop/dfs/journal</value></property>
  <property><name>dfs.client.failover.proxy.provider.master</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
shell(/bin/true)</value>
  </property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
  <property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>ha.zookeeper.quorum</name><value>hadoop3:2181,hadoop4:2181,hadoop5:2181</value></property>
  <property><name>ha.zookeeper.session-timeout.ms</name><value>2000</value></property>
</configuration>
3.yarn-site.xml
<configuration>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.resourcemanager.connect.retry-interval.ms</name><value>2000</value></property>
  <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.cluster-id</name><value>yrc</value></property>
  <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
  <property><name>yarn.resourcemanager.hostname.rm1</name><value>hadoop1</value></property>
  <property><name>yarn.resourcemanager.hostname.rm2</name><value>hadoop2</value></property>
  <property><name>yarn.resourcemanager.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value></property>
  <property><name>yarn.resourcemanager.zk-address</name><value>hadoop3:2181,hadoop4:2181,hadoop5:2181</value></property>
  <property><name>yarn.resourcemanager.scheduler.address.rm1</name><value>hadoop1:8030</value></property>
  <property><name>yarn.resourcemanager.scheduler.address.rm2</name><value>hadoop2:8030</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address.rm1</name><value>hadoop1:8031</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address.rm2</name><value>hadoop2:8031</value></property>
  <property><name>yarn.resourcemanager.address.rm1</name><value>hadoop1:8032</value></property>
  <property><name>yarn.resourcemanager.address.rm2</name><value>hadoop2:8032</value></property>
  <property><name>yarn.resourcemanager.admin.address.rm1</name><value>hadoop1:8033</value></property>
  <property><name>yarn.resourcemanager.admin.address.rm2</name><value>hadoop2:8033</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm1</name><value>hadoop1:8088</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm2</name><value>hadoop2:8088</value></property>
</configuration>
4.mapred-site.xml
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>hadoop1:10020</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>hadoop1:19888</value></property>
</configuration>
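Note that a stock Hadoop 2.7.4 distribution ships only mapred-site.xml.template; if mapred-site.xml does not exist yet, it can be created from the template before adding the properties above:
[root@hadoop1 hadoop]# cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml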
5.slaves
[root@hadoop1 hadoop]# cat slaves
hadoop3
hadoop4
hadoop5
6.hadoop-env.sh
export JAVA_HOME=/usr/local/jdk    # appended at the end of hadoop-env.sh
VI. Clone the virtual machines
1. Use hadoop1 as a template to clone four virtual machines and regenerate each clone's network card MAC address
2. Modify the hostnames to hadoop2-hadoop5
3. Modify the IP addresses to 192.168.100.12-15
4. Configure passwordless SSH login between all machines (distribute the SSH public keys)
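As an illustration only, on the clone that becomes hadoop2 (the connection name ens33 is again assumed, and /root/ssh_dis.sh is the hypothetical distribution script from step IV.4), steps 2-4 might look like:
[root@hadoop2 ~]# hostnamectl set-hostname hadoop2       # log in again for the prompt to change
[root@hadoop2 ~]# nmcli connection modify ens33 ipv4.addresses 192.168.100.12/24
[root@hadoop2 ~]# nmcli connection up ens33
[root@hadoop2 ~]# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
[root@hadoop2 ~]# bash /root/ssh_dis.sh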
VII. Configure zookeeper
[root@hadoop3 ~]# echo 1 > /usr/local/hadoop/zookeeper/data/myid    # on hadoop3
[root@hadoop4 ~]# echo 2 > /usr/local/hadoop/zookeeper/data/myid    # on hadoop4
[root@hadoop5 ~]# echo 3 > /usr/local/hadoop/zookeeper/data/myid    # on hadoop5
VIII. Start the cluster
1. Start zookeeper on hadoop3-5
[root@hadoop3 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop3 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/hadoop/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop3 ~]# jps
2184 QuorumPeerMain
2237 Jps
# hadoop4 and hadoop5 are started in the same way
2. Format the ZooKeeper cluster state on hadoop1
[root@hadoop1 ~]# hdfs zkfc -formatZK
3. Start journalnode on hadoop3-5
[root@hadoop3 ~]# hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-hadoop3.out
[root@hadoop3 ~]# jps
2244 JournalNode
2293 Jps
2188 QuorumPeerMain
4. Format namenode on hadoop1
[root@hadoop1 ~]# hdfs namenode -format
...
17/08/29 22:53:30 INFO util.ExitUtil: Exiting with status 0
17/08/29 22:53:30 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.100.11
************************************************************/
5. Start the newly formatted namenode on hadoop1
[root@hadoop1 ~]# hadoop-daemon.sh start namenode
starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-hadoop1.out
[root@hadoop1 ~]# jps
2422 Jps
2349 NameNode
6. Synchronize nn1 (hadoop1) data to nn2 (hadoop2) on hadoop2
[root@hadoop2 ~]# hdfs namenode -bootstrapStandby
...
17/08/29 22:55:45 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
17/08/29 22:55:45 INFO namenode.TransferFsImage: Transfer took 0.00s at 0.00 KB/s
17/08/29 22:55:45 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 321 bytes.
17/08/29 22:55:45 INFO util.ExitUtil: Exiting with status 0
17/08/29 22:55:45 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop2/192.168.100.12
************************************************************/
7. Start namenode on hadoop2
[root@hadoop2 ~]# hadoop-daemon.sh start namenode
8. Start all services in the cluster
[root@hadoop1 ~]# start-all.sh
9. Start yarn on hadoop2
[root@hadoop2 ~]# yarn-daemon.sh start resourcemanager
10. Start the historyserver
[root@hadoop1 ~]# mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /usr/local/hadoop/logs/mapred-root-historyserver-hadoop1.out
[root@hadoop1 ~]# jps
3026 DFSZKFailoverController
3110 ResourceManager
3894 JobHistoryServer
3927 Jps
2446 NameNode
11. View the processes on the datanodes
[root@hadoop3 ~]# jps
2480 DataNode
2722 Jps
2219 JournalNode
2174 QuorumPeerMain
2606 NodeManager
[root@hadoop4 ~]# jps
2608 NodeManager
2178 QuorumPeerMain
2482 DataNode
2724 Jps
2229 JournalNode
[root@hadoop5 ~]# jps
2178 QuorumPeerMain
2601 NodeManager
2475 DataNode
2717 Jps
2223 JournalNode
IX. Testing
1. Connect to the NameNode web UIs (hadoop1:50070 and hadoop2:50070) and check which NameNode is active
2. Kill the namenode on hadoop2
[root@hadoop2 ~]# jps
2742 NameNode
3016 DFSZKFailoverController
4024 JobHistoryServer
4057 Jps
3133 ResourceManager
[root@hadoop2 ~]# kill -9 2742
[root@hadoop2 ~]# jps
3016 DFSZKFailoverController
3133 ResourceManager
4205 Jps
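To make the failover visible, the state of the two NameNodes can also be queried with hdfs haadmin before and after the kill; this is a standard Hadoop command, shown here as a suggested check rather than as part of the original run:
[root@hadoop1 ~]# hdfs haadmin -getServiceState nn1    # prints active or standby
[root@hadoop1 ~]# hdfs haadmin -getServiceState nn2
After the test, the killed NameNode can be brought back with:
[root@hadoop2 ~]# hadoop-daemon.sh start namenode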