Shulou (shulou.com), SLTechnology News&Howtos > Database. Updated 2025-01-18.
I. Environment
1. System: Red Hat Enterprise Linux Server release 6.4
2. Required software packages:
hadoop-2.2.0.tar.gz
hbase-0.98.2-hadoop2-bin.tar.gz
jdk-7u67-linux-x64.tar.gz
zookeeper-3.4.6.tar.gz
3. Services running on each machine:
192.168.10.40 master1: namenode, resourcemanager, ZKFC, hmaster
192.168.10.41 master2: namenode, ZKFC, hmaster (backup)
192.168.10.42 slave1: datanode, nodemanager, journalnode, hregionserver, zookeeper
192.168.10.43 slave2: datanode, nodemanager, journalnode, hregionserver, zookeeper
192.168.10.44 slave3: datanode, nodemanager, journalnode, hregionserver, zookeeper
II. Installation steps (for convenience, most steps are performed on master1 and then synced to the other nodes)
1. Set up passwordless SSH login
mkdir -m 700 ~/.ssh
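The passwordless login is typically done with ssh-keygen plus ssh-copy-id. The sketch below only prints the commands (a dry run) so they can be reviewed before execution; the richmail user and the host names come from the table above, and the gen_ssh_setup function name is made up for this example:

```shell
# Dry run: print the key-distribution commands instead of executing them.
# Pipe the output to sh once the user and host names are confirmed.
gen_ssh_setup() {
  echo "ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q"
  for h in master1 master2 slave1 slave2 slave3; do
    echo "ssh-copy-id richmail@$h"
  done
}
gen_ssh_setup
```

Distributing the key to every node (including master1 itself) lets the start scripts and the later scp/ssh steps run without password prompts.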
2. Install the JDK (on every node)
1) Unpack
tar zxf jdk-7u67-linux-x64.tar.gz
ln -sf jdk1.7.0_67 jdk
2) Configure
sudo vim /etc/profile
export JAVA_HOME=/home/richmail/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
3) Apply the changes
source /etc/profile
3. Install ZooKeeper
1) Unpack
tar zxf zookeeper-3.4.6.tar.gz
ln -sf zookeeper-3.4.6 zookeeper
2) Configure
vim zookeeper/bin/zkEnv.sh
ZOO_LOG_DIR="/home/richmail/zookeeper/logs"
cd zookeeper/conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/richmail/zookeeper/data
dataLogDir=/home/richmail/zookeeper/logs
clientPort=2181
server.1=slave1:2888:3888
server.2=slave2:2888:3888
server.3=slave3:2888:3888
mkdir -p /home/richmail/zookeeper/{data,logs}
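When the whole installation is scripted, the same zoo.cfg can be written non-interactively with a heredoc instead of vim. A sketch with the same values as above; it writes into a scratch directory (ZK_HOME) so it can be inspected first, whereas the real install would use /home/richmail/zookeeper:

```shell
# Write zoo.cfg from a heredoc; ZK_HOME is where the config should land.
ZK_HOME="${ZK_HOME:-$(mktemp -d)}"   # real install: /home/richmail/zookeeper
mkdir -p "$ZK_HOME/conf" "$ZK_HOME/data" "$ZK_HOME/logs"
cat > "$ZK_HOME/conf/zoo.cfg" <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/richmail/zookeeper/data
dataLogDir=/home/richmail/zookeeper/logs
clientPort=2181
server.1=slave1:2888:3888
server.2=slave2:2888:3888
server.3=slave3:2888:3888
EOF
```

The quoted 'EOF' delimiter prevents variable expansion, so the file lands byte-for-byte as written.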
3) Copy to slave1, slave2, slave3 and write each node's myid
cd
scp -rv zookeeper slave1:~/
ssh slave1 'echo 1 > /home/richmail/zookeeper/data/myid'
scp -rv zookeeper slave2:~/
ssh slave2 'echo 2 > /home/richmail/zookeeper/data/myid'
scp -rv zookeeper slave3:~/
ssh slave3 'echo 3 > /home/richmail/zookeeper/data/myid'
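The three scp/ssh pairs follow one pattern: node N must get a myid file containing N, matching the server.N lines in zoo.cfg. The sketch below expresses that pattern as a loop; it writes the myid files into local per-host staging directories (STAGE is made up for this example) so the mapping can be checked before running the real scp/ssh commands shown in the comment:

```shell
# Map each slave to its ZooKeeper myid (slave1 -> 1, slave2 -> 2, slave3 -> 3).
# STAGE stands in for each host's /home/richmail/zookeeper/data directory.
STAGE="${STAGE:-$(mktemp -d)}"
i=1
for h in slave1 slave2 slave3; do
  mkdir -p "$STAGE/$h/zookeeper/data"
  echo "$i" > "$STAGE/$h/zookeeper/data/myid"
  # Real cluster equivalent:
  #   scp -rv zookeeper $h:~/ && ssh $h "echo $i > /home/richmail/zookeeper/data/myid"
  i=$((i + 1))
done
```

A mismatched myid is a common cause of a ZooKeeper node refusing to join the quorum, so keeping the mapping in one loop avoids the copy-paste mistake of writing the same id to every host.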
4) Start ZooKeeper
Log in to slave1, slave2, and slave3 and start ZooKeeper on each:
cd ~/zookeeper/bin
./zkServer.sh start
4. Install Hadoop
1) Unpack
tar zxf hadoop-2.2.0.tar.gz
ln -sf hadoop-2.2.0 hadoop
2) Configure
cd /home/richmail/hadoop/etc/hadoop
vim core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://cluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/richmail/hadoop/storage/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>slave1:2181,slave2:2181,slave3:2181</value>
  </property>
</configuration>
mkdir -p /home/richmail/hadoop/storage/tmp
vim hadoop-env.sh
export JAVA_HOME=/home/richmail/jdk
export HADOOP_PID_DIR=/var/hadoop/pids  # defaults to /tmp
vim hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>cluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.cluster</name>
    <value>master1,master2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.master1</name>
    <value>master1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.master2</name>
    <value>master2:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster.master1</name>
    <value>master1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster.master2</name>
    <value>master2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://slave1:8485;slave2:8485;slave3:8485/cluster</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/richmail/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/richmail/hadoop/storage/journal</value>
  </property>
</configuration>
mkdir -p /home/richmail/hadoop/storage/journal
vim mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
vim yarn-env.sh
export YARN_PID_DIR=/var/hadoop/pids
vim yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master1</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
vim slaves
slave1
slave2
slave3
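All the Hadoop XML files above are lists of identical name/value <property> stanzas, so a small shell helper can emit them and cut down on copy-paste typos. A sketch (the hadoop_prop function name is made up for this example; it writes to a scratch file rather than the real config directory):

```shell
# Emit one Hadoop <property> stanza for a name/value pair.
hadoop_prop() {
  printf '  <property>\n    <name>%s</name>\n    <value>%s</value>\n  </property>\n' "$1" "$2"
}

# Example: assemble mapred-site.xml in a scratch file.
out="${out:-$(mktemp)}"
{
  echo '<configuration>'
  hadoop_prop mapreduce.framework.name yarn
  echo '</configuration>'
} > "$out"
```

The same helper covers core-site.xml, hdfs-site.xml, and yarn-site.xml by changing the list of name/value pairs.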
3) Copy to the other machines
cd
scp -rv hadoop master2:~/
scp -rv hadoop slave1:~/
scp -rv hadoop slave2:~/
scp -rv hadoop slave3:~/
4) Start Hadoop
1) Start the journalnode on slave1, slave2, and slave3
cd ~/hadoop/sbin
./hadoop-daemon.sh start journalnode
2) Execute on master1
cd ~/hadoop/bin
./hdfs zkfc -formatZK
./hdfs namenode -format
cd ../sbin
./hadoop-daemon.sh start namenode
./start-all.sh
3) Execute on master2
cd ~/hadoop/bin
./hdfs namenode -bootstrapStandby
cd ../sbin
./hadoop-daemon.sh start namenode
5) Verify
Browse to 192.168.10.40:50070 and 192.168.10.41:50070; you should see the two namenodes, one active and the other standby.
Or run the following on a namenode:
hdfs haadmin -getServiceState master1
hdfs haadmin -getServiceState master2
Running hdfs haadmin -failover --forceactive master1 master2 swaps the states of the two nodes.
5. Install HBase
1) Unpack
tar zxf hbase-0.98.2-hadoop2-bin.tar.gz
ln -sf hbase-0.98.2-hadoop2 hbase
2) Configure
cd ~/hbase/conf
vim hbase-env.sh
export JAVA_HOME=/home/richmail/jdk
export HBASE_MANAGES_ZK=false
export HBASE_PID_DIR=/var/hadoop/pids
vim regionservers
slave1
slave2
slave3
vim hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://cluster/hbase</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>slave1,slave2,slave3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/richmail/hbase/zkdata</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/richmail/hbase/data</value>
  </property>
</configuration>
mkdir ~/hbase/{zkdata,data}
HBase has a startup error that can only be solved by copying Hadoop's hdfs-site.xml into hbase/conf; without it, HBase cannot resolve the logical nameservice "cluster" used in hbase.rootdir.
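The fix described above is a single copy plus a sanity check. A sketch; HADOOP_CONF and HBASE_CONF default to scratch directories here (with a placeholder hdfs-site.xml) so the snippet can be tried outside the cluster, while the real paths are noted in the comments:

```shell
# Copy Hadoop's hdfs-site.xml into HBase's conf dir so HBase can resolve
# the logical nameservice "cluster" referenced by hbase.rootdir.
HADOOP_CONF="${HADOOP_CONF:-$(mktemp -d)}"   # real: ~/hadoop/etc/hadoop
HBASE_CONF="${HBASE_CONF:-$(mktemp -d)}"     # real: ~/hbase/conf
# Placeholder so the sketch runs standalone; on the cluster the file already exists.
[ -f "$HADOOP_CONF/hdfs-site.xml" ] || echo '<configuration/>' > "$HADOOP_CONF/hdfs-site.xml"
cp "$HADOOP_CONF/hdfs-site.xml" "$HBASE_CONF/hdfs-site.xml"
```

Remember to repeat the copy (or re-run the scp of hbase/) whenever hdfs-site.xml changes, or the two copies will drift apart.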
3) Copy to the other machines
cd
scp -rv hbase master2:~/
scp -rv hbase slave1:~/
scp -rv hbase slave2:~/
scp -rv hbase slave3:~/
4) Start HBase
Execute on master1:
cd ~/hbase/bin
./start-hbase.sh
Execute on master2:
./bin/hbase-daemon.sh start master --backup
At this point, the cluster deployment is complete.