Operating system: CentOS 7 64-bit
Hadoop version: hadoop-2.7.3
HBase version: hbase-1.2.4
Machines:
192.168.11.131 master1 NameNode ResourceManager QuorumPeerMain JobHistoryServer HMaster DFSZKFailoverController
192.168.11.132 master2 NameNode HMaster DFSZKFailoverController
192.168.11.133 slave1 DataNode HRegionServer NodeManager JournalNode
192.168.11.134 slave2 DataNode HRegionServer NodeManager JournalNode
192.168.11.135 slave3 DataNode HRegionServer NodeManager JournalNode
Disable the firewall and SELinux on all nodes
# firewall-cmd --state
running
# systemctl stop firewalld.service
# systemctl disable firewalld.service
# setenforce 0
# vi /etc/sysconfig/selinux
SELINUX=enforcing  -->  SELINUX=disabled
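Editing the file by hand works fine; for a non-interactive change, a one-line equivalent (a sketch, assuming the stock CentOS 7 file where the line currently reads SELINUX=enforcing) is:
# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux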
Configure the yum repository on all nodes
# cd
# mkdir apps
Download http://mirrors.163.com/centos/7/os/x86_64/Packages/wget-1.14-15.el7.x86_64.rpm and install it:
# rpm -i wget-1.14-15.el7.x86_64.rpm
# cd /etc/yum.repos.d
# wget http://mirrors.aliyun.com/repo/Centos-7.repo
# mv Centos-7.repo CentOS-Base.repo
# scp CentOS-Base.repo root@192.168.11.131:/etc/yum.repos.d/
# scp CentOS-Base.repo root@192.168.11.132:/etc/yum.repos.d/
# scp CentOS-Base.repo root@192.168.11.133:/etc/yum.repos.d/
# scp CentOS-Base.repo root@192.168.11.134:/etc/yum.repos.d/
# yum clean all
# yum makecache
# yum update
Configure NTP time synchronization
Install ntp on all nodes
# yum install -y ntp
NTP server (192.168.11.131):
# date -s "2018-05-27 23:03:30"
# vi /etc/ntp.conf
Add two lines below this comment:
# restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
server 127.127.1.0
fudge 127.127.1.0 stratum 11
Comment out the following four lines:
# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst
# systemctl start ntpd.service
# systemctl enable ntpd.service
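Once ntpd is running on the server, the peer list can be checked with the standard query tool; the local clock entry (127.127.1.0) configured above should appear in the output:
# ntpq -p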
NTP clients (the other four nodes):
# vi /etc/ntp.conf
Add two lines below the same comment:
# restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
server 192.168.11.131
fudge 127.127.1.0 stratum 11
Comment out the following four lines:
# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst
# systemctl start ntpd.service
# systemctl enable ntpd.service
# ntpdate 192.168.11.131
28 May 07:04:50 ntpdate[1714]: the NTP socket is in use, exiting
# lsof -i:123
-bash: lsof: command not found
# yum install -y lsof
# lsof -i:123
COMMAND  PID USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
ntpd    1693  ntp 16u  IPv4  25565      0t0  UDP *:ntp
ntpd    1693  ntp 17u  IPv6  25566      0t0  UDP *:ntp
ntpd    1693  ntp 18u  IPv4  25572      0t0  UDP localhost:ntp
ntpd    1693  ntp 19u  IPv4  25573      0t0  UDP localhost.localdomain:ntp
ntpd    1693  ntp 20u  IPv6  25574      0t0  UDP localhost:ntp
ntpd    1693  ntp 21u  IPv6  25575      0t0  UDP localhost.localdomain:ntp
# kill -9 1693
# ntpdate 192.168.11.131
27 May 23:06:14 ntpdate[1728]: step time server 192.168.11.131 offset -28808.035509 sec
# date
Sun May 27 23:06:17 CST 2018
Set the hostname on all nodes (permanent):
# hostnamectl set-hostname <hostname>    (master1 through slave3, one per node)
Or set it temporarily:
# hostname <hostname>
Edit the hosts file on the master node
# vi /etc/hosts
192.168.11.131 master1
192.168.11.132 master2
192.168.11.133 slave1
192.168.11.134 slave2
192.168.11.135 slave3
Copy the hosts file to the other machines
# for ip in 132 133 134 135; do scp /etc/hosts root@192.168.11.$ip:/etc/; done
Create the administrative user and group on all nodes
# groupadd hduser
# useradd -g hduser hduser
# passwd hduser
Create data directories and grant ownership
Create the following directories on each machine
# mkdir /data1
# mkdir /data2
Change the ownership
# chown hduser:hduser /data1
# chown hduser:hduser /data2
# su hduser
$ mkdir -p /data1/hadoop_data/hdfs/namenode
$ mkdir -p /data2/hadoop_data/hdfs/namenode
$ mkdir -p /data1/hadoop_data/hdfs/datanode    (not needed on the NameNode hosts)
$ mkdir -p /data2/hadoop_data/hdfs/datanode    (not needed on the NameNode hosts)
$ mkdir -p /data1/hadoop_data/pids
$ mkdir -p /data2/hadoop_data/pids
$ mkdir -p /data1/hadoop_data/hadoop_tmp
$ mkdir -p /data2/hadoop_data/hadoop_tmp
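If you prefer to create the whole tree in one command, a brace-expansion sketch (run as hduser; leave out the datanode directories on the NameNode hosts) is:
$ mkdir -p /data{1,2}/hadoop_data/{hdfs/namenode,hdfs/datanode,pids,hadoop_tmp}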
Passwordless SSH setup
On master1 and master2:
# su - hduser
$ ssh-keygen -t rsa
$ cd ~/.ssh
$ cat id_rsa.pub >> authorized_keys
On master1:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@master2
On master2:
$ scp ~/.ssh/authorized_keys hduser@master1:~/.ssh/
Create the .ssh directory on slave1, slave2, and slave3:
# mkdir /home/hduser/.ssh
# chown hduser:hduser /home/hduser/.ssh
On master1:
$ scp ~/.ssh/authorized_keys hduser@slave1:~/.ssh
$ scp ~/.ssh/authorized_keys hduser@slave2:~/.ssh
$ scp ~/.ssh/authorized_keys hduser@slave3:~/.ssh
Verify from master1 and master2
To verify, ssh (as hduser) from each of the two master nodes to the local host and to the other four nodes and confirm that login requires no password; a loop that automates the check is sketched below.
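A minimal sketch of the check, run as hduser from master1 and again from master2 (assuming the hostnames in /etc/hosts above); every hop should print the remote hostname without prompting for a password:
$ for h in master1 master2 slave1 slave2 slave3; do ssh hduser@$h hostname; done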
If the verification fails, run the following commands on all machines
$ chmod 600 ~/.ssh/authorized_keys
$ chmod 700 ~/.ssh
Configure the Java environment on all nodes
$ mkdir -p /data1/usr/src
Upload the JDK package to the /data1/usr/src directory
$ cd /data1/usr/src
$ tar xf jdk1.7.0_79.tar -C /data1/usr/
$ vi ~/.bashrc
export JAVA_HOME=/data1/usr/jdk1.7.0_79
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin
$ source ~/.bashrc
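A quick sanity check that the intended JDK is now on the PATH; the reported version should be 1.7.0_79:
$ java -version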
Configure Hadoop on master1 (as hduser)
Download hadoop-2.7.3.tar.gz and upload it to /data1/usr/src
http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
$ cd /data1/usr/src
$ tar -zxf hadoop-2.7.3.tar.gz -C /data1/usr/
$ vi /data1/usr/hadoop-2.7.3/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/data1/usr/jdk1.7.0_79
export HADOOP_PID_DIR=/data1/hadoop_data/pids
export HADOOP_PID_DIR=/data2/hadoop_data/pids
export HADOOP_MAPRED_PID_DIR=/data1/hadoop_data/pids
$ vi /data1/usr/hadoop-2.7.3/etc/hadoop/mapred-env.sh
export HADOOP_MAPRED_PID_DIR=/data2/hadoop_data/pids
$ vi /data1/usr/hadoop-2.7.3/etc/hadoop/yarn-env.sh
export YARN_PID_DIR=/data2/hadoop_data/pids
$ vi /data1/usr/hadoop-2.7.3/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://masters</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data2/hadoop_data/hadoop_tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>master1:2181,master2:2181,slave1:2181,slave2:2181,slave3:2181</value>
  </property>
</configuration>
$ vi /data1/usr/hadoop-2.7.3/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>masters</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.masters</name>
    <value>master1,master2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.masters.master1</name>
    <value>master1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.masters.master1</name>
    <value>master1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.masters.master2</name>
    <value>master2:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.masters.master2</name>
    <value>master2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data2/hadoop_data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data1/hadoop_data/hdfs/datanode,file:///data2/hadoop_data/hdfs/datanode</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://slave1:8485;slave2:8485;slave3:8485/masters</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data2/hadoop_data/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.masters</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
shell(/bin/true)</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hduser/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>8192</value>
  </property>
  <property>
    <name>dfs.qjournal.write-txns.timeout.ms</name>
    <value>60000</value>
  </property>
</configuration>
$ vi /data1/usr/hadoop-2.7.3/etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>RM_HA_ID</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>master1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>master2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>master1:2181,master2:2181,slave1:2181,slave2:2181,slave3:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
$ cp /data1/usr/hadoop-2.7.3/etc/hadoop/mapred-site.xml.template /data1/usr/hadoop-2.7.3/etc/hadoop/mapred-site.xml
$ vi /data1/usr/hadoop-2.7.3/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
$ vi /data1/usr/hadoop-2.7.3/etc/hadoop/slaves
slave1
slave2
slave3
$ for ip in `seq 2 5`; do scp -rpq /data1/usr/hadoop-2.7.3 192.168.11.13$ip:/data1/usr; done
Configure ZooKeeper on each node
Download http://archive.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
Upload the package to the /data1/usr/src directory
Create the data directory
$ mkdir -p /home/hduser/storage/zookeeper
$ cd /data1/usr/src
$ tar -zxf zookeeper-3.4.6.tar.gz -C /data1/usr
$ cp /data1/usr/zookeeper-3.4.6/conf/zoo_sample.cfg /data1/usr/zookeeper-3.4.6/conf/zoo.cfg
$ vi /data1/usr/zookeeper-3.4.6/conf/zoo.cfg
dataDir=/home/hduser/storage/zookeeper
server.1=master1:2888:3888
server.2=master2:2888:3888
server.3=slave1:2888:3888
server.4=slave2:2888:3888
server.5=slave3:2888:3888
On each node from master1 through slave3, write that node's id into myid (one of the following per node, in order):
$ echo "1" > /home/hduser/storage/zookeeper/myid
$ echo "2" > /home/hduser/storage/zookeeper/myid
$ echo "3" > /home/hduser/storage/zookeeper/myid
$ echo "4" > /home/hduser/storage/zookeeper/myid
$ echo "5" > /home/hduser/storage/zookeeper/myid
$ cd /data1/usr/zookeeper-3.4.6/bin
$ ./zkServer.sh start
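After all five nodes have been started, check each node's role; one node should report itself as leader and the other four as follower:
$ ./zkServer.sh status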
Start the JournalNode on slave1, slave2, and slave3
$ cd /data1/usr/hadoop-2.7.3/sbin
$ ./hadoop-daemon.sh start journalnode
Confirm with jps that a JournalNode process is running on each of them; an example is shown below.
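For example, jps on a slave should list something like the following (the PIDs are illustrative only; QuorumPeerMain is the ZooKeeper process started earlier):
$ jps
2345 JournalNode
2602 QuorumPeerMain
2711 Jps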
Format the HA state in ZooKeeper on master1 (first time only)
$ cd /data1/usr/hadoop-2.7.3
$ ./bin/hdfs zkfc -formatZK
Execute the following on master1 to format the NameNode:
$ ./bin/hadoop namenode -format
Start the NameNode on master1
$ ./sbin/hadoop-daemon.sh start namenode
Synchronize the metadata on master2 (the standby node)
$ ./bin/hdfs namenode -bootstrapStandby
Or copy the metadata over from master1:
$ scp -r /data2/hadoop_data/hdfs/namenode hduser@master2:/data2/hadoop_data/hdfs/
Start the NameNode on master2
$ ./sbin/hadoop-daemon.sh start namenode
Set master1 to active
$ ./bin/hdfs haadmin -transitionToActive master1
$ ./bin/hdfs haadmin -getServiceState master1
Start the DataNodes from master1
$ ./sbin/hadoop-daemons.sh start datanode
Start HDFS (for the second and subsequent startups)
Execute on master1:
$ ./sbin/start-dfs.sh
Start YARN
Execute on master1:
$ ./sbin/start-yarn.sh
Verification
Verify the NameNodes
http://master1:50070
Overview 'master1:9000' (active)
http://master2:50070
Overview 'master2:9000' (standby)
Upload files
$ ./bin/hadoop fs -put /data1/usr/hadoop-2.7.3/etc/hadoop /test
$ ./bin/hadoop fs -ls /test
Verify NameNode failover
Kill the NameNode on master1 and confirm that master2 becomes active; a sketch of the test follows.
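A minimal sketch of the failover test (the PID lookup assumes the NameNode on master1 runs under the hduser account):
On master1:
$ jps | grep NameNode          # note the NameNode PID
$ kill -9 <NameNode PID>       # simulate a NameNode crash
Then, from either master:
$ ./bin/hdfs haadmin -getServiceState master2
The state reported for master2 should change to active.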
Verify YARN
$ ./bin/hadoop jar /data1/usr/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /test/hadoop /test/out
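Once the job finishes, the word counts are written to the output directory given on the command line (part-r-00000 is the usual reducer output file name):
$ ./bin/hadoop fs -ls /test/out
$ ./bin/hadoop fs -cat /test/out/part-r-00000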
Install HBase
On master1:
Download hbase-1.2.4-bin.tar.gz and extract it
$ cd /data1/usr/src
$ tar -zxvf hbase-1.2.4-bin.tar.gz -C /data1/usr/
$ mkdir -p /data1/hadoop_data/hbase_tmp
$ mkdir -p /data2/hadoop_data/hbase_tmp
Configure the HBase environment on master1
Configure hbase-env.sh
$ vi /data1/usr/hbase-1.2.4/conf/hbase-env.sh
export JAVA_HOME=/data1/usr/jdk1.7.0_79
export HBASE_PID_DIR=/data2/hadoop_data/pids
export HBASE_MANAGES_ZK=false
export HADOOP_HOME=/data1/usr/hadoop-2.7.3
Configure hbase-site.xml
$ vi /data1/usr/hbase-1.2.4/conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://masters/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/data2/hadoop_data/hbase_tmp</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master1,master2,slave1,slave2,slave3</value>
  </property>
</configuration>
Configure regionservers
$ vi /data1/usr/hbase-1.2.4/conf/regionservers
slave1
slave2
slave3
Configure backup-masters (the standby HMaster, which is master2 in this cluster)
$ vi /data1/usr/hbase-1.2.4/conf/backup-masters
master2
Remove the unnecessary log4j binding jar from HBase
$ cd ${HBASE_HOME}/lib
$ mv slf4j-log4j12-1.7.5.jar slf4j-log4j12-1.7.5.jar.bak
Copy the HBase installation from master1 to the other nodes
$ for ip in `seq 2 5`; do scp -rpq /data1/usr/hbase-1.2.4 192.168.11.13$ip:/data1/usr; done
Startup sequence
Start the Hadoop cluster following the startup steps above
Start HBase on master1
$ cd /data1/usr/hbase-1.2.4/bin
$ ./start-hbase.sh
Verification
$ /data1/usr/hadoop-2.7.3/bin/hadoop fs -ls /
Check that the /hbase directory has been created in HDFS.
Run bin/hbase shell to enter the HBase shell,
and enter status to view the cluster status.
Create a table
create 'test', 'cf'
Display table information
list 'test'
Insert data into the table
put 'test', 'row1', 'cf:a', 'value1'
put 'test', 'row2', 'cf:b', 'value2'
put 'test', 'row3', 'cf:c', 'value3'
Scan the table
scan 'test'
Fetch a single row
get 'test', 'row1'
Disable the table
disable 'test'
Drop the table
drop 'test'
Open http://master1:16010 in a browser to reach the HBase web UI
http://192.168.11.131:16010/master-status
Start thrift2
$ /data1/usr/hbase-1.2.4/bin/hbase-daemons.sh start thrift2
Go to the DataNode nodes and confirm with jps; an example is shown below.
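On each RegionServer host, jps should now show a Thrift server process (typically listed as ThriftServer) alongside DataNode and HRegionServer:
$ jps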