2025-02-27 Update From: SLTechnology News&Howtos
Shulou(Shulou.com) 06/03 Report--
This configuration continues the previous Hadoop pseudo-distributed installation and deployment.
Resource downloads
http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.12/zookeeper-3.4.12.tar.gz
http://mirror.bit.edu.cn/apache/hbase/stable/hbase-1.2.6-bin.tar.gz
1. Configure sudoers permissions for the hadoop management user and set up hostname resolution
[root@master1 hadoop]# vi /etc/sudoers
added
hadoop ALL=(ALL) NOPASSWD: ALL
[root@master1 hadoop]# su hadoop
hostname resolution
[hadoop@master1 ~]$ sudo vi /etc/hosts
added
192.168.120.131 master1
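The hosts entry above can be added idempotently, so re-running the setup does not duplicate it. A minimal sketch; the demo filename `hosts.demo` is a stand-in for safety (in practice you would target /etc/hosts with sudo):

```shell
# Sketch: add the host mapping only if it is not already present.
# HOSTS_FILE here is a demo file; real use targets /etc/hosts.
HOSTS_FILE="hosts.demo"
ENTRY="192.168.120.131 master1"
touch "$HOSTS_FILE"
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"  # second run adds nothing
```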
2. Zookeeper environment configuration
[hadoop@master1 src]$ pwd
/home/hadoop/src
[hadoop@master1 src]$ tar -xf zookeeper-3.4.12.tar.gz -C /home/hadoop
[hadoop@master1 src]$ cd ..
[hadoop@master1 ~]$ mv zookeeper-3.4.12 zookeeper
modify environment variables
[hadoop@master1 ~]$ vi ~/.bashrc
added
export ZOOKEEPER_HOME=/home/hadoop/zookeeper
modify
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$ZOOKEEPER_HOME/bin
apply the changes
[hadoop@master1 ~]$ source ~/.bashrc
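Appending exports blindly duplicates them on every re-run; a guarded append keeps ~/.bashrc clean. A sketch against a demo file (`bashrc.demo` and the `add_line` helper are illustrative; the export lines are the ones from this tutorial):

```shell
# Sketch: append an export line only when it is absent.
# RC points at a demo file here; in practice use "$HOME/.bashrc".
RC="bashrc.demo"
touch "$RC"
add_line() { grep -qxF "$1" "$RC" || echo "$1" >> "$RC"; }
add_line 'export ZOOKEEPER_HOME=/home/hadoop/zookeeper'
add_line 'export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$ZOOKEEPER_HOME/bin'
add_line 'export ZOOKEEPER_HOME=/home/hadoop/zookeeper'  # no duplicate added
```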
Modify the ZooKeeper configuration file
[hadoop@master1 ~]$ cd zookeeper/conf
[hadoop@master1 conf]$ cp zoo_sample.cfg zoo.cfg
modify
dataDir=/home/hadoop/zookeeper/data
added
dataLogDir=/home/hadoop/zookeeper/datalog
[hadoop@master1 conf]$ mkdir /home/hadoop/zookeeper/data
[hadoop@master1 conf]$ mkdir /home/hadoop/zookeeper/datalog
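The config edits above can be scripted in one pass: derive zoo.cfg from the shipped sample, rewrite dataDir, append dataLogDir, and create both directories. A sketch; `zk.demo` and the sample-file contents are stand-ins for the real /home/hadoop/zookeeper install:

```shell
# Sketch: generate zoo.cfg from zoo_sample.cfg with local data/log dirs.
# ZK_HOME is a demo path; in practice it is /home/hadoop/zookeeper.
ZK_HOME="zk.demo"
mkdir -p "$ZK_HOME/conf" "$ZK_HOME/data" "$ZK_HOME/datalog"
# Stand-in for the sample config that ships with ZooKeeper:
printf 'tickTime=2000\ndataDir=/tmp/zookeeper\nclientPort=2181\n' > "$ZK_HOME/conf/zoo_sample.cfg"
# Rewrite dataDir, then append dataLogDir:
sed "s|^dataDir=.*|dataDir=$ZK_HOME/data|" "$ZK_HOME/conf/zoo_sample.cfg" > "$ZK_HOME/conf/zoo.cfg"
echo "dataLogDir=$ZK_HOME/datalog" >> "$ZK_HOME/conf/zoo.cfg"
```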
Start ZK service
[hadoop@master1 conf]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
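Beyond the STARTED banner, you can ask the server directly with the four-letter `ruok` command; a healthy server replies `imok`. A hedged sketch (assumes `nc` is available; the four-letter words are enabled by default in ZooKeeper 3.4.x):

```shell
# Sketch: probe a ZooKeeper server with the "ruok" four-letter word.
check_zk() {
  local host="${1:-localhost}" port="${2:-2181}" reply
  reply=$(echo ruok | nc -w 2 "$host" "$port" 2>/dev/null)
  [ "$reply" = "imok" ]
}
check_zk localhost 2181 && echo "ZooKeeper is up" || echo "ZooKeeper not reachable"
```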
3. HBase environment configuration
[hadoop@master1 src]$ pwd
/home/hadoop/src
[hadoop@master1 src]$ tar -zxf hbase-1.2.6-bin.tar.gz -C ../
[hadoop@master1 ~]$ mv hbase-1.2.6 hbase
modify environment variables
[hadoop@master1 ~]$ vi ~/.bashrc
added
export HBASE_HOME=/home/hadoop/hbase
modify
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin
apply the changes
[hadoop@master1 ~]$ source ~/.bashrc
Copy the Hadoop configuration files to HBase
[hadoop@master1 ~]$ cp hadoop/etc/hadoop/core-site.xml hbase/conf/
[hadoop@master1 ~]$ cp hadoop/etc/hadoop/hdfs-site.xml hbase/conf/
Edit the HBase configuration files
[hadoop@master1 ~]$ cd hbase/conf
[hadoop@master1 conf]$ vi hbase-env.sh
added
export JAVA_HOME=/home/hadoop/dk
export HBASE_MANAGES_ZK=false
[hadoop@master1 conf]$ vi hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master1:2181</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
  </property>
</configuration>
Description:
The hbase.master.info.port parameter sets the port of the HBase master web UI. In HBase 1.x the default is 16010; setting it to 60010 restores the port used by pre-1.0 releases.
Start the HBase service
[hadoop@master1 conf]$ start-hbase.sh
starting master, logging to /home/hadoop/hbase/logs/hbase-hadoop-master-master1.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
If these warnings appear, comment out the obsolete PermGen options (Java 8 removed them):
[hadoop@master1 conf]$ vi hbase-env.sh
comment out
#export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
#export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
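Commenting out the two lines can also be done with sed. A sketch on a local copy; the demo filename is illustrative, and in practice you would edit /home/hadoop/hbase/conf/hbase-env.sh:

```shell
# Sketch: comment out the PermGen options that Java 8 no longer supports.
# ENV_FILE is a demo copy; real use targets hbase/conf/hbase-env.sh.
ENV_FILE="hbase-env.demo.sh"
cat > "$ENV_FILE" <<'EOF'
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
EOF
# Prefix a '#' on every export line that mentions PermSize:
sed -i 's/^\(export HBASE_.*PermSize.*\)$/#\1/' "$ENV_FILE"
```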
Check the running services
[hadoop@master1 conf]$ jps
1632 SecondaryNameNode
1793 ResourceManager
1362 NameNode
1491 DataNode
1913 NodeManager
5374 QuorumPeerMain
5966 HMaster
6255 Jps
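A scripted check of the jps output can flag any daemon that failed to start. A sketch (assumes a JDK's `jps` on PATH; it degrades to "NOT running" messages otherwise):

```shell
# Sketch: report which expected daemons show up in jps output.
required="NameNode DataNode SecondaryNameNode QuorumPeerMain HMaster"
running=$(jps 2>/dev/null || true)
for d in $required; do
  if echo "$running" | grep -qw "$d"; then
    echo "$d: running"
  else
    echo "$d: NOT running"
  fi
done
```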
You can now open the HBase web UI at http://192.168.120.131:60010.
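The web UI can also be checked from the command line. A sketch using curl (the IP and port are the ones from this tutorial; assumes curl is installed):

```shell
# Sketch: probe the HBase master web UI; -f makes curl fail on HTTP errors.
check_ui() { curl -fsS -o /dev/null --max-time 3 "$1"; }
check_ui "http://192.168.120.131:60010/" && echo "web UI up" || echo "web UI not reachable"
```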
© 2024 shulou.com SLNews company. All rights reserved.