2025-01-17 Update From: SLTechnology News&Howtos > Database
Shulou (Shulou.com) 06/01 Report --
Hadoop/HBase: Hot-Adding a New Node
Environment:
192.168.137.101 hd1
192.168.137.102 hd2
192.168.137.103 hd3
192.168.137.104 hd4
A four-node Hadoop and HBase cluster; the new node will be 192.168.137.105 (hd5).
1. Set the hostname on the new node
vi /etc/sysconfig/network    (set HOSTNAME=hd5)
hostname hd5
The new hostname takes effect for your shell only after logging out and back in.
Check the firewall status: service iptables status
Stop the firewall: service iptables stop
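Note that stopping iptables with service only lasts until the next reboot; on the same CentOS 6-style systems, chkconfig disables the service persistently. A minimal sketch, shown as a dry run (echo prints the commands instead of executing them, since both require root; drop the echo to run them):

```shell
# Dry run: print the firewall commands instead of executing them (need root).
echo service iptables stop    # stop the firewall now
echo chkconfig iptables off   # keep it disabled across reboots
```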
2. Edit /etc/hosts on hd5
Add the line: 192.168.137.105 hd5 (alongside the existing hd1-hd4 entries)
3. Distribute it to hd1, hd2, hd3, hd4
scp /etc/hosts hd1:/etc
scp /etc/hosts hd2:/etc
scp /etc/hosts hd3:/etc
scp /etc/hosts hd4:/etc
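The four copies can be collapsed into one loop. A sketch, again as a dry run (echo prints each command rather than running it, since a real copy needs ssh access to every node):

```shell
# Distribute /etc/hosts to the existing nodes; remove "echo" to copy for real.
for h in hd1 hd2 hd3 hd4; do
  echo scp /etc/hosts "$h":/etc
done
```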
4. On the hd5 node, delete the old public and private key files in ~/.ssh and regenerate them
cd ~/.ssh
rm id_rsa
rm id_rsa.pub
ssh-keygen -t rsa
5. Copy the authorized_keys file from the original hd1 node to hd5, then append the new public key:
cat ~/.ssh/id_rsa.pub >> authorized_keys
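One pitfall worth guarding against here: sshd silently ignores authorized_keys when the file or ~/.ssh is group- or world-writable, so passwordless login fails with no obvious error. A safe, re-runnable sketch that tightens the modes after the key is appended:

```shell
# Ensure ~/.ssh and authorized_keys have the permissions sshd requires.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```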
6. Distribute the updated authorized_keys to the other nodes
scp ~/.ssh/authorized_keys hd1:/home/hadoop/.ssh
scp ~/.ssh/authorized_keys hd2:/home/hadoop/.ssh
scp ~/.ssh/authorized_keys hd3:/home/hadoop/.ssh
scp ~/.ssh/authorized_keys hd4:/home/hadoop/.ssh
7. From each node, log in to hd5 over ssh once (and it is best to ssh from hd5 to itself once as well)
On hd1: ssh hd5 date
On hd2: ssh hd5 date
On hd3: ssh hd5 date
On hd4: ssh hd5 date
On hd5: ssh hd5 date
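These first logins exist to seed each node's ~/.ssh/known_hosts with hd5's host key, so later scripted connections are not blocked by the interactive confirmation prompt. On OpenSSH 7.6 or newer, the same effect can be had non-interactively with StrictHostKeyChecking=accept-new; a dry-run sketch (echo prints the commands; remove it to run them):

```shell
# Accept hd5's host key automatically on first contact from every node.
for h in hd1 hd2 hd3 hd4 hd5; do
  echo ssh "$h" "ssh -o StrictHostKeyChecking=accept-new hd5 date"
done
```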
8. Copy the Hadoop and HBase installation directories from an existing node to the new node, then update the configuration
Edit Hadoop's slaves file on hd5:
vim /home/hadoop/hadoop/etc/hadoop/slaves
Add hd5
Distribute it to the other nodes:
scp /home/hadoop/hadoop/etc/hadoop/slaves hd1:/home/hadoop/hadoop/etc/hadoop
scp /home/hadoop/hadoop/etc/hadoop/slaves hd2:/home/hadoop/hadoop/etc/hadoop
scp /home/hadoop/hadoop/etc/hadoop/slaves hd3:/home/hadoop/hadoop/etc/hadoop
scp /home/hadoop/hadoop/etc/hadoop/slaves hd4:/home/hadoop/hadoop/etc/hadoop
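After the edit, the slaves file simply lists one worker hostname per line. Assuming all four existing hosts were already running datanodes (adjust if one is a dedicated master), it would read:

```
hd1
hd2
hd3
hd4
hd5
```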
9. Start the datanode on hd5
./hadoop-daemon.sh start datanode
10. Run start-balancer.sh on hd5 to rebalance the existing HDFS blocks across the cluster
start-balancer.sh
11. If HBase also runs on this cluster, deploy an HBase RegionServer on the new node
Edit:
vim /home/hadoop/hbase/conf/regionservers
Add hd5, then copy the regionservers file to hd1, hd2, hd3, hd4:
scp regionservers hd1:/home/hadoop/hbase/conf
scp regionservers hd2:/home/hadoop/hbase/conf
scp regionservers hd3:/home/hadoop/hbase/conf
scp regionservers hd4:/home/hadoop/hbase/conf
12. Start the HBase RegionServer on hd5
hbase-daemon.sh start regionserver
13. Start the hbase shell on hd1 and hd5
Confirm the cluster state with the status command.
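At the hbase> prompt, status has a few stock variants (these are standard hbase shell commands, not specific to this setup):

```
status
status 'detailed'
```

status prints a one-line summary of live and dead servers; status 'detailed' lists each RegionServer, so hd5 should appear there once its RegionServer is up.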