2025-01-22 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/01 Report
Environment:
192.168.137.101 hd1
192.168.137.102 hd2
192.168.137.103 hd3
192.168.137.104 hd4
An existing four-node Hadoop and HBase cluster; the steps below add a fifth node, hd5.
1. Modify /etc/hosts on hd5
Add:
192.168.137.105 hd5
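The edit in step 1 can be scripted so it is safe to re-run. A minimal sketch, working on a local demo copy of the file (an assumption of this example; on the real node, run as root with HOSTS_FILE=/etc/hosts):

```shell
# Sketch: idempotently append the hd5 entry to a hosts file.
# HOSTS_FILE defaults to a local demo copy so the snippet can be tried
# without root; point it at /etc/hosts on the real node.
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"
LINE="192.168.137.105 hd5"
touch "$HOSTS_FILE"
# Append only if the exact line is not already present.
grep -qxF "$LINE" "$HOSTS_FILE" || echo "$LINE" >> "$HOSTS_FILE"
```

Running the snippet twice leaves a single hd5 entry, which keeps repeated runs harmless.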
2. Distribute it to hd1, hd2, hd3, and hd4
scp /etc/hosts hd1:/etc
scp /etc/hosts hd2:/etc
scp /etc/hosts hd3:/etc
scp /etc/hosts hd4:/etc
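The four scp commands above (and the similar fan-outs in later steps) can be collapsed into one loop. A sketch with a DRY_RUN switch, which is an assumption of this example rather than part of the original procedure, so the generated commands can be inspected before anything is copied:

```shell
# Sketch: distribute one file to every existing node.
# With DRY_RUN=1 the loop only prints the scp commands it would run.
fanout() {
  file=$1
  dest=$2
  for h in hd1 hd2 hd3 hd4; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "scp $file $h:$dest"
    else
      scp "$file" "$h:$dest"
    fi
  done
}

DRY_RUN=1 fanout /etc/hosts /etc
```

The same helper can later be reused for authorized_keys, the slaves file, and the HBase configuration files.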
3. On the hd5 node, modify
/home/hadoop/hadoop/etc/hadoop/slaves
Add hd5 as the last node
4. On the hd5 node, delete the old public and private key files in ~/.ssh and regenerate them
cd ~/.ssh
rm id_rsa
rm id_rsa.pub
ssh-keygen -t rsa
5. Copy the authorized_keys file from the original hd1 node to hd5, then append the new public key
cat ~/.ssh/id_rsa.pub >> authorized_keys
6. Distribute the modified file to the other nodes
scp ~/.ssh/authorized_keys hd1:/home/hadoop/.ssh
scp ~/.ssh/authorized_keys hd2:/home/hadoop/.ssh
scp ~/.ssh/authorized_keys hd3:/home/hadoop/.ssh
scp ~/.ssh/authorized_keys hd4:/home/hadoop/.ssh
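Steps 4–5 can be rehearsed locally before touching the cluster. A sketch that regenerates a keypair non-interactively (`-N ""` sets an empty passphrase, an assumption of this example) into a demo directory rather than the real ~/.ssh:

```shell
# Sketch: regenerate an RSA keypair without prompts and append the public
# key to authorized_keys. SSH_DIR defaults to a demo directory; on hd5
# the real target is ~/.ssh.
SSH_DIR="${SSH_DIR:-./ssh.demo}"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
# Remove any stale keypair, then generate a fresh one quietly.
rm -f "$SSH_DIR/id_rsa" "$SSH_DIR/id_rsa.pub"
ssh-keygen -q -t rsa -N "" -f "$SSH_DIR/id_rsa"
# Append the new public key; sshd requires tight permissions on this file.
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```

Keeping the 700/600 permissions is important: sshd silently ignores authorized_keys files that are group- or world-writable.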
7. From each node, ssh into hd5 once so the first-time host-key prompt gets confirmed (it is best to also ssh from hd5 to itself once)
On hd1: ssh hd5 date
On hd2: ssh hd5 date
On hd3: ssh hd5 date
On hd4: ssh hd5 date
On hd5: ssh hd5 date
8. Modify the Hadoop slaves file on hd5
vim /home/hadoop/hadoop/etc/hadoop/slaves
Add hd5
Distribute it to the other nodes
scp /home/hadoop/hadoop/etc/hadoop/slaves hd1:/home/hadoop/hadoop/etc/hadoop
scp /home/hadoop/hadoop/etc/hadoop/slaves hd2:/home/hadoop/hadoop/etc/hadoop
scp /home/hadoop/hadoop/etc/hadoop/slaves hd3:/home/hadoop/hadoop/etc/hadoop
scp /home/hadoop/hadoop/etc/hadoop/slaves hd4:/home/hadoop/hadoop/etc/hadoop
9. Start the DataNode on hd5
start-dfs.sh
hadoop-daemon.sh start datanode
(If this node was cloned from a virtual machine, first delete the files under /home/hadoop/tmp and /home/hadoop/hdfs; the two directories themselves should be kept.)
10. Run start-balancer.sh on hd5 to rebalance the existing HDFS blocks across the cluster
start-balancer.sh
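The balancer also accepts a threshold argument: the maximum percentage by which any DataNode's disk utilization may deviate from the cluster average before the run is considered done. A sketch (runnable only against a live HDFS cluster, so treat it as a usage fragment):

```shell
# Rebalance until every DataNode is within 5 percentage points of the
# cluster-average utilization (the default threshold is 10).
start-balancer.sh -threshold 5
```

A lower threshold gives a more even cluster but a longer balancing run.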
11. If HBase is also running on the cluster, an HBase RegionServer needs to be deployed on hd5
Modify
vim /home/hadoop/hbase/conf/regionservers
Add hd5
vim /home/hadoop/hbase/conf/hbase-site.xml
Add hd5 to the hbase.zookeeper.quorum property:
<property>
<name>hbase.zookeeper.quorum</name>
<value>hd1,hd2,hd3,hd4,hd5</value>
</property>
12. Copy the above two files to hd1, hd2, hd3, and hd4
scp regionservers hd1:/home/hadoop/hbase/conf
scp regionservers hd2:/home/hadoop/hbase/conf
scp regionservers hd3:/home/hadoop/hbase/conf
scp regionservers hd4:/home/hadoop/hbase/conf
scp hbase-site.xml hd1:/home/hadoop/hbase/conf
scp hbase-site.xml hd2:/home/hadoop/hbase/conf
scp hbase-site.xml hd3:/home/hadoop/hbase/conf
scp hbase-site.xml hd4:/home/hadoop/hbase/conf
13. Start the HBase RegionServer on hd5
hbase-daemon.sh start regionserver
14. Start the hbase shell on hd1 and hd5
Confirm the cluster state with the status command