2025-01-17 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
I. Expansion of Hadoop
1. Configure the hosts file on every node, adding the IP and hostname of the two new nodes.
On the newly added nodes, add all of the following:
192.168.11.131 master1
192.168.11.132 master2
192.168.11.133 slave1
192.168.11.134 slave2
192.168.11.135 slave3
This gives the new nodes hostname resolution for the whole cluster.
On the old nodes, add the following:
192.168.11.136 slave4
192.168.11.137 slave5
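The hosts edits above can be scripted so they are idempotent across nodes. A minimal sketch, writing to a scratch copy at /tmp/hosts.demo so it can run anywhere; on a real node, point hosts_file at /etc/hosts and run as root:

```shell
# Append the new nodes' entries, skipping any that are already present.
# hosts_file is a scratch copy for demonstration; use /etc/hosts on a real node.
hosts_file=/tmp/hosts.demo
: > "$hosts_file"
for entry in "192.168.11.136 slave4" "192.168.11.137 slave5"; do
    grep -qF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
done
cat "$hosts_file"
```

Because of the `grep -qF` guard, rerunning the loop never duplicates entries.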
2. Preparatory work
Following the Hadoop HA and HBase HA cluster setup from the earlier production-environment posts, do the following on each new node:
Configure the hostname
Turn off the firewall and SELinux
Configure the yum repository
Configure time synchronization (as an NTP client)
Create the group and user
Create the directories
Set up passwordless login
Configure the Java environment
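Of the prep items above, passwordless login is the one most often fumbled. A minimal sketch of the key setup, using a scratch directory under /tmp so it runs anywhere; on a real node the files live in ~hduser/.ssh, and `ssh-copy-id hduser@slave4` installs the key for you:

```shell
# Generate a key pair for hduser and install the public key.
# The demo dir stands in for ~hduser/.ssh; permissions mirror what sshd requires.
demo=/tmp/ssh-demo
mkdir -p "$demo" && chmod 700 "$demo"
[ -f "$demo/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$demo/id_rsa"
# On every other node this public key goes into authorized_keys
# (ssh-copy-id hduser@<host> does exactly this):
cat "$demo/id_rsa.pub" >> "$demo/authorized_keys"
chmod 600 "$demo/authorized_keys"
```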
3. Modify the hadoop configuration file and copy the hadoop files
Log in to the master1 node
$ cd /data1/usr/hadoop-2.7.3/etc/hadoop
$ vi slaves
Add the new node entries:
slave4
slave5
$ for ip in 2 3 4 5; do scp /data1/usr/hadoop-2.7.3/etc/hadoop/slaves 192.168.11.13$ip:/data1/usr/hadoop-2.7.3/etc/hadoop/; done
Copy the hadoop files to the new nodes
$ scp -rpq /data1/usr/hadoop-2.7.3 hduser@slave4:/data1/usr
$ scp -rpq /data1/usr/hadoop-2.7.3 hduser@slave5:/data1/usr
4. Clear the logs
Log in to each newly added server
$ cd /data1/usr/hadoop-2.7.3/logs
$ rm *.log.*
$ rm *.out.*
$ for i in `find . -name "*.log" -o -name "*.out"`; do cat /dev/null > $i; done
Delete other leftover files
Compare against an old datanode and delete any redundant files and directories under /data1/usr/hadoop-2.7.3.
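The comparison against an old datanode can be done by diffing file listings. A sketch on two scratch directories; on real hosts, capture `find . | sort` from each node (e.g. over ssh) and diff the two listings. The tmp/dfs path here is a hypothetical example of stale state, not from the original post:

```shell
# Build two scratch trees: "old" mimics a clean datanode layout, "new" mimics
# the scp'd copy that carried over stale runtime state under tmp/.
old=/tmp/hadoop-old; new=/tmp/hadoop-new
rm -rf "$old" "$new"
mkdir -p "$old/logs" "$new/logs" "$new/tmp/dfs"
# Anything that appears only in the new tree is a candidate for deletion:
diff <(cd "$old" && find . | sort) <(cd "$new" && find . | sort) || true
```

Lines prefixed `>` in the diff exist only on the new node and can be reviewed for removal.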
5. Start the service
Log in to the new node and start datanode
$ cd /data1/usr/hadoop-2.7.3/sbin
$ ./hadoop-daemon.sh start datanode
Log in to the master1 node and refresh the datanode list
$ cd /data1/usr/hadoop-2.7.3/bin
$ ./hdfs dfsadmin -refreshNodes
View list information
$ ./hdfs dfsadmin -report
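To confirm the new datanodes registered, filter the report for their hostnames. The sketch below runs against a canned two-line sample in the Hadoop 2.x report shape; on master1 pipe the real command instead: `./hdfs dfsadmin -report | grep -E 'slave4|slave5'`.

```shell
# Count report lines mentioning the new nodes; expect one "Name:" line each.
# The here-doc is an illustrative stand-in for real dfsadmin -report output.
report=$(cat <<'EOF'
Live datanodes (5):
Name: 192.168.11.136:50010 (slave4)
Name: 192.168.11.137:50010 (slave5)
EOF
)
echo "$report" | grep -cE 'slave4|slave5'
```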
6. Load balancing for HDFS
$ cd /data1/usr/hadoop-2.7.3/sbin
$ ./start-balancer.sh
If your cluster has a dedicated balancer node and enough bandwidth, consider tuning the balance process, since by default it takes a long time. For specific tuning steps, see the post I reposted on optimizing Hadoop Balancer speed.
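One common balance optimization is raising the per-datanode copy bandwidth: `dfs.datanode.balance.bandwidthPerSec` and the matching `dfsadmin -setBalancerBandwidth` command take a value in bytes per second, so it has to be computed. A sketch, where 100 MB/s is an example figure, not from the original post:

```shell
# Convert 100 MB/s to the bytes-per-second value the command expects.
bw=$((100 * 1024 * 1024))
echo "$bw"
# On master1 (takes effect on the datanodes without a restart):
#   ./hdfs dfsadmin -setBalancerBandwidth $bw
# then rerun ../sbin/start-balancer.sh, optionally with -threshold 5
```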
Start yarn
$ cd /data1/usr/hadoop-2.7.3/sbin
$ ./yarn-daemon.sh start nodemanager
Check the cluster status
$ cd /data1/usr/hadoop-2.7.3/bin
$ ./yarn rmadmin -refreshNodes
$ ./yarn node -list
7. If you also need to add journalnode nodes, the journalnode service must be added as well; that change was not made here, but the steps are as follows.
Modify the JournalNode address list; the number of JournalNodes must remain odd.
$ vim /data1/usr/hadoop-2.7.3/etc/hadoop/hdfs-site.xml
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://slave1:8485;slave2:8485;slave3:8485;slave4:8485;slave5:8485/mycluster</value>
</property>
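Since the JournalNode quorum must stay odd, it is worth sanity-checking the qjournal list before rolling it out (the node list below is the one from hdfs-site.xml above). Afterwards, start the service on each new host with `../sbin/hadoop-daemon.sh start journalnode` and restart the namenodes so they pick up the changed shared edits dir:

```shell
# Count the entries in the qjournal URI and verify the count is odd.
jns="slave1:8485;slave2:8485;slave3:8485;slave4:8485;slave5:8485"
count=$(echo "$jns" | tr ';' '\n' | wc -l)
[ $((count % 2)) -eq 1 ] && echo "journalnode count $count is odd: ok"
```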
II. Expansion of HBase
1. Log in to master1 and modify the configuration file
$ cd /data1/usr/hbase-1.2.4/conf
$ vi regionservers
Add the new node hosts:
slave4
slave5
$ for ip in 2 3 4 5; do scp /data1/usr/hbase-1.2.4/conf/regionservers 192.168.11.13$ip:/data1/usr/hbase-1.2.4/conf/; done
Log in to a new node, create a directory, and modify permissions
2. Copy the hbase file to the new node
Log in to the master1 node
$ scp -rpq /data1/usr/hbase-1.2.4 hduser@192.168.11.136:/data1/usr
$ scp -rpq /data1/usr/hbase-1.2.4 hduser@192.168.11.137:/data1/usr
Clean up the log
$ cd /data1/usr/hbase-1.2.4/logs
$ rm *.out.*
$ > hbase-hduser-master-master1.log
$ > hbase-hduser-master-master1.out
3. Log in to the new node and start the service
$ cd /data1/usr/hbase-1.2.4/bin
$ ./hbase-daemon.sh start regionserver
$ ./hbase shell
status