2025-04-05 Update From: SLTechnology News&Howtos shulou
Shulou (Shulou.com) 06/03 Report --
Problem description:
Cluster layout:
192.168.22.178 master1
192.168.22.179 master2
192.168.22.40 data1&zk&kafka&es
192.168.22.69 data2&zk&kafka&es
192.168.22.177 data3&kafka&es
192.168.22.180 data4
Hosts 192.168.22.40 and 192.168.22.177 are both datanodes of the hadoop cluster and regionservers of hbase. Their data is stored in the /data and /data2 directories. The disks mounted at those directories were lost due to an abnormal event, so the cluster is in an abnormal state and cannot provide service.
Restore steps:
1. Wait for the CVM to recover, then re-apply for the disks and format and mount them.
2. Rebuild the zk, kafka and es clusters. Copy the directories under /data and /data1 on host 192.168.22.69 to the two machines, then delete the log files and data.
3. Synchronize the journal data: copy the /data/hadoop_data/journal/masters directory on host 192.168.22.69 to the /data/hadoop_data/journal directory of the other two machines.
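The journal copy in step 3 can be sketched as a small loop. The hosts and paths come from the text above; scp is only one reasonable transport, and the commands are echoed rather than executed, since they assume live cluster hosts:

```shell
# Sketch of step 3: push the surviving journal data from the healthy node
# (192.168.22.69) to the two rebuilt nodes. Hosts and paths are taken from
# the article; the loop echoes the commands instead of running them because
# they only make sense against the real cluster.
SRC_HOST=192.168.22.69
SRC_DIR=/data/hadoop_data/journal/masters
DEST_DIR=/data/hadoop_data/journal
for host in 192.168.22.40 192.168.22.177; do
  cmd="scp -r ${SRC_HOST}:${SRC_DIR} ${host}:${DEST_DIR}/"
  echo "$cmd"
done
```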
4. Start the two hadoop services and switch the hbase data storage directory.
$ vim /data/hbase/conf/hbase-site.xml
Change the hbase.rootdir property from:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://masters/hbase</value>
</property>
to:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://masters/hbase1</value>
</property>
Synchronize configuration files to all cluster nodes
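Synchronizing the edited file to every node can be sketched as a loop over the hosts listed at the top of the article; the node list is an assumption assembled from those IPs, and the commands are echoed, not executed:

```shell
# Sketch: push the edited hbase-site.xml to every cluster node. The node
# list is assembled from the hosts named earlier in the article; the scp
# commands are echoed rather than run, since they need the real machines.
CONF=/data/hbase/conf/hbase-site.xml
NODES="192.168.22.178 192.168.22.179 192.168.22.40 192.168.22.69 192.168.22.177 192.168.22.180"
for node in $NODES; do
  echo "scp ${CONF} ${node}:${CONF}"
done
```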
The method above abandons the hbase data entirely, which is acceptable here because this is a test environment. If the data is important, use a different approach instead, such as the one described at:
https://blog.csdn.net/anyking0520/article/details/79064967
Log in to the zk server 192.168.22.40 and delete the hbase table metadata stored in the zk cluster:
# cd /home/box/zookeeper-3.4.6/bin
# ./zkCli.sh
After entering the shell, delete it with the following command:
rmr /hbase/table
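The same znode cleanup can also be done without an interactive session by passing the command to zkCli.sh directly. The client port 2181 is an assumption (the zookeeper default), and rmr is correct for the 3.4.x series used here (newer releases renamed it deleteall). Echoed rather than executed, since it needs a live ensemble:

```shell
# Sketch: non-interactive variant of the zkCli cleanup. The -server address
# and port 2181 are assumptions (zookeeper's default client port); the
# command is echoed because it requires a running zk ensemble.
ZK_BIN=/home/box/zookeeper-3.4.6/bin
ZNODE=/hbase/table
echo "${ZK_BIN}/zkCli.sh -server 192.168.22.40:2181 rmr ${ZNODE}"
```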
5. Leave hadoop cluster safe mode.
On the primary node:
$ cd /data/hadoop/bin
$ ./hadoop dfsadmin -safemode leave
6. Start the cluster components in the correct order.
6.1. Restore and start the zookeeper cluster.
On the primary node
6.2. Start the hadoop cluster services. After the hadoop cluster is configured, start the whole cluster:
$ cd /data/hadoop/sbin
$ ./start-all.sh
Leave hadoop cluster safe mode, then check whether the hdfs cluster status is normal:
$ cd /data/hadoop/bin
$ ./hdfs dfsadmin -report
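The report check can be scripted; a minimal sketch, assuming the "Live datanodes (N):" line that dfsadmin -report prints (the report text below is a canned stand-in, since a real run would capture the command's output):

```shell
# Sketch of a health check on `hdfs dfsadmin -report` output. The report
# string is a canned stand-in for the command's real output; with all four
# datanodes back, the live count should be 4.
report="Configured Capacity: 1000000 (976.56 KB)
Live datanodes (4):"
live=$(printf '%s\n' "$report" | sed -n 's/^Live datanodes (\([0-9][0-9]*\)).*/\1/p')
if [ "$live" -eq 4 ]; then
  echo "all datanodes live"
else
  echo "only $live datanodes live"
fi
```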
6.3. Leave hadoop cluster safe mode.
6.4. After hbase is configured, start the hbase cluster:
$ cd /data/hbase/bin
$ ./start-hbase.sh
Check whether the hbase cluster status is normal:
$ cd /data/hbase/bin
$ ./hbase shell
After entering the shell, run the status command to check the cluster status.
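The status check can also be run without an interactive session by piping the command into the hbase shell, a common pattern for scripted checks; echoed here because it needs a running cluster:

```shell
# Sketch: non-interactive status check. Piping a command into the hbase
# shell is a common scripting pattern; the invocation is echoed rather
# than run, since it requires a live hbase cluster.
HBASE_BIN=/data/hbase/bin
echo "echo status | ${HBASE_BIN}/hbase shell"
```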