SLTechnology News & Howtos, Shulou (Shulou.com), updated 2025-02-28
I. HBase data backup and recovery
Description:
The test environment required changing the hbase.rootdir property, which lives in the HBase configuration file hbase-site.xml.
Configuration before the change:
hbase.rootdir = hdfs://masters/hbase1
Configuration after the change:
hbase.rootdir = hdfs://masters/hbase
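Restored to its XML form, the changed property in hbase-site.xml would look roughly like this (a reconstructed sketch based only on the values above):

```xml
<!-- hbase-site.xml after the change (reconstructed sketch) -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://masters/hbase</value>
</property>
```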
After this change, the tables that originally lived under hbase1 could no longer be used, so the data had to be backed up and then imported into the new hbase root directory.
The specific process is as follows:
1. Stop the HBase service
Log in to the HBase master node:
$ cd $HBASE_HOME/bin
$ ./stop-hbase.sh
2. Back up hbase1
Check the size of /hbase1:
$ ./hdfs dfs -du -s -h /hbase1
2.8 G  /hbase1
Back up:
Log in to the Hadoop master node:
$ cd $HADOOP_HOME/bin
$ ./hadoop distcp hdfs://192.168.22.178:9000/hbase1 hdfs://192.168.22.178:9000/backuphbase
Some tasks failed during execution, with messages like the following:
INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null
Caused by: java.io.IOException: Couldn't run retriable-command: Copying
The cause was not clear at the time, so the error was set aside for the moment and the copy was verified by size instead.
Check the size of the backup:
$ ./hdfs dfs -du -s -h /backuphbase
2.8 G  /backuphbase
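Because ignoreFailures=false, the failed copy tasks made distcp report errors; one common follow-up (not used here) is to re-run distcp with the -update flag so only missing or changed files are copied again. In this case the copy was sanity-checked by comparing sizes, which can be sketched as below. The byte counts are hypothetical, and the hdfs commands in the comment assume the paths used above:

```shell
# Hedged sketch: verify a distcp copy by comparing byte counts.
# On a real cluster the counts would come from, e.g.:
#   src=$(./hdfs dfs -du -s /hbase1 | awk '{print $1}')
#   dst=$(./hdfs dfs -du -s /backuphbase | awk '{print $1}')
compare_sizes() {
  if [ "$1" -eq "$2" ]; then
    echo "MATCH"
  else
    echo "MISMATCH: source=$1 backup=$2"
  fi
}
# Hypothetical byte counts for illustration:
compare_sizes 3006477107 3006477107
```

Comparing exact byte counts is stricter than the human-readable "2.8 G" output, which rounds and can hide small differences.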
Delete hbase1:
$ ./hdfs dfs -rm -r /hbase1
18/06/15 10:58:10 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /hbase1
3. Restore the files to hbase
$ ./hdfs dfs -mkdir /hbase
$ ./hadoop distcp hdfs://192.168.22.178:9000/backuphbase/* hdfs://192.168.22.178:9000/hbase
II. Dealing with tables that cannot be deleted after the recovery
1. Start the HBase service
$ cd $HBASE_HOME/bin
$ ./start-hbase.sh
2. Find the problem
Open http://192.168.22.178:16010 in a browser (the HBase master web UI).
The "Other Regions" column shows 1 for two tables (table1 and table2) and 0 for all the others, and dropping those two tables fails. The attempted drop looks like this:
$ cd $HBASE_HOME/bin
$ ./hbase shell
hbase(main):001:0> disable 'table1'
hbase(main):002:0> drop 'table1'
The drop command hangs at this point, and the same happens with the other table.
3. Solve the problem
3.1 Delete the two tables' files in the HDFS file system
$ cd $HADOOP_HOME/bin
$ ./hdfs dfs -rm -r /hbase/data/default/table1
$ ./hdfs dfs -rm -r /hbase/data/default/table2
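To confirm the removals took effect, `hdfs dfs -test -e` can be used: it exits 0 when the path exists and non-zero otherwise. A hedged sketch of the pattern follows; the final line substitutes the local `test -e` builtin on a path assumed not to exist, purely so the logic is visible without a cluster:

```shell
# Hedged sketch: report whether a path checked by an existence test is gone.
check_gone() {
  if "$@"; then
    echo "still present"
  else
    echo "gone"
  fi
}
# On the cluster this would be:
#   check_gone ./hdfs dfs -test -e /hbase/data/default/table1
# Local illustration with the ordinary `test -e` builtin:
check_gone test -e /no/such/path/for/illustration
```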
3.2 Delete the two tables' znodes in ZooKeeper
Log in to the ZooKeeper node:
$ cd $ZK_HOME/bin
$ ./zkCli.sh
[zk: localhost:2181(CONNECTED) 1] rmr /hbase/table/table1
[zk: localhost:2181(CONNECTED) 2] rmr /hbase/table/table2
3.3 Restart the services
The restart order is:
Stop the HBase cluster -> stop the Hadoop cluster -> start the Hadoop cluster -> start the HBase cluster
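The restart order can be sketched as one script, assuming the stock start/stop scripts shipped with Hadoop ($HADOOP_HOME/sbin) and HBase ($HBASE_HOME/bin), and assuming an HDFS-only Hadoop deployment (HBase depends on HDFS; a cluster also running YARN would cycle stop-yarn.sh/start-yarn.sh as well). The steps are printed rather than executed so the sequence is visible:

```shell
# Hedged sketch of the restart order; the script paths are the stock
# ones shipped with Hadoop and HBase, not taken from the article.
restart_sequence() {
  # Printed rather than executed so the order is visible;
  # replace `echo` with `eval` to run it on a real cluster.
  for step in \
    '$HBASE_HOME/bin/stop-hbase.sh' \
    '$HADOOP_HOME/sbin/stop-dfs.sh' \
    '$HADOOP_HOME/sbin/start-dfs.sh' \
    '$HBASE_HOME/bin/start-hbase.sh'
  do
    echo "$step"
  done
}
restart_sequence
```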
After startup, both tables had disappeared from the web UI.