2025-02-24 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report
This article explains how to back up HA (High Availability) metadata in Hadoop, first with shared NFS storage and then with the Quorum Journal Manager (QJM). It walks through the configuration step by step and should serve as a practical reference.
NFS mode
1. Server-side configuration (root@hz-search-zookeeper-01)
su - hbase -c "mkdir /home/hbase/hadoop_nfs && chmod 777 /home/hbase/hadoop_nfs"
echo '/home/hbase/hadoop_nfs 172.37.0.202(rw)' >> /etc/exports
service nfs restart
2. Client-side configuration (on the Hadoop NameNode)
su - hbase -c "mkdir -p /home/hbase/hadoop_nfs/name"
/etc/init.d/nfslock start
mount -t nfs 172.37.0.201:/home/hbase/hadoop_nfs/ /home/hbase/hadoop_nfs/name
echo 'mount -t nfs 172.37.0.201:/home/hbase/hadoop_nfs/ /home/hbase/hadoop_nfs/name' >> /etc/rc.d/rc.local
Configure dfs.name.dir with two directories (the local disk and the NFS mount), then restart Hadoop for the change to take effect:
<property>
  <name>dfs.name.dir</name>
  <value>/home/admin/name/,/home/admin/hadoop_nfs/name</value>
</property>
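The protection this buys can be sketched in a few lines of Python (the temporary directories are hypothetical stand-ins for the local disk and the NFS mount): the NameNode writes identical metadata to every directory listed in dfs.name.dir, so losing one copy is recoverable from the other.

```python
import os
import shutil
import tempfile

# Two stand-ins for the dfs.name.dir entries: a "local disk" and an "NFS mount".
local_dir = tempfile.mkdtemp(prefix="name_local_")
nfs_dir = tempfile.mkdtemp(prefix="name_nfs_")

def write_fsimage(data: bytes) -> None:
    # The NameNode writes the same fsimage to every configured name directory.
    for d in (local_dir, nfs_dir):
        with open(os.path.join(d, "fsimage"), "wb") as f:
            f.write(data)

write_fsimage(b"namespace metadata v1")

# Simulate losing the local disk entirely.
shutil.rmtree(local_dir)

# The NFS copy still holds a usable fsimage.
with open(os.path.join(nfs_dir, "fsimage"), "rb") as f:
    recovered = f.read()
print(recovered)  # b'namespace metadata v1'
```

This is exactly why the NFS export above is mounted on the NameNode host: it becomes just another entry in dfs.name.dir.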
QJM mode
http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
To configure the ZK cluster, refer to my other blog post, "ZooKeeper cluster installation and configuration".
hdfs-site.xml
Add the logical nameservice name:
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
Add the NameNode IDs belonging to the nameservice (at most two NameNodes per nameservice):
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>172.37.0.202:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>172.37.0.201:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>172.37.0.202:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>172.37.0.201:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://172.37.0.201:8485;172.37.0.202:8485;172.37.0.203:8485/mycluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/root/.ssh/id_rsa</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
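On the client side, the logical URI hdfs://mycluster never names a host directly: the failover proxy provider looks up dfs.ha.namenodes.mycluster and the per-NameNode rpc-address keys to find the candidate hosts, then fails over between them. The Python sketch below mirrors that lookup using the values configured above (it is an illustration of the resolution logic, not Hadoop's actual code).

```python
# Sketch of how a client resolves a logical HA nameservice to concrete
# NameNode RPC addresses, mirroring what ConfiguredFailoverProxyProvider
# does with the configuration keys shown above.
conf = {
    "dfs.nameservices": "mycluster",
    "dfs.ha.namenodes.mycluster": "nn1,nn2",
    "dfs.namenode.rpc-address.mycluster.nn1": "172.37.0.202:8020",
    "dfs.namenode.rpc-address.mycluster.nn2": "172.37.0.201:8020",
}

def resolve_nameservice(conf: dict, service: str) -> list:
    """Return the RPC address of every NameNode in the nameservice."""
    nn_ids = conf["dfs.ha.namenodes.%s" % service].split(",")
    return [
        conf["dfs.namenode.rpc-address.%s.%s" % (service, nn.strip())]
        for nn in nn_ids
    ]

addresses = resolve_nameservice(conf, "mycluster")
print(addresses)  # ['172.37.0.202:8020', '172.37.0.201:8020']
```

A client tries these addresses in turn until it finds the active NameNode, which is what makes the kill-and-failover test later in this article transparent to applications.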
core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/path/to/journal/node/local/data</value>
</property>
(Note: dfs.journalnode.edits.dir is conventionally placed in hdfs-site.xml.)
<property>
  <name>ha.zookeeper.quorum</name>
  <value>172.37.0.201:2181,172.37.0.202:2181,172.37.0.203:2181</value>
</property>
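ha.zookeeper.quorum must be a comma-separated list of host:port pairs, and a run-together value is an easy mistake to make. A small Python validator (illustrative only, not part of Hadoop) shows the expected shape:

```python
# ha.zookeeper.quorum must be a comma-separated list of host:port pairs.
# This small validator splits the value and checks each entry parses cleanly.
quorum = "172.37.0.201:2181,172.37.0.202:2181,172.37.0.203:2181"

def parse_quorum(value: str) -> list:
    servers = []
    for entry in value.split(","):
        host, _, port = entry.strip().partition(":")
        if not host or not port.isdigit():
            raise ValueError("malformed quorum entry: %r" % entry)
        servers.append((host, int(port)))
    return servers

servers = parse_quorum(quorum)
print(len(servers))  # 3
```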
Initialize the HA state in ZooKeeper; execute this on one of the NameNode nodes:
$ hdfs zkfc -formatZK
On each JournalNode host, execute the following command to start the journalnode daemon:
hadoop-daemon.sh start journalnode
[root@slave2 logs] # jps
12821 JournalNode
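Three JournalNodes are used because QJM commits an edit only once a majority of JournalNodes have acknowledged it, so a cluster of 2N+1 JournalNodes tolerates N failures. A quick Python illustration of the arithmetic:

```python
# QJM commits an edit once a majority of JournalNodes acknowledge it,
# so a cluster of 2N+1 JournalNodes tolerates N failures.
def majority(journal_nodes: int) -> int:
    return journal_nodes // 2 + 1

def tolerated_failures(journal_nodes: int) -> int:
    return journal_nodes - majority(journal_nodes)

print(majority(3), tolerated_failures(3))  # 2 1
print(majority(5), tolerated_failures(5))  # 3 2
```

This is why the qjournal:// URI above lists three hosts: writes survive the loss of any single JournalNode.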
Format HDFS on the first NameNode:
hadoop namenode -format
(On the second NameNode, copy the metadata over by running hdfs namenode -bootstrapStandby instead of formatting again.)
Start HDFS and YARN:
start-dfs.sh
start-yarn.sh
Check the status with jps. On the NameNode:
16753 QuorumPeerMain
18743 ResourceManager
18634 DFSZKFailoverController
18014 JournalNode
18234 NameNode
15797 Bootstrap
19571 Jps
18333 DataNode
18850 NodeManager
On a DataNode:
1715 DataNode
1869 NodeManager
1556 JournalNode
1142 QuorumPeerMain
2179 Jps
After setup, use jps to check the relevant processes. Kill the active NameNode with kill -9, then verify with hdfs haadmin -getServiceState nn1 that the formerly standby NameNode has become active; reads and all other operations should continue normally. To restart the NameNode that was killed, run sbin/hadoop-daemon.sh start namenode.
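The failover drill above can be scripted. The sketch below uses a stubbed get_service_state function standing in for a real call to hdfs haadmin -getServiceState; it shows the shape of such a check, polling both NameNode IDs and reporting which one is active.

```python
# Sketch of the failover check: poll each NameNode ID and report the active
# one. get_service_state is a stub standing in for a real invocation of
# `hdfs haadmin -getServiceState <nn-id>`.
STATES = {"nn1": "standby", "nn2": "active"}  # pretend nn1 was just killed

def get_service_state(nn_id: str) -> str:
    return STATES[nn_id]

def find_active(nn_ids: list) -> str:
    for nn in nn_ids:
        if get_service_state(nn) == "active":
            return nn
    return ""

active = find_active(["nn1", "nn2"])
print(active)  # nn2
```

In a real script you would replace the stub with subprocess.run(["hdfs", "haadmin", "-getServiceState", nn_id], ...) and read its stdout.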
Reset Hadoop HA
Stop Hadoop on the NameNode machine:
stop-dfs.sh
stop-yarn.sh
Stop ZK on all machines:
zkServer.sh stop
Delete the ZK temporary files on all machines:
rm -rf /tmp/hadoop-root/zookeeper/version-2
Delete the JournalNode temporary files on all machines:
rm -rf /home/hadoop/hadoop-root/journal/node/local/data/*
Delete the NameNode and DataNode files on all machines:
rm -rf /home/hadoop/hadoop-root/dfs/name/*
rm -rf /home/hadoop/hadoop-root/dfs/data/*
Start ZK on all machines:
zkServer.sh start
Initialize the HA state in ZooKeeper; execute this on one of the NameNode nodes:
hdfs zkfc -formatZK
On each JournalNode host, execute the following command to start the journalnode daemon:
hadoop-daemon.sh start journalnode
Run on the NameNode node:
hadoop namenode -format
Start Hadoop:
start-dfs.sh
start-yarn.sh
Check the node status at:
http://172.37.0.201:50070/dfshealth.jsp
http://172.37.0.202:50070/dfshealth.jsp
This concludes "How to back up HA metadata in Hadoop". Thanks for reading, and I hope it proves useful.