hadoop-daemon.sh is different from hadoop-daemons.sh:
hadoop-daemon.sh starts or stops a daemon on the local node only.
hadoop-daemons.sh runs the same operation on every node listed in the slaves file over SSH, so it can manage daemons remotely.
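As a quick illustration (a minimal sketch, assuming $HADOOP_HOME/sbin is on the PATH and the slaves file lists the three hosts used below):

# Start a DataNode on the local machine only:
hadoop-daemon.sh start datanode
# Start a DataNode on every host listed in the slaves file (via SSH):
hadoop-daemons.sh start datanode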
1. Start the JournalNodes (JN)
hadoop-daemons.sh start journalnode
hdfs namenode -initializeSharedEdits // copies the edits log to the JournalNodes; on a fresh cluster this is run after formatting the NameNode (see step 2)
Open http://hadoop-yarn1:8480 to check that the JournalNode is running normally.
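To confirm that all three JournalNodes are up, you can also check for the process on each host (a sketch, assuming passwordless SSH and jps on each node's PATH):

for host in hadoop-yarn1 hadoop-yarn2 hadoop-yarn3; do
  ssh "$host" jps | grep -w JournalNode   # each host should list a JournalNode process
done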
2. Format the NameNode and start the Active NameNode
Format the namenode on the Active NameNode node:
hdfs namenode -format
hdfs namenode -initializeSharedEdits
JournalNode initialization is now complete.
Start Active NameNode
hadoop-daemon.sh start namenode
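To confirm the Active NameNode came up (a sketch; port 50070 is the web UI address configured in hdfs-site.xml below):

jps | grep -w NameNode   # the NameNode process should be listed
curl -s -o /dev/null http://hadoop-yarn1:50070 && echo "NameNode web UI is up"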
3. Start the Standby NameNode
1. On the Standby NameNode node, bootstrap the Standby node, which copies the metadata from the Active NameNode to the Standby NameNode:
hdfs namenode -bootstrapStandby
2. Start the Standby node:
hadoop-daemon.sh start namenode
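Both NameNode processes should now be running (a quick check, assuming passwordless SSH between the nodes):

for host in hadoop-yarn1 hadoop-yarn2; do
  ssh "$host" jps | grep -w NameNode   # one NameNode process per host
done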
4. Start automatic failover
Create a monitoring node (ZNode) such as /hadoop-ha/ns1 on ZooKeeper:
hdfs zkfc -formatZK
start-dfs.sh
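You can verify that the ZNode was created with the ZooKeeper CLI (a sketch, assuming zkCli.sh from the ZooKeeper installation is on the PATH):

zkCli.sh -server hadoop-yarn1:2181 ls /hadoop-ha   # expected output includes [ns1]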
5. View the NameNode status
hdfs haadmin -getServiceState nn1   # prints: active
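Checking both NameNode IDs configured in hdfs-site.xml below:

for nn in nn1 nn2; do
  echo -n "$nn: "
  hdfs haadmin -getServiceState "$nn"   # one should report active, the other standby
done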
6. Failover
Trigger a failover from nn1 to nn2:
hdfs haadmin -failover nn1 nn2
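Because dfs.ha.automatic-failover.enabled is set to true below, automatic failover can also be exercised by killing the Active NameNode and watching the Standby take over (a destructive sketch for a test cluster only):

# On the host of the current Active NameNode (e.g. hadoop-yarn1):
kill -9 "$(jps | awk '$2 == "NameNode" {print $1}')"
# From any node, the former Standby should then report active:
hdfs haadmin -getServiceState nn2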
Configuration file details
core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp</value>
    </property>
    <property>
        <!-- trash retention in minutes: 60*24 = 1440 (one day) -->
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop-yarn1:2181,hadoop-yarn2:2181,hadoop-yarn3:2181</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>yuanhai</value>
    </property>
</configuration>
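To confirm the values are picked up by the client (a sketch; hdfs getconf is part of the standard HDFS CLI):

hdfs getconf -confKey fs.defaultFS        # expect: hdfs://ns1
hdfs getconf -confKey ha.zookeeper.quorum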
hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop-yarn1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop-yarn2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop-yarn1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop-yarn2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop-yarn1:8485;hadoop-yarn2:8485;hadoop-yarn3:8485/ns1</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp/journal</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
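A quick sanity check that the HA addresses resolve (same hedged assumptions as the getconf check above):

hdfs getconf -namenodes   # expect: hadoop-yarn1 hadoop-yarn2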
slaves
hadoop-yarn1
hadoop-yarn2
hadoop-yarn3
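hadoop-daemons.sh and start-dfs.sh iterate over this file via SSH, so passwordless login to each listed host should work (a quick check, assuming HADOOP_HOME is set and the hadoop user's key is already distributed):

for host in $(cat "$HADOOP_HOME/etc/hadoop/slaves"); do
  ssh -o BatchMode=yes "$host" hostname   # should print each hostname without a password prompt
done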
yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-yarn1</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>
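With this in place, YARN can be started on the ResourceManager host and the NodeManagers should register (a sketch; run on hadoop-yarn1):

start-yarn.sh
yarn node -list   # all three NodeManagers should appear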
mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <!-- MapReduce JobHistory Server IPC host:port -->
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-yarn1:10020</value>
    </property>
    <property>
        <!-- MapReduce JobHistory Server Web UI host:port -->
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-yarn1:19888</value>
    </property>
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
</configuration>
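To exercise the whole stack, start the JobHistory Server and run the bundled example job (a smoke-test sketch, assuming HADOOP_HOME points at the 2.2.0 installation):

mr-jobhistory-daemon.sh start historyserver   # run on hadoop-yarn1
hadoop jar "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 10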
hadoop-env.sh
export JAVA_HOME=/opt/modules/jdk1.6.0_24