
The environment in which HDFS HA is deployed

2025-02-28 Update From: SLTechnology News&Howtos


==> Environment architecture and deployment plan:

bigdata1: NameNode, ResourceManager, ZooKeeper, JournalNode, FailoverController

bigdata2: NameNode, ResourceManager, ZooKeeper, JournalNode, FailoverController

bigdata3: DataNode, NodeManager, ZooKeeper

bigdata4: DataNode, NodeManager

==> Prepare the environment:

(*) Clear out any previous configuration

(*) Install the JDK, add all hosts to /etc/hosts, turn off the firewall, and set up passwordless SSH login
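The preparation steps above can be sketched as a short script. The IP addresses below are placeholders (the article never gives them), and the firewall commands assume a systemd-based distribution such as CentOS 7:

```shell
# Build the /etc/hosts entries for the four cluster nodes.
# NOTE: the 192.168.1.x addresses are made up for illustration -- use your own.
cat > hosts.append <<'EOF'
192.168.1.101 bigdata1
192.168.1.102 bigdata2
192.168.1.103 bigdata3
192.168.1.104 bigdata4
EOF

# Then, as root on every host:
#   cat hosts.append >> /etc/hosts
#   systemctl stop firewalld && systemctl disable firewalld
#
# Passwordless login: generate a key once on bigdata1, then copy it everywhere:
#   ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
#   for h in bigdata1 bigdata2 bigdata3 bigdata4; do ssh-copy-id root@$h; done

cat hosts.append
```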

# hdfs-site.xml

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>bigdata1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>bigdata2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>bigdata1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>bigdata2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://bigdata1:8485;bigdata2:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
shell(/bin/true)</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>

# core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/app/hadoop-2.7.1/tmp/</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>bigdata1:2181,bigdata2:2181,bigdata3:2181</value>
  </property>
</configuration>

# mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

# yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>bigdata1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>bigdata2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>bigdata1:2181,bigdata2:2181,bigdata3:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

# slaves

bigdata3
bigdata4

==> Copy the configured installation files to the other hosts:

scp -r hadoop-2.7.1 bigdata2:/data/app

scp -r hadoop-2.7.1 bigdata3:/data/app

scp -r hadoop-2.7.1 bigdata4:/data/app

==> Start the JournalNodes (on bigdata1 and bigdata2, per the plan above):

hadoop-daemon.sh start journalnode

==> Format the NameNode

Note that you must first create the directory specified by hadoop.tmp.dir in core-site.xml, otherwise formatting will fail with an error.

In this configuration that directory is /data/app/hadoop-2.7.1/tmp/, so create it first:

mkdir /data/app/hadoop-2.7.1/tmp/

Format the NameNode:

hdfs namenode -format

Copy the dfs directory under tmp to the same location on bigdata2:

scp -r /data/app/hadoop-2.7.1/tmp/dfs bigdata2:/data/app/hadoop-2.7.1/tmp
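As an aside, the Hadoop HA documentation also offers a command for this step: instead of copying tmp/dfs by hand, the second NameNode can pull the formatted metadata from the active one. A sketch, run on bigdata2 after bigdata1's NameNode has been formatted and started:

```shell
# On bigdata2 only: initialize the standby NameNode from the active one.
# Equivalent in effect to the scp of tmp/dfs shown above -- both leave
# bigdata2 with the same clusterID/namespaceID as bigdata1.
hdfs namenode -bootstrapStandby
```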

==> Format ZooKeeper (on bigdata1):

ZooKeeper must be running for this to succeed; otherwise you will get: WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect

java.net.ConnectException: Connection refused

zkServer.sh start (run on bigdata1, bigdata2, and bigdata3, i.e. the machines hosting the ZooKeeper cluster)

hdfs zkfc -formatZK

==> At this point the environment is fully deployed; start the whole cluster:

1. Start ZooKeeper (on bigdata1, bigdata2, bigdata3):

(If ZooKeeper is not started before the NameNodes, both NameNodes will remain in standby state.)

zkServer.sh start

2. Start the HDFS cluster:

start-all.sh (run on bigdata1)

yarn-daemon.sh start resourcemanager (run on bigdata2)

==> Run jps on each host to check its status:

###

[root@bigdata1 app]# jps

22224 JournalNode

22400 DFSZKFailoverController

22786 Jps

22019 NameNode

21405 QuorumPeerMain

22493 ResourceManager

###

[root@bigdata2 app]# jps

9408 QuorumPeerMain

9747 DFSZKFailoverController

9637 JournalNode

9929 Jps

9850 ResourceManager

9565 NameNode

###

[root@bigdata3 app]# jps

7664 DataNode

7531 QuorumPeerMain

7900 Jps

7775 NodeManager

###

[root@bigdata4 ~]# jps

7698 NodeManager

7587 DataNode

7823 Jps

###

Test: open the web UI on port 50070 (bigdata1 and bigdata2), which displays each NameNode's state (active/standby).

You can kill the NameNode process on the active machine and then watch the other NameNode's state change.
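That failover drill can be scripted. `hdfs haadmin -getServiceState` reports a NameNode's HA state directly; the pid extraction is demonstrated against the sample jps listing above so the parsing can be checked offline, while the kill and haadmin steps (in comments) assume the live cluster from this guide:

```shell
# Extract the NameNode pid from jps-style output (sample taken from the
# bigdata1 listing above).
jps_output='22019 NameNode
21405 QuorumPeerMain
22493 ResourceManager'
nn_pid=$(printf '%s\n' "$jps_output" | awk '$2 == "NameNode" {print $1}')
echo "NameNode pid: $nn_pid"

# On the live cluster you would then run:
#   kill -9 "$nn_pid"                     # kill the active NameNode
#   hdfs haadmin -getServiceState nn1     # the killed node: connection error
#   hdfs haadmin -getServiceState nn2     # should report: active
```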
