

HA Construction of Hadoop


To edit the file, execute: /soft/hadoop-2.7.1/etc/hadoop_cluster$ gedit hdfs-site.xml

[configuration section]

[hdfs-site.xml]

1. Configure the name service: dfs.nameservices

The logical name of the name service.

<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>

2. Configure each namenode in the nameservice: dfs.ha.namenodes.[nameservice ID]

<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>

Note: as of Hadoop 2.7.2, at most two namenodes can be configured per nameservice.

3. Configure the RPC address for each namenode

<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>s1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>s8:8020</value>
</property>

4. Configure the web UI (HTTP) address for each namenode

<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>machine1.example.com:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>machine2.example.com:50070</value>
</property>

5. Configure the namenode shared edit log directory

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://s1:8485;s7:8485;s8:8485/mycluster</value>
</property>

6. Configure the client failover proxy provider class

Used by the client to determine which namenode is currently active.

<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

7. (Optional) Configure a list of HA fencing methods.

QJM prevents split-brain, so that no two namenodes can be active at the same time.

You can configure sshfence or a shell script, for example as sketched below.
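
A minimal sketch of the sshfence variant in hdfs-site.xml (the two property names are the standard Hadoop ones; the private-key path is an assumption for the ubuntu user on this cluster):

<!-- fence the old active namenode by ssh-ing in and killing its process -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<!-- assumed key location for the ubuntu user; adjust to your setup -->
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/ubuntu/.ssh/id_rsa</value>
</property>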

8. Configure the file system for HDFS

[core-site.xml]

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

9. Configure the JN's local edit log storage directory

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/ubuntu/hadoop/journal</value>
</property>

2. Change fs.defaultFS in the core-site.xml file to the following and distribute it to each virtual machine:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

3. Then go back to hdfs-site.xml, delete the fs.defaultFS / hdfs://mycluster entry from it (it belongs in core-site.xml), and distribute that file to each virtual machine as well, as sketched below.
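
A minimal sketch of the distribution step (the host list s2 through s8 is an assumption based on the eight-VM cluster described below; run from s1 as the ubuntu user):

for h in s2 s3 s4 s5 s6 s7 s8; do
  scp /soft/hadoop-2.7.1/etc/hadoop_cluster/core-site.xml \
      /soft/hadoop-2.7.1/etc/hadoop_cluster/hdfs-site.xml \
      ubuntu@$h:/soft/hadoop-2.7.1/etc/hadoop_cluster/
done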

4. Next, start the JN processes: hadoop-daemon.sh start journalnode

(I started the journalnode process on s1, s7, and s8 respectively; I have eight virtual machines.)

5. Copy the dfs directory under the hadoop directory on s1 to s8, i.e. execute: scp -r dfs ubuntu@s8:/home/ubuntu/hadoop

6. With one namenode running, ssh into the s8 virtual machine and execute: hdfs namenode -bootstrapStandby

7. Stop the namenode again, then execute on the s1 virtual machine: hdfs namenode -initializeSharedEdits

8. Start the namenodes on s1 and s8, i.e. execute: hadoop-daemon.sh start namenode

9. Finally, start all data nodes: hadoop-daemons.sh start datanode (hadoop-daemons.sh starts the daemon on all slave nodes). The whole sequence is collected in the sketch below.
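
Putting steps 4 through 9 together, a minimal sketch of the startup sequence under this article's assumptions (run from s1 as the ubuntu user; host names s1, s7, s8 as above):

# step 4: start journalnodes on s1, s7, s8
for h in s1 s7 s8; do ssh ubuntu@$h 'hadoop-daemon.sh start journalnode'; done
# step 5: copy the namenode metadata directory to s8
scp -r ~/hadoop/dfs ubuntu@s8:/home/ubuntu/hadoop
# step 6: with a namenode running on s1, bootstrap the standby on s8
hadoop-daemon.sh start namenode
ssh ubuntu@s8 'hdfs namenode -bootstrapStandby'
# step 7: stop the namenode again and initialize the shared edits on s1
hadoop-daemon.sh stop namenode
hdfs namenode -initializeSharedEdits
# step 8: start the namenodes on s1 and s8
hadoop-daemon.sh start namenode
ssh ubuntu@s8 'hadoop-daemon.sh start namenode'
# step 9: start all datanodes
hadoop-daemons.sh start datanode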

Managing the HA process

1. Switch states manually:

hdfs haadmin -transitionToActive nn1

hdfs haadmin -transitionToStandby nn1
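
Two related haadmin subcommands are useful here (a usage sketch; nn1 and nn2 are the namenode IDs configured above):

hdfs haadmin -getServiceState nn1   # report whether nn1 is active or standby
hdfs haadmin -failover nn1 nn2      # initiate a failover from nn1 to nn2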
