How to build HA distributed Cluster

2025-02-23 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Today I will talk with you about how to build an HA distributed cluster. Many people may not know much about it, so to help you understand it better, I have summarized the following content. I hope you get something out of this article.

One: advantages of the HA distributed configuration:

1. Prevents the whole cluster from failing when a single namenode fails.

2. Suits the needs of industrial production.

Two: HA installation: 1. Install the virtual machine.

1. VMware version: VMware_workstation_full_12.5.0.11529.exe; Linux image: CentOS-7-x86_64-DVD-1611.iso

Note:

1. Choose bridged networking (to avoid the address changing with the router); it is best for a desktop or server to set its own IP address to a static IP.

2. During installation, select the infrastructure mode (infras...), which reduces memory consumption while providing the basic environment.

3. Username: root, password: root

4. Configure the network manually with a fixed IPv4 address (static IP).

2. Basic Linux environment configuration (all operations are performed as root):

1. Verify the network: ping the host and ping www.baidu.com; both should succeed.

Back up the ip config: cp /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-ens33.bak

2. Firewall settings: stop and disable the firewall

Stop the firewall: systemctl stop firewalld.service (CentOS 7 uses firewalld, unlike the iptables service of earlier releases)

Disable the firewall: systemctl disable firewalld.service

Check firewall status: firewall-cmd --state

3. Set hosts, hostname and network

vim /etc/hostname

ha1

vim /etc/hosts

192.168.1.116 ha1
192.168.1.117 ha2
192.168.1.118 ha3
192.168.1.119 ha4

vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=ha1
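The four host entries above follow a simple numeric pattern, so as a small sketch they can be generated rather than typed (the hostnames and the 192.168.1.x addresses are the article's example values; the output goes to a temp file here instead of /etc/hosts):

```shell
# Generate the /etc/hosts entries for ha1..ha4 into a temp file
# (append to /etc/hosts for real use)
hosts_file=$(mktemp)
for i in 1 2 3 4; do
  printf '192.168.1.%d ha%d\n' "$((115 + i))" "$i" >> "$hosts_file"
done
cat "$hosts_file"
```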

4. Install some necessary packages (not necessarily all):

yum install -y chkconfig
yum install -y python
yum install -y bind-utils
yum install -y psmisc
yum install -y libxslt
yum install -y zlib
yum install -y sqlite
yum install -y cyrus-sasl-plain
yum install -y cyrus-sasl-gssapi
yum install -y fuse
yum install -y portmap
yum install -y fuse-libs
yum install -y redhat-lsb
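The per-package installs above can be collapsed into a single yum call; this is a dry-run sketch that only prints the command (drop the echo to actually install; the package list is taken from the article as-is):

```shell
# Build one yum command from the package list (echo = dry run)
pkgs="chkconfig python bind-utils psmisc libxslt zlib sqlite \
cyrus-sasl-plain cyrus-sasl-gssapi fuse portmap fuse-libs redhat-lsb"
cmd="yum install -y $pkgs"
echo "$cmd"
```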

5. Install Java and Scala

Java version: jdk-8u111-linux-x64.tar.gz (the commands below use JDK 8u111)

Scala version: scala-2.11.6.tgz

Verify whether java is already present:

rpm -qa | grep java    # nothing

tar -zxf jdk-8u111-linux-x64.tar.gz
tar -zxf scala-2.11.6.tgz
mv jdk1.8.0_111 /usr/java
mv scala-2.11.6 /usr/scala

Configure environment variables:

vim /etc/profile

export JAVA_HOME=/usr/java
export SCALA_HOME=/usr/scala
export PATH=$JAVA_HOME/bin:$SCALA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
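A quick sanity check that the profile edits took effect; this sketch simulates sourcing the profile and confirms both bin directories landed on PATH (the /usr/java and /usr/scala paths are the article's):

```shell
# Simulate the profile exports, then confirm both bin directories are on PATH
export JAVA_HOME=/usr/java SCALA_HOME=/usr/scala
export PATH="$JAVA_HOME/bin:$SCALA_HOME/bin:$PATH"
status=ok
for d in "$JAVA_HOME/bin" "$SCALA_HOME/bin"; do
  case ":$PATH:" in
    *":$d:"*) ;;                 # found on PATH
    *) status="missing $d" ;;    # flag anything absent
  esac
done
echo "$status"
```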

6. Restart and verify the settings above; take a VM snapshot, named: "initialized ok: java, scala, hostname, firewall, ip".

3. hadoop+zookeeper cluster configuration

1. Cluster machine preparation

Linked clones: clone ha2, ha3, ha4 from ha1.

Modify the network address, network and firewall on ha2, ha3, ha4:

vim /etc/sysconfig/network-scripts/ifcfg-ens33    # change 116 to 117/118/119

service network restart

vim /etc/hostname
vim /etc/sysconfig/network

systemctl disable firewalld.service

Restart ha2, ha3, ha4 and verify ip, network and firewall on each; snapshot the three machines, named: "initialized ok: java, scala, hostname, firewall, ip".

2. Cluster framework diagram (1 = the process runs on that machine; RM = ResourceManager, DM = NodeManager)

Machine   NameNode   DataNode   Zookeeper   ZkFC   JournalNode   RM   DM
ha1       1          -          1           1      -             1    -
ha2       1          1          1           1      1             -    1
ha3       -          1          1           -      1             -    1
ha4       -          1          -           -      1             -    1

3. ssh communication: snapshot "ssh ok" after it works

On all four machines:

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

On ha1:

scp ~/.ssh/* root@ha2:~/.ssh/
scp ~/.ssh/* root@ha3:~/.ssh/
scp ~/.ssh/* root@ha4:~/.ssh/

Verify:

ssh ha2 / ssh ha3 / ssh ha4
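Passwordless login to every node can be verified in one loop; this sketch only prints the commands rather than running them (BatchMode makes ssh fail instead of prompting for a password, so a prompt shows up as an error; hostnames as configured above):

```shell
# Dry run: print the passwordless-login check for each node (remove echo to execute)
out=$(for h in ha2 ha3 ha4; do
  echo ssh -o BatchMode=yes "root@$h" hostname
done)
echo "$out"
```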

4. zookeeper cluster configuration:

1. Configure environment variables

zookeeper installation:

tar -zxf zookeeper-3.4.8.tar.gz
mv zookeeper-3.4.8 /usr/zookeeper-3.4.8

Modify /etc/profile:

export ZK_HOME=/usr/zookeeper-3.4.8

scp /etc/profile root@ha2:/etc/
scp /etc/profile root@ha3:/etc/
source /etc/profile

2. zoo.cfg configuration (the modified lines are the dataDir/dataLogDir paths and the server entries)

cd /usr/zookeeper-3.4.8/conf
cp zoo_sample.cfg zoo.cfg

Content:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/datas
dataLogDir=/opt/zookeeper/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
# maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
# autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
# autopurge.purgeInterval=1
server.1=ha1:2888:3888
server.2=ha2:2888:3888
server.3=ha3:2888:3888

3. Start the zookeeper cluster:

# on the three machines (ha1, ha2, ha3)
Create the data folders:

mkdir -p /opt/zookeeper/datas
mkdir -p /opt/zookeeper/logs

cd /opt/zookeeper/datas
vim myid    # write 1, 2, 3 on ha1, ha2, ha3 respectively
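The myid values mirror the hostnames, so they can be derived rather than typed by hand; a minimal sketch using a temp directory in place of /opt/zookeeper/datas (the hostname is hardcoded here; use $(hostname) when running on a real node):

```shell
# Derive the myid value from a haN-style hostname
host=ha2                 # assumption: stand-in for $(hostname)
id=${host#ha}            # strip the "ha" prefix -> 2
datadir=$(mktemp -d)     # stand-in for /opt/zookeeper/datas
echo "$id" > "$datadir/myid"
cat "$datadir/myid"
```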

# distribute to ha2, ha3 (note that ha4 is not required)

cd /usr
scp -r zookeeper-3.4.8 root@ha2:/usr
scp -r zookeeper-3.4.8 root@ha3:/usr

# start (on the three machines)

cd $ZK_HOME/bin
zkServer.sh start
zkServer.sh status    # one leader and two followers
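Instead of running zkServer.sh status on each box, the role of every quorum member can be queried remotely with ZooKeeper's four-letter "srvr" command; this sketch only prints the queries (it assumes nc is installed when run for real, and the "Mode:" line in each reply shows leader or follower):

```shell
# Dry run: print a role query per quorum member
cmds=$(for h in ha1 ha2 ha3; do
  printf 'echo srvr | nc %s 2181\n' "$h"
done)
echo "$cmds"
```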

5. hadoop cluster configuration

1. Configure environment variables:

Version: hadoop-2.7.3.tar.gz

tar -zxf hadoop-2.7.3.tar.gz
mv hadoop-2.7.3 /usr/hadoop-2.7.3

export JAVA_HOME=/usr/java
export SCALA_HOME=/usr/scala
export HADOOP_HOME=/usr/hadoop-2.7.3
export PATH=$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

source /etc/profile

2. hadoop-env.sh configuration:

export JAVA_HOME=/usr/java
source hadoop-env.sh
hadoop version    # verifies ok

3. hdfs-site.xml configuration: distribute it after the later modifications (scp hdfs-site.xml root@ha4:/usr/hadoop-2.7.3/etc/hadoop/)

vim hdfs-site.xml

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>ha1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>ha2:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>ha1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>ha2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://ha2:8485;ha3:8485;ha4:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/jn/data</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>

4. core-site.xml configuration

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>ha1:2181,ha2:2181,ha3:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop2</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>

5. yarn-site.xml configuration

vim yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>ha1</value>
  </property>
</configuration>

6. mapred-site.xml configuration

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

7. slaves configuration:

vim slaves

ha2
ha3
ha4

8. Distribute and start:

# distribution
scp -r hadoop-2.7.3 root@ha2:/usr/
scp -r hadoop-2.7.3 root@ha3:/usr/
scp -r hadoop-2.7.3 root@ha4:/usr/

# start JN (on ha2, ha3, ha4)
cd sbin
./hadoop-daemon.sh start journalnode

[root@ha2 sbin]# jps
JournalNode
Jps
QuorumPeerMain    # thread started by zk

# ha1: namenode formatting
cd bin
./hdfs namenode -format

# zk formatting
./hdfs zkfc -formatZK

# you can check /opt/hadoop2 to see whether the metadata was formatted properly

# ha2: namenode formatting
1. ha1 should start its namenode first:
./hadoop-daemon.sh start namenode
2. then on ha2:
./hdfs namenode -bootstrapStandby
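The first-time bring-up order above is easy to get wrong, so here is a dry-run sketch that just prints the whole sequence in order (daemon scripts and hostnames as used in this article; nothing is executed):

```shell
# Dry run: the HA first-start sequence, printed in order
steps=$(cat <<'EOF'
ha1,ha2,ha3: zkServer.sh start
ha2,ha3,ha4: ./hadoop-daemon.sh start journalnode
ha1: ./hdfs namenode -format
ha1: ./hdfs zkfc -formatZK
ha1: ./hadoop-daemon.sh start namenode
ha2: ./hdfs namenode -bootstrapStandby
ha1: ./start-dfs.sh
EOF
)
echo "$steps"
```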

9. Verify: open http://192.168.1.116:50070/; once hadoop+zookeeper is verified ok, snapshot "ha mode installation ok".

# hdfs Cluster Verification

[root@ha1 sbin]# ./stop-dfs.sh
Stopping namenodes on [ha1 ha2]
ha2: no namenode to stop
ha1: stopping namenode
ha2: no datanode to stop
ha3: no datanode to stop
ha4: no datanode to stop
Stopping journal nodes [ha2 ha3 ha4]
ha3: stopping journalnode
ha4: stopping journalnode
ha2: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [ha1 ha2]
ha2: no zkfc to stop
ha1: no zkfc to stop

[root@ha1 sbin]# ./start-dfs.sh

Under ha1:

[root@ha1 sbin]# jps

Jps

NameNode

QuorumPeerMain

DFSZKFailoverController

[root@ha2 dfs]# jps

NameNode

DFSZKFailoverController

Jps

DataNode

JournalNode

QuorumPeerMain

[root@ha3 sbin]# jps

QuorumPeerMain

DataNode

JournalNode

Jps

[root@ha4 sbin]# jps

Jps

DataNode

JournalNode

Configure yarn and mapred, then start yarn and check with jps again:

[root@ha1 sbin]# jps

NameNode

DFSZKFailoverController

Jps

QuorumPeerMain

ResourceManager

[root@ha2 hadoop]# jps

DataNode

NameNode

DFSZKFailoverController

JournalNode

NodeManager

Jps

QuorumPeerMain

[root@ha3 ~]# jps

QuorumPeerMain

DataNode

NodeManager

Jps

JournalNode

[root@ha4 ~]# jps

JournalNode

NodeManager

DataNode

Jps
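With dfs.ha.automatic-failover.enabled set to true, it is worth checking which namenode is currently active; this sketch prints the stock Hadoop 2.x haadmin queries (nn1 and nn2 are the namenode ids configured in hdfs-site.xml; commands are printed, not executed):

```shell
# Dry run: print the active/standby state query for each namenode id
q=$(for nn in nn1 nn2; do
  echo hdfs haadmin -getServiceState "$nn"
done)
echo "$q"
```

One of the two queries should report "active" and the other "standby"; killing the active namenode and re-running them is a simple failover test.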

After reading the above, do you have a better understanding of how to build an HA distributed cluster? If you want to learn more, please follow the industry information channel. Thank you for your support.
