Hadoop Cluster Construction (1): HA Setup for the HDFS NameNode


To build HA for the HDFS NameNode, first prepare three machines:

Hadoop01 IP:192.168.216.203 GATEWAY:192.168.216.2

Hadoop02 IP:192.168.216.204 GATEWAY:192.168.216.2

Hadoop03 IP:192.168.216.205 GATEWAY:192.168.216.2
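The later steps refer to the nodes by hostname, so each machine is assumed to have matching /etc/hosts entries (not shown in the original); a minimal sketch:

[root@hadoop01 ~] # vim /etc/hosts
192.168.216.203 hadoop01
192.168.216.204 hadoop02
192.168.216.205 hadoop03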

Configure the network card (shown for hadoop01; adjust IPADDR accordingly on the other two nodes)

[root@hadoop01 ~] # vim /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
HWADDR=00:0C:29:6B:CD:B3      # network card MAC address
ONBOOT=yes                    # yes = bring the interface up at boot
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.216.203        # IP address
PREFIX=24
GATEWAY=192.168.216.2         # gateway
DNS1=8.8.8.8                  # domain name resolution server 1
DNS2=192.168.10.254           # domain name resolution server 2
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"

Install the Java JDK and configure environment variables

[root@hadoop01 jdk1.8.0_152] # vim /etc/profile

# my setting

export JAVA_HOME=/usr/local/jdk1.8.0_152/
export PATH=$PATH:$JAVA_HOME/bin

Configure passwordless SSH login among hadoop01, hadoop02, and hadoop03, as sketched below.
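The original omits the commands for this step; a typical sketch, run as root on hadoop01 and then repeated on hadoop02 and hadoop03, assuming default key paths:

[root@hadoop01 ~] # ssh-keygen -t rsa
[root@hadoop01 ~] # ssh-copy-id hadoop01
[root@hadoop01 ~] # ssh-copy-id hadoop02
[root@hadoop01 ~] # ssh-copy-id hadoop03

The sshfence fencing method configured below relies on this same key (/root/.ssh/id_rsa).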

Next, configure Hadoop. Set JAVA_HOME in hadoop-env.sh:

[root@hadoop01 hadoop-2.7.1] # vim ./etc/hadoop/hadoop-env.sh

# The java implementation to use.

export JAVA_HOME=/usr/local/jdk1.8.0_152/

[root@hadoop01 ~] # vim /usr/local/hadoop-2.7.1/etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://qian</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
  </property>
</configuration>
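Note that fs.defaultFS points at the logical nameservice name (hdfs://qian) rather than a single host, so clients reach whichever NameNode is currently active; for example:

[root@hadoop01 ~] # hdfs dfs -ls hdfs://qian/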

[root@hadoop01 ~] # vim /usr/local/hadoop-2.7.1/etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>qian</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.qian</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.qian.nn1</name>
    <value>hadoop01:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.qian.nn2</name>
    <value>hadoop02:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.qian.nn1</name>
    <value>hadoop01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.qian.nn2</name>
    <value>hadoop02:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/qian</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadata/journalnode/data</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.qian</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadata/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadata/dfs/data</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

[root@hadoop01 ~] # vim /usr/local/hadoop-2.7.1/etc/hadoop/slaves

hadoop01
hadoop02
hadoop03

Install and configure ZooKeeper

[root@hadoop01 zookeeper-3.4.10] # tar -zxvf /home/zookeeper-3.4.10.tar.gz -C /usr/local/

[root@hadoop01 zookeeper-3.4.10] # cp ./conf/zoo_sample.cfg ./conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=5
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=2
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/zookeeperdata
# the port at which the clients will connect
clientPort=2181
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888

[root@hadoop01 zookeeper-3.4.10] # scp -r /usr/local/zookeeper-3.4.10 hadoop02:/usr/local/
[root@hadoop01 zookeeper-3.4.10] # scp -r /usr/local/zookeeper-3.4.10 hadoop03:/usr/local/

Configure environment variables on all three machines

[root@hadoop01 zookeeper-3.4.10] # vim /etc/profile

# my setting

export JAVA_HOME=/usr/local/jdk1.8.0_152/
export HADOOP_HOME=/usr/local/hadoop-2.7.1/
export ZK_HOME=/usr/local/zookeeper-3.4.10/
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZK_HOME/bin

[root@hadoop01 zookeeper-3.4.10] # scp -r /etc/profile hadoop02:/etc
profile
[root@hadoop01 zookeeper-3.4.10] # scp -r /etc/profile hadoop03:/etc
profile

[root@hadoop01 ~] # source /etc/profile
[root@hadoop02 ~] # source /etc/profile
[root@hadoop03 ~] # source /etc/profile

[root@hadoop01 zookeeper-3.4.10] # mkdir /home/zookeeperdata
[root@hadoop01 zookeeper-3.4.10] # vim /home/zookeeperdata/myid (enter 1 in the myid file)
1
[root@hadoop02 ~] # mkdir /home/zookeeperdata
[root@hadoop02 ~] # vim /home/zookeeperdata/myid (enter 2 in the myid file)
2
[root@hadoop03 ~] # mkdir /home/zookeeperdata
[root@hadoop03 ~] # vim /home/zookeeperdata/myid (enter 3 in the myid file)
3
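Before checking status, start ZooKeeper on all three nodes (a step the original implies but does not show):

[root@hadoop01 zookeeper-3.4.10] # zkServer.sh start
[root@hadoop02 ~] # zkServer.sh start
[root@hadoop03 ~] # zkServer.sh start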

[root@hadoop01 zookeeper-3.4.10] # zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop02 ~] # zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop03 ~] # zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader

[root@hadoop01 zookeeper-3.4.10] # scp -r /usr/local/hadoop-2.7.1/ hadoop02:/usr/local/
[root@hadoop01 zookeeper-3.4.10] # scp -r /usr/local/hadoop-2.7.1/ hadoop03:/usr/local/

[root@hadoop01 zookeeper-3.4.10] # hadoop-daemon.sh start journalnode

[root@hadoop02 zookeeper-3.4.10] # hadoop-daemon.sh start journalnode

[root@hadoop03 zookeeper-3.4.10] # hadoop-daemon.sh start journalnode

[root@hadoop01 zookeeper-3.4.10] # hadoop namenode -format

[root@hadoop01 zookeeper-3.4.10] # hadoop-daemon.sh start namenode

Starting namenode, logging to /usr/local/hadoop-2.7.1/logs/hadoop-root-namenode-hadoop01.out

Synchronize the metadata from the running NameNode (hadoop01) to the NameNode that has not been started yet (hadoop02):

[root@hadoop02 ~] # hdfs namenode -bootstrapStandby

Confirm that the ZooKeeper cluster is running:

[root@hadoop01 zookeeper-3.4.10] # zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop02 ~] # zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop03 ~] # zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader

[root@hadoop01 zookeeper-3.4.10] # hdfs zkfc -formatZK

...
INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/qian in ZK.
...

[root@hadoop03 ~] # zkCli.sh
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, hadoop-ha]
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha
[qian]
[zk: localhost:2181(CONNECTED) 2] ls /hadoop-ha/qian
[]

Note: to exit zkCli.sh, type quit.

[root@hadoop01 zookeeper-3.4.10] # start-dfs.sh

[root@hadoop01 zookeeper-3.4.10] # jps

3281 JournalNode

4433 Jps

3475 NameNode

4068 DataNode

3110 QuorumPeerMain

4367 DFSZKFailoverController

[root@hadoop02 ~] # jps

3489 DataNode

3715 Jps

2970 QuorumPeerMain

3162 JournalNode

3646 DFSZKFailoverController

3423 NameNode

[root@hadoop03 ~] # zkCli.sh
WATCHER::
WatchedEvent state:SyncConnected type:None path:null

[zk: localhost:2181(CONNECTED) 4] ls /hadoop-ha/qian
[ActiveBreadCrumb, ActiveStandbyElectorLock]
[zk: localhost:2181(CONNECTED) 2] get /hadoop-ha/qian/ActiveBreadCrumb
qiannn1hadoop01 (binary znode data identifying nn1 on hadoop01 as the active NameNode)
cZxid = 0x10000000a
ctime = Sat Jan 13 01:40:21 CST 2018
mZxid = 0x10000000a
mtime = Sat Jan 13 01:40:21 CST 2018
pZxid = 0x10000000a
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 31
numChildren = 0

[root@hadoop01 hadoop-2.7.1] # hdfs dfs -put ./README.txt hdfs:/
[root@hadoop01 hadoop-2.7.1] # hdfs dfs -ls /

18/01/13 01:58:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 root supergroup 1366 2018-01-13 01:57 /README.txt
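As a further check, the uploaded file can be read back:

[root@hadoop01 hadoop-2.7.1] # hdfs dfs -cat /README.txt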

Test automatic failover

[root@hadoop01 hadoop-2.7.1] # jps

3281 JournalNode

3475 NameNode

4644 Jps

4068 DataNode

3110 QuorumPeerMain

4367 DFSZKFailoverController

[root@hadoop01 hadoop-2.7.1] # kill -9 3475

[root@hadoop03 ~] # zkCli.sh
[zk: localhost:2181(CONNECTED) 5] ls /hadoop-ha/qian
[ActiveBreadCrumb, ActiveStandbyElectorLock]
[zk: localhost:2181(CONNECTED) 6] get /hadoop-ha/qian/ActiveBreadCrumb

qiannn2hadoop02 (binary znode data; the active NameNode is now nn2 on hadoop02)
cZxid = 0x10000000a
ctime = Sat Jan 13 01:40:21 CST 2018
mZxid = 0x100000011
mtime = Sat Jan 13 02:01:57 CST 2018
pZxid = 0x10000000a
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 31
numChildren = 0

[root@hadoop02 ~] # jps

3489 DataNode

3989 Jps

2970 QuorumPeerMain

3162 JournalNode

3646 DFSZKFailoverController

3423 NameNode
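A quicker check than inspecting the znode (not used in the original writeup) is hdfs haadmin, which reports each NameNode's current role:

[root@hadoop02 ~] # hdfs haadmin -getServiceState nn2
active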

Note: when namenode1 (the active) dies, the active role automatically fails over to namenode2. If namenode2 then dies as well, the whole service goes down, because the failed namenode1 is not restarted automatically.

Configure cluster time synchronization
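The original stops at this heading. A minimal sketch using a cron-driven ntpdate against a public pool (the package, schedule, and time source are assumptions), repeated on all three nodes so the clocks stay aligned:

[root@hadoop01 ~] # yum install -y ntpdate
[root@hadoop01 ~] # crontab -e
*/10 * * * * /usr/sbin/ntpdate pool.ntp.org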

The HA setup is now complete.
