Hadoop HA Setup


Four machines: bei1, bei2, bei3, bei4. Role assignment (Y = the daemon runs on that node; NM = NodeManager, task management):

       NN   DN   ZK   ZKFC   JN   RM   NM
bei1   Y         Y    Y
bei2   Y    Y    Y    Y      Y    Y    Y
bei3        Y    Y           Y         Y
bei4        Y                Y         Y

1. Upgrade the components and turn off the firewall

# yum -y update

PS: you can omit this item if you use a local yum source

Open a new terminal while the components upgrade so you can continue with the following steps and reduce the waiting time.

# service iptables stop

# chkconfig iptables off

2. Modify the mapping relationship between IP and host in / etc/hosts file

# vi /etc/hosts

192.168.31.131 bei1

192.168.31.132 bei2

192.168.31.133 bei3

192.168.31.134 bei4

3. If the node is a cloned virtual machine, edit /etc/sysconfig/network-scripts/ifcfg-eth0 and delete the UUID and MAC (HWADDR) lines

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

4. Delete /etc/udev/rules.d/70-persistent-net.rules, the default NIC MAC rule generation file

# rm -rf /etc/udev/rules.d/70-persistent-net.rules

PS: steps 3 and 4 can be omitted if the other nodes are not clones or copies of a source virtual machine

5. Restart the host after the yum upgrade

6. Prepare the environment

6.1 # yum -y install gcc gcc-c++ autoconf automake cmake ntp rsync ssh vim

# yum -y install zlib zlib-devel openssl openssl-devel pcre-devel

PS: some of the above packages may not be required for hadoop itself, but they may be needed later when installing other programs, especially from source.

Three of them are important and must be installed:

ssh is used for communication between nodes. I used CentOS 6.7, where openssh is already installed by default.

rsync is used for remote file synchronization.

ntp is used for time synchronization.

6.2 After the first yum installation in 6.1 completes, it is important to synchronize the time with NTP (this can be done in the newly opened terminal).

6.2.1 Configure the ntpd startup item

# chkconfig ntpd on

6.2.2 Synchronize the time

# ntpdate ntp.sjtu.edu.cn

6.2.3 Start the ntpd service

# /etc/init.d/ntpd start

6.2.4 Verify that the ntp service is running

# pgrep ntpd

6.2.5 Initial synchronization

# ntpdate -u ntp.sjtu.edu.cn

6.2.6 Confirm the synchronization succeeded

# ntpq -p

PS: you can also enter all of the above commands in one go:

chkconfig ntpd on
ntpdate ntp.sjtu.edu.cn
/etc/init.d/ntpd start
pgrep ntpd
ntpdate -u ntp.sjtu.edu.cn
ntpq -p

It is recommended to restart the host once the yum installation above has succeeded.

7. Install jdk

7.1 Copy the jdk package into the home directory

7.2 # rpm -ivh jdk_xxxxxxxx.rpm

7.3 The jdk installation directory defaults to /usr/java/jdk1.7.0_79

7.4 configure jdk environment variables

# vim ~/.bash_profile

Add the following four lines:

export JAVA_HOME=/opt/sxt/soft/jdk1.7.0_80

export PATH=$PATH:$JAVA_HOME/bin

export HADOOP_HOME=/opt/sxt/soft/hadoop-2.5.1

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

After editing, use the source command to make ~/.bash_profile take effect. Execute the following command:

# source ~/.bash_profile

Check environment variables

# printenv
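
As a quick sanity check (a minimal sketch; the exact version string depends on the jdk package you installed):

# echo $JAVA_HOME
# java -version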

8. Install tomcat (this step can be omitted, but it will be useful in the future)

Copy tomcat to /opt/sxt and decompress it:

# tar -zxvf apache-tomcat-xxxxx.tar.gz

9. Upload Hadoop to /opt/sxt and decompress it:

# tar -zxvf hadoop-2.5.1_x64.tar.gz

9.1 Create the hadoop.tmp.dir directory:

# mkdir -p /opt/hadooptmp

9.2 etc/hadoop/core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bjsxt</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>bei1:2181,bei2:2181,bei3:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadooptmp</value>
  </property>
</configuration>

9.3 etc/hadoop/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>bjsxt</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.bjsxt</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.bjsxt.nn1</name>
    <value>bei1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.bjsxt.nn2</name>
    <value>bei2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.bjsxt.nn1</name>
    <value>bei1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.bjsxt.nn2</name>
    <value>bei2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://bei2:8485;bei3:8485;bei4:8485/bjsxt</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.bjsxt</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_dsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/hadooptmp/data</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>

9.4 Clone

9.5 Modify the hostname, IP, gateway, and MAC on each clone

Modify the hostname:

# vim /etc/sysconfig/network

Modify the IP address:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

Modify DNS (the search and nameserver entries):

# vi /etc/resolv.conf
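
For example, a minimal sketch of these files on the bei2 clone (the hostname and IP come from the /etc/hosts table above; the gateway and DNS addresses are assumptions for illustration):

/etc/sysconfig/network
NETWORKING=yes
HOSTNAME=bei2

/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.31.132
NETMASK=255.255.255.0
GATEWAY=192.168.31.1
DNS1=192.168.31.1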

10. Check ssh local password-free login

10.1 First check

# ssh localhost

PS: remember to run exit after the login succeeds

10.2 Create a local key and append the public key to the authentication file

# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

10.3 Check again

# ssh localhost

PS: exit in the same way

10.4 On the NameNode, copy the ~/.ssh/authorized_keys file to every other node

# scp ~/.ssh/authorized_keys root@hadoopsnn:~/.ssh/authorized_keys

# scp ~/.ssh/authorized_keys root@hadoopdn1:~/.ssh/authorized_keys

# scp ~/.ssh/authorized_keys root@hadoopdn2:~/.ssh/authorized_keys
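
After copying, it is worth verifying password-free login from the NameNode to each node, for example (using the hostnames from the /etc/hosts table):

# ssh bei2

Then exit, as above.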

10.5 Edit the /opt/sxt/soft/hadoop-2.5.1/etc/hadoop/hadoop-env.sh file. By default hadoop cannot get JAVA_HOME from the user environment variables, so it has to be specified manually.

# vim /opt/sxt/soft/hadoop-2.5.1/etc/hadoop/hadoop-env.sh

Find export JAVA_HOME=${JAVA_HOME}

Change it to export JAVA_HOME=/opt/sxt/soft/jdk1.7.0_80

Add the following line:

export HADOOP_PREFIX=/opt/sxt/soft/hadoop-2.5.1

11. Configure and install zookeeper

11.1 ZooKeeper runs on three nodes: bei1, bei2, bei3
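
A minimal sketch of putting ZooKeeper in place, assuming the zookeeper-3.4.6 tarball has been uploaded to /opt/sxt (consistent with the path used in step 13) and that zoo.cfg is created from the bundled zoo_sample.cfg:

# cd /opt/sxt
# tar -zxvf zookeeper-3.4.6.tar.gz
# cp zookeeper-3.4.6/conf/zoo_sample.cfg zookeeper-3.4.6/conf/zoo.cfg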

11.2 Edit the zoo.cfg configuration file

Modify dataDir=/opt/sxt/zookeeperdatadir, so that the file contains:

tickTime=2000

dataDir=/opt/sxt/zookeeperdatadir

clientPort=2181

initLimit=5

syncLimit=2

server.1=bei1:2888:3888

server.2=bei2:2888:3888

server.3=bei3:2888:3888

11.3 Create a myid file in the dataDir directory on each node, containing 1, 2, and 3 respectively (matching server.1, server.2, and server.3)
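
A minimal sketch of 11.3 (create the data directory first, then run the matching command on each node):

# on every node
mkdir -p /opt/sxt/zookeeperdatadir
# on bei1
echo 1 > /opt/sxt/zookeeperdatadir/myid
# on bei2
echo 2 > /opt/sxt/zookeeperdatadir/myid
# on bei3
echo 3 > /opt/sxt/zookeeperdatadir/myid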

12. Configure the slaves file in hadoop on the node where the NN is placed; it lists the DataNode hosts (a sketch follows).
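
A sketch of etc/hadoop/slaves, assuming the DataNodes are bei2, bei3, and bei4 as in the role table above:

bei2
bei3
bei4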

* From this step on, follow the steps carefully. If a configuration file is modified, the services need to be restarted. *

13. Start the three ZooKeepers: /opt/sxt/zookeeper-3.4.6/bin/zkServer.sh start
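
Optionally confirm each ZooKeeper's role (one node should report leader, the others follower):

# /opt/sxt/zookeeper-3.4.6/bin/zkServer.sh status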

14. Start the three JournalNodes (on bei2, bei3, bei4): ./hadoop-daemon.sh start journalnode

15. Format on one of the namenodes: bin/hdfs namenode -format

16. Copy the newly formatted metadata to the other namenode

16.1 Start the newly formatted namenode: hadoop-daemon.sh start namenode

16.2 Execute on the unformatted namenode: hdfs namenode -bootstrapStandby

16.3 Start the second namenode

17. Initialize ZKFC on one of the namenodes: hdfs zkfc -formatZK

18. Stop the nodes started above: stop-dfs.sh

19. Full launch: start-dfs.sh

20. Check the processes with jps on each node and verify the NameNode web pages (bei1:50070 and bei2:50070).
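
For example, after a full start the daemons you would expect jps to list are roughly as follows (process IDs omitted; the exact set on each node follows the role table above):

on bei1: NameNode, DFSZKFailoverController, QuorumPeerMain
on bei2: NameNode, DFSZKFailoverController, DataNode, JournalNode, QuorumPeerMain
on bei3: DataNode, JournalNode, QuorumPeerMain
on bei4: DataNode, JournalNode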
