How to upgrade Hadoop in Hadoop Cluster


This article explains how to upgrade Hadoop in a running Hadoop cluster. It is intended as a practical reference; hopefully you will find it useful.

The cluster currently runs Hadoop 2.6 and is being upgraded to Hadoop 2.7.

Note that HBase runs on this cluster, so HBase has to be stopped before the upgrade and started again afterwards.

The upgrade steps are as follows:

Cluster IP list

NameNode:                      192.168.143.46, 192.168.143.103
JournalNode:                   192.168.143.101, 192.168.143.102, 192.168.143.103
DataNode & HBase RegionServer: 192.168.143.196, 192.168.143.231, 192.168.143.182, 192.168.143.235, 192.168.143.41, 192.168.143.127
HBase Master:                  192.168.143.103, 192.168.143.101
ZooKeeper:                     192.168.143.101, 192.168.143.102, 192.168.143.103

1. First determine the path where Hadoop runs, distribute the new version to that path on every node, and extract it.

# ll /usr/local/hadoop/
total 493244
drwxrwxr-x 9 root root      4096 Mar 21  2017 hadoop-release -> hadoop-2.6.0-EDH-0u1-SNAPSHOT-HA-SECURITY
drwxr-xr-x 9 root root      4096 Oct 11 11:06 hadoop-2.7.1
-rw-r--r-- 1 root root 194690531 Oct  9 10:55 hadoop-2.7.1.tar.gz
drwxrwxr-x 7 root root      4096 May 21  2016 hbase-1.1.3
-rw-r--r-- 1 root root 128975247 Apr 10  2017 hbase-1.1.3.tar.gz
lrwxrwxrwx 1 root root        29 Apr 10  2017 hbase-release -> /usr/local/hadoop/hbase-1.1.3

Because this is an upgrade, the configuration stays exactly the same: the etc/hadoop directory from the original hadoop-2.6.0 tree is copied into hadoop-2.7.1, completely replacing the defaults.
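As a rough sketch of what step 1 looks like on a single node (paths taken from the listing above; adjust to your environment):

# sketch of the per-node preparation; hadoop-release still points at the 2.6 install here
cd /usr/local/hadoop
tar -xzf hadoop-2.7.1.tar.gz
# replace the default 2.7.1 configuration with the existing cluster configuration
rm -rf hadoop-2.7.1/etc/hadoop
cp -r hadoop-release/etc/hadoop hadoop-2.7.1/etc/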

At this point, the preparation before the upgrade has been completed.

Now the upgrade process itself. All commands are run from a single transit (jump) host through shell scripts, which avoids repeatedly logging in to each node by hand.
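As one possible way to script this, here is a minimal bash sketch that loops over the host groups from the IP list above; the run_on helper and the variable names are illustrative assumptions, not the exact scripts used for this upgrade:

#!/bin/bash
# host groups taken from the cluster IP list above
NAMENODES="192.168.143.46 192.168.143.103"
JOURNALNODES="192.168.143.101 192.168.143.102 192.168.143.103"
DATANODES="192.168.143.196 192.168.143.231 192.168.143.182 192.168.143.235 192.168.143.41 192.168.143.127"

# run_on <host list> <command...>: run the command on every host in the list over ssh
run_on() {
  local hosts="$1"; shift
  for h in $hosts; do
    echo "== $h =="
    ssh -t -q "$h" "$@"
  done
}

# example: check Java processes on all DataNodes as the hdfs user
run_on "$DATANODES" sudo su -l hdfs -c "jps"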

## Stop HBase (run as the hbase user)

2. Stop the HBase masters (run as the hbase user).

Check the status page to confirm which master is active, and stop the standby master first.

http://192.168.143.101:16010/master-status

master:
ssh -t -q 192.168.143.103 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ master"
ssh -t -q 192.168.143.103 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.101 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ master"
ssh -t -q 192.168.143.101 sudo su -l hbase -c "jps"
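If a command-line cross-check is preferred, the status command can be piped into the HBase shell; this is a sketch, run as the hbase user and assuming the HBase path shown above:

# 'status' reports the active master, backup masters and live regionservers
echo "status 'simple'" | /usr/local/hadoop/hbase-release/bin/hbase shell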

3. Stop the HBase RegionServers (run as the hbase user).

ssh -t -q 192.168.143.196 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"
ssh -t -q 192.168.143.231 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"
ssh -t -q 192.168.143.182 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"
ssh -t -q 192.168.143.235 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"
ssh -t -q 192.168.143.41 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"
ssh -t -q 192.168.143.127 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"

Check the running status

ssh -t -q 192.168.143.196 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.231 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.182 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.235 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.41 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.127 sudo su -l hbase -c "jps"

## Stop HDFS services

4. First confirm via the web UI which NameNode is active; that NameNode will be started first later in the process.

https://192.168.143.46:50470/dfshealth.html#tab-overview
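Alternatively, the HA admin tool can report each NameNode's state. The service IDs nn1 and nn2 below are placeholders; use whatever dfs.ha.namenodes.<nameservice> defines in this cluster's hdfs-site.xml:

# run as the hdfs user; replace nn1/nn2 with the configured NameNode IDs
/usr/local/hadoop/hadoop-release/bin/hdfs haadmin -getServiceState nn1
/usr/local/hadoop/hadoop-release/bin/hdfs haadmin -getServiceState nn2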

5. Stop the NameNodes (run as the hdfs user).

NN: stop the standby NameNode first.

ssh -t -q 192.168.143.103 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ namenode"
ssh -t -q 192.168.143.46 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ namenode"

Check status:
ssh -t -q 192.168.143.103 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.46 sudo su -l hdfs -c "jps"

6. Stop the DataNodes (run as the hdfs user).

ssh -t -q 192.168.143.196 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"
ssh -t -q 192.168.143.231 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"
ssh -t -q 192.168.143.182 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"
ssh -t -q 192.168.143.235 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"
ssh -t -q 192.168.143.41 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"
ssh -t -q 192.168.143.127 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"

7. Stop the ZKFCs (run as the hdfs user).

ssh -t -q 192.168.143.46 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ zkfc"
ssh -t -q 192.168.143.103 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ zkfc"

8. Stop the JournalNodes (run as the hdfs user).

JN:
ssh -t -q 192.168.143.101 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ journalnode"
ssh -t -q 192.168.143.102 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ journalnode"
ssh -t -q 192.168.143.103 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ journalnode"

# Back up the NameNode data. Since this is a production environment, the original metadata must be backed up so the upgrade can be rolled back if it fails.

9. Back up namenode1

ssh -t -q 192.168.143.46 "cp -r /data1/dfs/name /data1/dfs/name.bak.20171011-2; ls -al /data1/dfs/; du -sm /data1/dfs/*"
ssh -t -q 192.168.143.46 "cp -r /data2/dfs/name /data2/dfs/name.bak.20171011-2; ls -al /data2/dfs/; du -sm /data2/dfs/*"

10. Back up namenode2

ssh -t -q 192.168.143.103 "cp -r /data1/dfs/name /data1/dfs/name.bak.20171011-2; ls -al /data1/dfs/; du -sm /data1/dfs/*"

11. Back up the JournalNode data

ssh -t -q 192.168.143.101 "cp -r /data1/journalnode /data1/journalnode.bak.20171011; ls -al /data1/dfs/; du -sm /data1/*"
ssh -t -q 192.168.143.102 "cp -r /data1/journalnode /data1/journalnode.bak.20171011; ls -al /data1/dfs/; du -sm /data1/*"
ssh -t -q 192.168.143.103 "cp -r /data1/journalnode /data1/journalnode.bak.20171011; ls -al /data1/dfs/; du -sm /data1/*"
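Optionally, a quick sanity check that each backup matches the original before proceeding (a sketch; diff -r exits non-zero if the trees differ):

ssh -t -q 192.168.143.101 "diff -r /data1/journalnode /data1/journalnode.bak.20171011 && echo backup ok"
ssh -t -q 192.168.143.102 "diff -r /data1/journalnode /data1/journalnode.bak.20171011 && echo backup ok"
ssh -t -q 192.168.143.103 "diff -r /data1/journalnode /data1/journalnode.bak.20171011 && echo backup ok"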

The journal path can be found in hdfs-site.xml:

dfs.journalnode.edits.dir: /data1/journalnode
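To confirm the value on a JournalNode, one option is to grep the configuration file under the release directory (path assumed from the earlier steps):

# print the property and the line that follows it (its value)
grep -A1 "dfs.journalnode.edits.dir" /usr/local/hadoop/hadoop-release/etc/hadoop/hdfs-site.xml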

# Upgrade steps

12. Copy the files (already done in advance; see step 1).

Switch the soft link (symlink) to version 2.7.1.

ssh -t -q $h "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"

13. Switch the file soft links on every node (run as root).

ssh -t -q 192.168.143.46 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.103 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.101 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.102 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.196 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.231 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.182 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.235 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.41 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.127 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"

Confirm status

ssh -t -q 192.168.143.46 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.103 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.101 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.102 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.196 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.231 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.182 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.235 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.41 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.127 "cd /usr/local/hadoop; ls -al"

# Start HDFS (run as the hdfs user)

14. Start the JournalNodes

JN:
ssh -t -q 192.168.143.101 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ journalnode"
ssh -t -q 192.168.143.102 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ journalnode"
ssh -t -q 192.168.143.103 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ journalnode"
ssh -t -q 192.168.143.101 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.102 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.103 sudo su -l hdfs -c "jps"

15. Start the first NameNode

ssh 192.168.143.46
su - hdfs
/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh start namenode -upgrade

16. Confirm the status. Only when the status is fully OK should you start the other NameNode.

https://192.168.143.46:50470/dfshealth.html#tab-overview

17. Start the first ZKFC

ssh 192.168.143.46
su - hdfs
/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh start zkfc

18. Start the second NameNode

ssh 192.168.143.103
su - hdfs
/usr/local/hadoop/hadoop-release/bin/hdfs namenode -bootstrapStandby
/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh start namenode

19. Start the second ZKFC

ssh 192.168.143.103
su - hdfs
/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh start zkfc

20. Start the DataNodes

ssh -t -q 192.168.143.196 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"
ssh -t -q 192.168.143.231 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"
ssh -t -q 192.168.143.182 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"
ssh -t -q 192.168.143.235 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"
ssh -t -q 192.168.143.41 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"
ssh -t -q 192.168.143.127 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"

Confirm status

ssh -t -q 192.168.143.196 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.231 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.182 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.235 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.41 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.127 sudo su -l hdfs -c "jps"
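Once all DataNodes are up, a cluster-level sanity check is to request an HDFS report and verify that all six DataNodes are live; a sketch using the same remote-execution pattern as above:

ssh -t -q 192.168.143.46 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/bin/hdfs\ dfsadmin\ -report"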

21. Once everything is confirmed healthy, start HBase (run as the hbase user).

Start the HBase masters; it is best to start the originally active master first.

ssh -t -q 192.168.143.101 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ master"
ssh -t -q 192.168.143.103 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ master"

Start the HBase RegionServers

ssh -t -q 192.168.143.196 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q 192.168.143.231 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q 192.168.143.182 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q 192.168.143.235 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q 192.168.143.41 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q 192.168.143.127 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"

22. The HBase region balancer (balance_switch) has to be turned on and off manually.

Log in to the HBase shell and run the following commands:

Enable:

balance_switch true

Disable:

balance_switch false
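If you prefer not to open an interactive shell from the transit host, the same commands can be piped in non-interactively; a sketch assuming the HBase path used throughout this article:

# run as the hbase user on a node with the HBase client installed
echo "balance_switch true" | /usr/local/hadoop/hbase-release/bin/hbase shell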

23. Do not finalize the upgrade right away. Let the cluster run for about a week to confirm that it is stable, and only then run the finalize step.

Note: during this period disk usage may grow quickly; once the upgrade is finalized, some of that space is released.

Finalize the upgrade: hdfs dfsadmin -finalizeUpgrade
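Using the same remote-execution pattern as the rest of this walkthrough, the finalize step could look roughly like this (a sketch; run it only after the observation period):

ssh -t -q 192.168.143.46 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/bin/hdfs\ dfsadmin\ -finalizeUpgrade"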

Thank you for reading. I hope this walkthrough of how to upgrade Hadoop in a Hadoop cluster has been helpful.
