
What are the operations related to a Hadoop cluster?


This article shares the operations involved in setting up and running a Hadoop cluster. The editor finds it quite practical, so it is shared here for reference; follow along to have a look.

Hadoop cluster

First, turn off SELinux:

vim /etc/selinux/config
SELINUX=disabled

Then stop and disable the firewall:

systemctl stop firewalld
systemctl disable firewalld

1. On both the master and slave machines, set the hostname in /etc/hostname, then add the following mappings to /etc/hosts:

192.168.1.129 hadoop1
192.168.1.130 hadoop2
192.168.1.132 hadoop3
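A minimal sketch of that step; hostnamectl is assumed to be available, since the system already uses systemctl above. Run the matching command on each machine and append the same mappings everywhere:

hostnamectl set-hostname hadoop1        # use hadoop2 / hadoop3 on the other machines
cat >> /etc/hosts << 'EOF'
192.168.1.129 hadoop1
192.168.1.130 hadoop2
192.168.1.132 hadoop3
EOF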

2. Password-free login

Master host (hadoop1)

Switch to /root/.ssh

ssh-keygen -t rsa

Press Enter at every prompt.

This generates id_rsa and id_rsa.pub.

cat id_rsa.pub >> master

This saves the public key into a file named master; then send it to the slave machines:

scp master hadoop2:/root/.ssh/

Log in to the slaves (hadoop2, hadoop3)

Append master to authorized_keys

cat master >> authorized_keys

Do the same on each slave machine.
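Putting the steps above together, here is a minimal sketch of the whole key exchange as run from hadoop1 (hostnames and paths as used above):

cd /root/.ssh
ssh-keygen -t rsa                  # press Enter at every prompt
cat id_rsa.pub >> master           # collect the public key in a file named master
scp master hadoop2:/root/.ssh/
scp master hadoop3:/root/.ssh/
# then on hadoop2 and hadoop3:
cd /root/.ssh && cat master >> authorized_keys
# verify from hadoop1:
ssh hadoop2 hostname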

3. Configuration

Extract hadoop-2.6.0.tar.gz to the /usr/lib/ directory:

tar -zxvf hadoop-2.6.0.tar.gz -C /usr/lib/
cd /usr/lib/hadoop-2.6.0/etc/hadoop

The configuration files are in this directory (they are edited in section 5.1 below).

4. Install zookeeper

Configure environment variables

export JAVA_HOME=/usr/lib/jdk1.7.0_79
export MAVEN_HOME=/usr/lib/apache-maven-3.3.3
export LD_LIBRARY_PATH=/usr/lib/protobuf
export ANT_HOME=/usr/lib/apache-ant-1.9.4
export ZOOKEEPER_HOME=/usr/lib/zookeeper-3.4.6
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$LD_LIBRARY_PATH/bin:$ANT_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$ZOOKEEPER_HOME/lib
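Assuming these exports are placed in /etc/profile (the later step that copies /etc/profile to the slaves suggests this), reload and verify them with:

source /etc/profile
echo $ZOOKEEPER_HOME    # should print /usr/lib/zookeeper-3.4.6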

4.1 In zookeeper/conf/, copy zoo_sample.cfg to zoo.cfg

cp zoo_sample.cfg zoo.cfg

Modify

dataDir=/usr/lib/zookeeper-3.4.6/datas

Add

server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888

Create /usr/lib/zookeeper-3.4.6/datas, and in it create a file named myid containing the number that matches the server entry for that host.
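A minimal sketch of that step, assuming the dataDir configured above; the number written to myid must match the server.N line for the host:

mkdir -p /usr/lib/zookeeper-3.4.6/datas
echo 1 > /usr/lib/zookeeper-3.4.6/datas/myid    # use 2 on hadoop2 and 3 on hadoop3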

Copy zookeeper-3.4.6 and /etc/profile to hadoop2 and hadoop3.
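For example, from hadoop1 this can be done with scp (a sketch; adjust paths if your layout differs):

scp -r /usr/lib/zookeeper-3.4.6 hadoop2:/usr/lib/
scp -r /usr/lib/zookeeper-3.4.6 hadoop3:/usr/lib/
scp /etc/profile hadoop2:/etc/profile
scp /etc/profile hadoop3:/etc/profile
# on each slave: source /etc/profile and correct the myid value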

Running

Execute on hadoop1, hadoop2, and hadoop3

zkServer.sh start

View status

zkServer.sh status

If the output shows Mode: leader or Mode: follower, ZooKeeper is running normally.
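With the passwordless login from step 2 in place, the status of all three nodes can be checked from hadoop1 in one pass (a sketch; /etc/profile is sourced so that zkServer.sh and JAVA_HOME are on the remote PATH):

for h in hadoop1 hadoop2 hadoop3; do
    echo "== $h =="
    ssh $h "source /etc/profile; zkServer.sh status"
done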

5. Install hadoop

Execute on master (hadoop1)

Extract the previously compiled hadoop-2.6.0.tar.gz to /usr/lib/

Configure environment variables

export JAVA_HOME=/usr/lib/jdk1.7.0_79
export MAVEN_HOME=/usr/lib/apache-maven-3.3.3
export LD_LIBRARY_PATH=/usr/lib/protobuf
export ANT_HOME=/usr/lib/apache-ant-1.9.4
export ZOOKEEPER_HOME=/usr/lib/zookeeper-3.4.6
export HADOOP_HOME=/usr/lib/hadoop-2.6.0
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$LD_LIBRARY_PATH/bin:$ANT_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

(Maven, Ant, etc. are not needed on hadoop2 and hadoop3; they were only required when compiling Hadoop.)

5.1 Modify the configuration files

cd hadoop-2.6.0/etc/hadoop

Configuration files (hadoop-env.sh, core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, slaves)

5.1.1 hadoop-env.sh

export JAVA_HOME=/usr/lib/jdk1.7.0_79

5.1.2 core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cluster1</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/lib/hadoop-2.6.0/tmp</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
    </property>
</configuration>

5.1.3 hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>cluster1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.cluster1</name>
        <value>hadoop101,hadoop102</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.cluster1.hadoop101</name>
        <value>hadoop1:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.cluster1.hadoop101</name>
        <value>hadoop1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.cluster1.hadoop102</name>
        <value>hadoop2:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.cluster1.hadoop102</name>
        <value>hadoop2:50070</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled.cluster1</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop2:8485;hadoop3:8485/cluster1</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/lib/hadoop-2.6.0/tmp/journal</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.cluster1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
</configuration>

5.1.4 yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop1</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

5.1.5 mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

5.1.6 slaves

hadoop2
hadoop3

6. Cluster startup:

6.1 Format the zookeeper cluster

Execute in hadoop1

bin/hdfs zkfc -formatZK

6.2 Start the journalnode cluster: execute on hadoop2 and hadoop3

sbin/hadoop-daemon.sh start journalnode

6.3 Format and start the namenodes

Execute in hadoop1

bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode

Execute on hadoop2

bin/hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode
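As a quick sanity check (a sketch, assuming jps from the JDK is on the PATH), the running Java processes can be listed on each node:

jps
# hadoop1 and hadoop2 should show NameNode; hadoop2 and hadoop3 should also show JournalNode;
# QuorumPeerMain is the ZooKeeper process started earlier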

Start the datanodes: execute directly on hadoop1

sbin/hadoop-daemons.sh start datanode

Start zkfc; it must be started on every node that runs a namenode.

Execute in hadoop1 and hadoop2

sbin/hadoop-daemon.sh start zkfc

Start yarn and the resourcemanager: execute on hadoop1

sbin/start-yarn.sh start resourcemanager

Enter in the browser

http://192.168.1.129:50070

Overview 'hadoop1:9000' (active)

http://192.168.1.130:50070/

Overview 'hadoop2:9000' (standby)
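The same active/standby information can be checked from the command line with hdfs haadmin, using the namenode IDs hadoop101 and hadoop102 defined in hdfs-site.xml (a sketch; expected output noted in the comments):

bin/hdfs haadmin -getServiceState hadoop101   # expected: active
bin/hdfs haadmin -getServiceState hadoop102   # expected: standby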

hadoop fs -ls /

This lists the HDFS root directory.
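A few more basic HDFS commands can be used to verify the cluster end to end; this is only a sketch, and the /test directory name is an arbitrary example:

hadoop fs -mkdir /test
hadoop fs -put /etc/hosts /test/
hadoop fs -ls /test
hadoop fs -cat /test/hosts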

Thank you for reading! That concludes this article on the operations related to a Hadoop cluster. I hope the content above has been of some help; if you found the article useful, please share it so more people can see it.
