
hadoop-006: Troubleshooting a Fully Distributed Hadoop Cluster

2025-02-25 Update From: SLTechnology News&Howtos


This article walks through two problems commonly hit when running Hadoop 2.7.2 in fully distributed mode, followed by the full cluster configuration for reference.

Problem 1: A MapReduce job runs, but no application shows up in YARN.

bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /test /out1

The ResourceManager web UI is configured at http://192.168.31.136:8088, but the job never appears there.

The cause: mapred-site.xml does not set mapreduce.framework.name, so the job runs with the local job runner instead of being submitted to YARN. The fix is to add:

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
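A quick way to catch this class of problem is to grep the config file for the expected name/value pair before resubmitting the job. A minimal sketch (the install path in the comment is the one used later in this article; adjust to your setup):

```shell
#!/bin/sh
# Smoke test: check that a Hadoop *-site.xml contains a given property
# name and value. Note: the two greps are independent, so this does not
# verify the value sits inside that exact <property> block -- it is a
# quick sanity check, not a full XML parse.
check_prop() {
  conf=$1; prop=$2; expected=$3
  grep -q "<name>$prop</name>" "$conf" && grep -q "<value>$expected</value>" "$conf"
}

# Hypothetical usage against the install dir used in this article:
# check_prop /home/jxlgzwh/hadoop-2.7.2/etc/hadoop/mapred-site.xml \
#   mapreduce.framework.name yarn && echo "mapred-site.xml OK"
```

The same check applies to the yarn-site.xml fix in the next problem (yarn.nodemanager.aux-services = mapreduce_shuffle).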

Problem 2: "The auxService:mapreduce_shuffle does not exist".

The cause: yarn-site.xml does not configure the auxiliary shuffle service on the NodeManagers. Add the property below, then restart the NodeManagers:

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

The error reported by the job is as follows:

16/11/29 23:10:45 INFO mapreduce.Job: Task Id : attempt_1480432102879_0001_m_000000_2, Status : FAILED

Container launch failed for container_e02_1480432102879_0001_01_000004 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:423)

at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)

at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)

at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)

at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

at java.lang.Thread.run(Thread.java:745)

-----------------------------------------------------------------------------------------------------------------

Configuration record

A three-node ZooKeeper cluster was set up in advance; the configuration below enables both HDFS HA and YARN HA.

I. hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/jdk8/jdk1.8.0_111

II. yarn-env.sh

export JAVA_HOME=/usr/lib/jvm/jdk8/jdk1.8.0_111

III. core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/jxlgzwh/hadoop-2.7.2/data/jn</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/jxlgzwh/hadoop-2.7.2/data/tmp</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>master:2181,slave01:2181,slave02:2181</value>
    </property>
</configuration>

IV. hdfs-site.xml

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>master:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>slave01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>master:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>slave01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://master:8485;slave01:8485;slave02:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/jxlgzwh/.ssh/id_dsa</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
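The per-NameNode keys in this file follow a naming pattern: each address key is built from dfs.namenode.<kind>-address, the nameservice id, and the NameNode id. A small sketch of how the keys above are derived (ids copied from this hdfs-site.xml):

```shell
#!/bin/sh
# Derive the per-NameNode property keys from the nameservice id and the
# NameNode ids, matching the hdfs-site.xml entries above.
ns=mycluster            # dfs.nameservices
nn_ids="nn1,nn2"        # dfs.ha.namenodes.mycluster
keys=""
for nn in $(printf '%s' "$nn_ids" | tr ',' ' '); do
  keys="$keys dfs.namenode.rpc-address.$ns.$nn dfs.namenode.http-address.$ns.$nn"
done
keys=${keys# }
echo "$keys"
```

This is why renaming the nameservice or a NameNode id requires renaming every matching address key as well.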

V. mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

VI. slaves

192.168.31.136

192.168.31.130

192.168.31.229

VII. yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>slave01</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>slave01:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>master:2181,slave01:2181,slave02:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-state-store.parent-path</name>
        <value>/rmstore</value>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.nodemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.nodemanager.address</name>
        <value>0.0.0.0:45454</value>
    </property>
    <property>
        <name>yarn.nodemanager.recovery.dir</name>
        <value>/home/jxlgzwh/hadoop-2.7.2/data/tmp/yarn-nm-recovery</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
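With ResourceManager HA enabled, either web UI can be tried; the standby RM redirects browsers to the active one. The URLs follow directly from the rm-ids and hostname settings, as this sketch shows (values copied from the yarn-site.xml above):

```shell
#!/bin/sh
# Build the ResourceManager web UI URLs from the HA settings above.
rm_ids="rm1,rm2"        # yarn.resourcemanager.ha.rm-ids
rm1_host=master         # yarn.resourcemanager.hostname.rm1
rm2_host=slave01        # yarn.resourcemanager.hostname.rm2
urls=""
for id in $(printf '%s' "$rm_ids" | tr ',' ' '); do
  eval "host=\$${id}_host"
  urls="$urls http://$host:8088/cluster"
done
urls=${urls# }
echo "$urls"
```

The active RM can also be queried from the command line with `bin/yarn rmadmin -getServiceState rm1` (and likewise for rm2).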

Finally, configure the /etc/hosts file on every node and set up passwordless SSH login:

192.168.31.136 master.com master

192.168.31.130 slave01

192.168.31.229 slave02

With these entries and the configuration above in place, both problems are resolved: jobs appear in the YARN web UI and the mapreduce_shuffle error no longer occurs.
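The passwordless-SSH step can be sketched as follows. The key path here is a demo value; note that this article's hdfs-site.xml fencing config (dfs.ha.fencing.ssh.private-key-files) expects the key at /home/jxlgzwh/.ssh/id_dsa, so adjust paths to match your own setup.

```shell
#!/bin/sh
# Minimal sketch of passwordless-SSH key generation, run on the master
# node. Uses a temporary demo directory rather than ~/.ssh.
keydir=$(mktemp -d)
key="$keydir/id_rsa"
ssh-keygen -t rsa -N "" -f "$key" -q    # empty passphrase, no prompt

# Then push the public key to every node (asks for each password once):
# for h in master slave01 slave02; do ssh-copy-id -i "$key.pub" "$h"; done
```

After ssh-copy-id completes, `ssh master`, `ssh slave01`, and `ssh slave02` should log in without a password, which is what the sshfence fencing method relies on.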
