

Hadoop task failed. Modify the configuration settings to solve the problem.



First, an ETL task fails with the following error:

Container [pid=31306,containerID=container_1479290736535_0004_01_000003] is running beyond physical memory limits. Current usage: 2.7 GB of 2.5 GB physical memory used; 4.4 GB of 7.5 GB virtual memory used. Killing container.

Investigation showed that the hourly task had too many files open at once (about 7,000), and each open file held its own write buffers, so the container ran out of physical memory.

Solution:

Modify the program's ORC writer settings to use smaller buffers:

"orc.strip.size": 1024,1024

"orc.block.size": 16,1024,1024

"orc.row.index.stride":

"orc.compress.size": 8: 1024

Second, a NodeManager automatically went down and then recovered on its own:

2016-08-19 14:57:15,192 WARN org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection: Directory /opt/amos/data/hadoop/yarn-local error, used space above threshold of 90.0%, removing from list of valid directories

2016-08-19 14:57:15,192 WARN org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection: Directory /opt/amos/data/hadoop/yarn-log error, used space above threshold of 90.0%, removing from list of valid directories

The NodeManager recovered about 10 minutes later. Why?

[root@~]# vim /opt/amos/conf/hadoop/yarn-site.xml

<property>
  <description>Interval in between cache cleanups.</description>
  <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
  <value>600000</value>
</property>

The cleanup interval of 600000 ms is exactly the 10 minutes after which the NodeManager recovered. The local disk is only 30 GB, but the YARN localizer cache target was set to 40 GB, so the cache pushed disk usage above the 90% threshold until the periodic cleanup freed space again.

[root@~]# df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        30G   25G  3.9G  87% /

Solution: set the YARN localizer cache target to 20 GB (20480 MB) and restart the NodeManager for the change to take effect:

[root@~]# vim /opt/amos/conf/hadoop/yarn-site.xml

<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <value>20480</value>
</property>
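As a quick sanity check, a sketch like the following prints the effective cache settings as the NodeManager would see them (assuming yarn-site.xml is on the classpath; the class name is illustrative, and the fallback values are the stock YARN defaults):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class PrintLocalizerCacheSettings {
        public static void main(String[] args) {
            // YarnConfiguration loads yarn-site.xml from the classpath.
            Configuration conf = new YarnConfiguration();
            long targetMb = conf.getLong(
                "yarn.nodemanager.localizer.cache.target-size-mb", 10240);
            long cleanupMs = conf.getLong(
                "yarn.nodemanager.localizer.cache.cleanup.interval-ms", 600000);
            System.out.println("localizer cache target: " + targetMb + " MB");
            System.out.println("cache cleanup interval: " + (cleanupMs / 60000) + " min");
        }
    }

With the fix applied, this should report a 20480 MB target, comfortably below the 30 GB disk.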

Third, Hadoop kept resubmitting a task that ultimately failed. The container memory for map and reduce needs to be raised to about 1.5 times the JVM heap size:

mapred.child.java.opts: -Xmx2048M -Xms8M (2048 MB × 1.5 = 3072 MB)

mapreduce.map.memory.mb: 1536, changed to 3072 in mapred-site.xml

mapreduce.reduce.memory.mb: 2048, changed to 3072
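These values can also be set per job instead of globally in mapred-site.xml; here is a minimal sketch using the standard MapReduce Job API (the job name is illustrative; mapreduce.map.java.opts and mapreduce.reduce.java.opts are the current per-task heap keys corresponding to the older mapred.child.java.opts):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class MemoryTunedJob {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Keep the container allocation ~1.5x the JVM heap (-Xmx2048m -> 3072 MB),
            // so the JVM's off-heap overhead no longer breaches the physical memory limit.
            conf.set("mapreduce.map.java.opts", "-Xmx2048m");
            conf.set("mapreduce.reduce.java.opts", "-Xmx2048m");
            conf.setInt("mapreduce.map.memory.mb", 3072);
            conf.setInt("mapreduce.reduce.memory.mb", 3072);
            Job job = Job.getInstance(conf, "etl-hourly"); // illustrative name
            // ... set mapper, reducer, input and output paths as usual ...
        }
    }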
