
Datanode memory and GC optimization


Event description:

The DataNode process was using too much memory and the load on the node was too high. Running hdfs dfsadmin -report showed the node's status as Dead. Checking the DataNode log file hadoop-sphuser-datanode-XXX.log revealed the following error; the commands involved are sketched after the log excerpt.

2019-05-11 16:20:14,265 ERROR org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Error compiling report
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:566)
    at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:425)
    at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:406)
    at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:362)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
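
For reference, the checks described above amount to commands along these lines (a sketch only; the log path follows the HADOOP_LOG_DIR used later in this article):

    # list cluster nodes; the affected DataNode appears under "Dead datanodes"
    hdfs dfsadmin -report

    # search the DataNode log for the GC error
    grep "OutOfMemoryError" /data/hadoop-2.7.3/logs/hadoop-sphuser-datanode-*.log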

Analysis:

GC is taking up a large share of the run time while reclaiming very little memory, which exceeds the GC overhead limit.

Sun's official definition: a java.lang.OutOfMemoryError is thrown when more than 98% of the total time is spent in GC and less than 2% of the heap is recovered.
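
Before changing anything, the GC behaviour can be confirmed with the JDK's jstat tool (a minimal sketch; it assumes jps lists the process as "DataNode", the usual name on Hadoop 2.x):

    # sample GC statistics of the DataNode JVM every 5 seconds;
    # the FGCT/GCT columns show cumulative time spent in full GC and in GC overall
    jstat -gcutil $(jps | awk '/DataNode/ {print $1}') 5000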

Solution:

Increase heap memory

Optimize GC

1) Edit the configuration file etc/hadoop/hadoop-env.sh on the NameNode (master) node and modify the HADOOP_DATANODE_OPTS parameter as shown below:

export HADOOP_LOG_DIR=/data/hadoop-2.7.3/logs

export HADOOP_DATANODE_OPTS="-Xmx16G -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 -XX:+CMSParallelRemarkEnabled -XX:+PrintTenuringDistribution"

-Xmx is set to roughly half of the machine's total memory (a quick check is sketched below).
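
A simple way to check total memory before picking the value (a sketch; the 16G above simply implies a host with roughly 32 GB of RAM):

    # total physical memory in GiB; -Xmx is set to about half of it
    free -g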

Parameter description:

The JVM uses different garbage collection mechanisms for the young generation and the old generation.

The concurrent CMS collector prioritizes response time, so it gives low latency and short pauses; CMS is an old-generation collector.

-Xmx16G sets the maximum heap size to 16 GB

-XX:+UseParNewGC collects the young generation with the parallel ParNew collector

-XX:+UseConcMarkSweepGC uses the CMS collector, which runs largely concurrently with the application, for the old generation

-XX:CMSInitiatingOccupancyFraction=80 starts a CMS collection of the old generation when its occupancy reaches 80%

-XX:+CMSParallelRemarkEnabled runs the CMS remark (final marking) phase with multiple threads to shorten that pause

-XX:+PrintTenuringDistribution prints the age distribution of surviving objects in the survivor space at each minor GC
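
Once the options are in place and the DataNode has been restarted (step 2 below), they can be verified on the running process (a sketch; jcmd ships with the JDK):

    # print the JVM flags of the running DataNode process
    jcmd $(jps | awk '/DataNode/ {print $1}') VM.flags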

2) Copy the configuration file to the other nodes in the cluster, then restart the DataNode service (see the sketch below).
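
A minimal sketch of this step, assuming the installation lives under /data/hadoop-2.7.3 (implied by the HADOOP_LOG_DIR above) and using placeholder hostnames dn1..dn3:

    # push the updated hadoop-env.sh to every DataNode (hostnames are placeholders)
    for host in dn1 dn2 dn3; do
      scp /data/hadoop-2.7.3/etc/hadoop/hadoop-env.sh $host:/data/hadoop-2.7.3/etc/hadoop/
    done

    # restart the DataNode on each node so the new JVM options take effect
    ssh dn1 '/data/hadoop-2.7.3/sbin/hadoop-daemon.sh stop datanode && /data/hadoop-2.7.3/sbin/hadoop-daemon.sh start datanode'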

Reference:

https://www.cnblogs.com/hit-zb/p/8651369.html
