Many newcomers are not clear about the causes of and solutions to the "running beyond virtual memory" error in Hive. To help you solve this problem, this article explains it in detail; I hope you will gain something from it.
Problem: a "running beyond virtual memory" error occurred while running an application in Hive. The message is as follows:
Container [pid=28920,containerID=container_1389136889967_0001_01_000121] is running beyond virtual memory limits. Current usage: 1.2 GB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.

Reason: the running container tried to use too much memory and was killed by the NodeManager. [extract] "The NodeManager is killing your container. It sounds like you are trying to use hadoop streaming which is running as a child process of the map-reduce task. The NodeManager monitors the entire process tree of the task and if it eats up more memory than the maximum set in mapreduce.map.memory.mb or mapreduce.reduce.memory.mb respectively, we would expect the NodeManager to kill the task, otherwise your task is stealing memory belonging to other containers, which you don't want."
To resolve this exception, you need to be familiar with YARN's virtual memory management rules. On the YARN platform, CPU, memory, and disk are all abstracted as resources. Two roles manage them: the ResourceManager (RM) is responsible for overall resource scheduling, while a NodeManager daemon on each node monitors that node's resources. For each application, an ApplicationMaster (AM) requests the containers in which the Map or Reduce tasks run. The relevant properties are as follows:
(1) yarn.nodemanager.resource.memory-mb
The total amount of physical memory that the NodeManager can allocate to containers. The default is 8 * 1024 MB (8 GB).
(2) yarn.nodemanager.vmem-pmem-ratio
The ratio of virtual memory to physical memory allowed per container. The default is 2.1, meaning that for each 1 MB of physical memory used, up to 2.1 MB of virtual memory may be used. (A sample yarn-site.xml snippet with these values follows this list.)
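For reference, here is what these two properties would look like in yarn-site.xml with their default values (a minimal sketch; check the defaults shipped with your own Hadoop version):

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>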
The second property, the ratio, is what governs virtual memory use. When the virtual memory that YARN measures for a container exceeds 2.1 times mapreduce.map.memory.mb or mapreduce.reduce.memory.mb (configured in mapred-site.xml), the exception shown above occurs. The default value of both mapreduce.map.memory.mb and mapreduce.reduce.memory.mb is 1024 MB; comparing this with the virtual memory figure reported in the exception shows that the container used more than 1024 MB * 2.1, so the NodeManager daemon kills the container and the whole MR job fails. To avoid the exception, we only need to raise this ratio. The required adjustment is usually small and can be set according to the specific situation.
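Working through the numbers in the error message above makes the limit concrete:

1024 MB physical (default container size) * 2.1 (vmem-pmem-ratio) = 2150.4 MB ≈ 2.1 GB allowed virtual memory
2.2 GB virtual memory actually used > 2.1 GB limit, so the NodeManager kills the container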
Two solutions:
1. Modify the yarn-site.xml configuration file to increase yarn.nodemanager.vmem-pmem-ratio, as described above. This change only takes effect after the cluster is restarted, and you must make sure the ResourceManager and NodeManager processes on all nodes actually restart. (I have run into a case where the cluster was restarted with stop-yarn.sh but the ResourceManager process never stopped, so the change did not take effect; you can confirm with the jps command after stop-yarn.sh finishes.)
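For example, raising the ratio in yarn-site.xml might look like this (the value 5 is only an illustration, not a recommendation; tune it to your workload):

<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>5</value>
</property>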
2. Modify the following two pairs of values in the mapred-site.xml configuration file. Note that the map and reduce java.opts values must be smaller than the corresponding mapreduce.map.memory.mb and mapreduce.reduce.memory.mb values, since the JVM heap has to fit inside the container. (The actual values you configure should be adjusted to the memory of your machines and the needs of your application.)
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1536</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024M</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>3072</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2560M</value>
</property>
You can also view and set these values directly on the hive command line:
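A minimal sketch of such a session (the printed default is illustrative; set prints a property when given no value and overrides it for the current session when given one):

hive> set mapreduce.map.memory.mb;
mapreduce.map.memory.mb=1024
hive> set mapreduce.map.memory.mb=1536;
hive> set mapreduce.map.java.opts=-Xmx1024M;
hive> set mapreduce.reduce.memory.mb=3072;
hive> set mapreduce.reduce.java.opts=-Xmx2560M;

Values set this way apply only to queries run in that session, which is handy for testing a setting before editing mapred-site.xml.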