2025-01-16 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 report
This article walks through the parameter configuration of YARN in a CDH cluster. It is fairly detailed and should be a useful reference; if you are interested, read on!
Parameter configuration of YARN in CDH Cluster
Since Hadoop 2.0, MapReduce is no longer a monolithic framework for offline batch MR jobs; it was upgraded to MapReduceV2 (YARN), which separates resource scheduling from task execution. Recent CDH releases integrate both MapReduceV1 and MapReduceV2 (YARN). If you need unified resource scheduling across the cluster, YARN is the recommended choice.
CDH changes a number of YARN parameters and adds Chinese descriptions for them. This article focuses on the parameters whose configuration in CDH differs from MapReduceV1.
I. CPU configuration
ApplicationMaster virtual CPU cores
yarn.app.mapreduce.am.resource.cpu-vcores // number of CPU cores occupied by each ApplicationMaster (Gateway -- resource management)
Container virtual CPU cores
yarn.nodemanager.resource.cpu-vcores // maximum number of CPU cores a single NodeManager can allocate (NodeManager -- resource management)
Conclusion: the total number of CPU cores requested by the ApplicationMasters running on a NodeManager must not exceed that NodeManager's maximum number of CPU cores.
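Putting the two CPU settings together, a minimal sketch of the corresponding entries in yarn-site.xml and mapred-site.xml might look as follows (the values here are illustrative, not recommendations for any particular cluster):

```xml
<!-- yarn-site.xml: CPU cores a single NodeManager may hand out (illustrative value) -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>

<!-- mapred-site.xml: cores requested by each MapReduce ApplicationMaster;
     must stay at or below the NodeManager capacity above -->
<property>
  <name>yarn.app.mapreduce.am.resource.cpu-vcores</name>
  <value>1</value>
</property>
```

In Cloudera Manager these are edited through the Gateway and NodeManager resource-management pages rather than by hand, but the underlying XML keys are the same.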
II. Memory configuration
Container memory
yarn.nodemanager.resource.memory-mb // maximum memory a single NodeManager can allocate (NodeManager -- resource management); Memory Total = per-NodeManager memory * number of nodes
Conclusion: the memory consumed by submitted tasks (Memory Used) must stay below Memory Total.
Map task memory
mapreduce.map.memory.mb // physical memory allocated to each Map task of a job (Gateway -- resource management)
Conclusion: the memory requested by a Map or Reduce task should not exceed that of the ApplicationMaster.
Maximum container memory
yarn.scheduler.maximum-allocation-mb // maximum memory that can be requested for a single task (ResourceManager -- resource management)
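The three memory settings above live in different files. A sketch with illustrative values (not tuning advice; size these against your nodes' physical RAM):

```xml
<!-- yarn-site.xml: total memory a single NodeManager may allocate to containers -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>16384</value>
</property>

<!-- yarn-site.xml: largest single container the scheduler will grant -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>

<!-- mapred-site.xml: physical memory per Map task; must not exceed the
     scheduler maximum above, or the request will be rejected -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
```

A task request larger than yarn.scheduler.maximum-allocation-mb fails outright, while requests that merely exhaust yarn.nodemanager.resource.memory-mb simply wait for capacity.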
III. Speculative execution (running duplicate attempts of the same Map or Reduce in parallel)
Map task speculative execution
mapreduce.map.speculative // Gateway
Reduce task speculative execution
mapreduce.reduce.speculative // Gateway
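Both flags are booleans in mapred-site.xml. A minimal sketch enabling speculative execution for both phases:

```xml
<!-- mapred-site.xml: let YARN launch backup attempts of straggling Map tasks -->
<property>
  <name>mapreduce.map.speculative</name>
  <value>true</value>
</property>

<!-- mapred-site.xml: same for Reduce tasks -->
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>true</value>
</property>
```

Whichever attempt of a task finishes first wins and the other is killed, so speculation trades extra cluster capacity for lower job latency; it is usually worth disabling on heavily loaded clusters.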
IV. JVM reuse
Enable Ubertask optimization:
mapreduce.job.ubertask.enable | (default false) // true enables JVM reuse (Gateway -- performance)
Whether JVM reuse applies is decided by the following parameters:
Ubertask Max Map
mapreduce.job.ubertask.maxmaps // a job is eligible for JVM reuse only if it has no more than this many Map tasks (Gateway -- performance)
Ubertask Max Reduce
mapreduce.job.ubertask.maxreduces // a job is eligible only if it has no more than this many Reduce tasks; currently only 1 is supported (Gateway -- performance)
Maximum job size for Ubertask
mapreduce.job.ubertask.maxbytes // threshold on the job's input size; defaults to the block size (Gateway -- performance)
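Combining the four ubertask settings, a sketch for mapred-site.xml (the maxmaps value of 9 and maxreduces value of 1 match the stock Hadoop defaults; maxbytes is left unset here so it falls back to the block size):

```xml
<!-- mapred-site.xml: run sufficiently small jobs inside the AM's own JVM -->
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>true</value>
</property>

<!-- eligibility limits: at most this many Map and Reduce tasks -->
<property>
  <name>mapreduce.job.ubertask.maxmaps</name>
  <value>9</value>
</property>
<property>
  <name>mapreduce.job.ubertask.maxreduces</name>
  <value>1</value>
</property>
```

When a job satisfies all three thresholds, its tasks run sequentially inside the ApplicationMaster container instead of spawning one container per task, which avoids container-startup overhead for small jobs.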
V. Other parameters
Point Spark at the YARN history server by adding the following line to spark-defaults.conf:
spark.yarn.historyServer.address=http://cloud003:18080/
That covers the parameter configuration of YARN in a CDH cluster. Thanks for reading, and I hope this article is a useful reference!