In this article, the editor introduces the relevant configuration parameters of MapReduce. The content is analyzed and described from a professional point of view; I hope you gain something from reading it.
The configuration parameters of MapReduce fall into two parts: JobHistory Server parameters and application (job) parameters. The JobHistory Server can run on an independent node, while application parameters can either be stored in mapred-site.xml as defaults or specified separately when an application is submitted. Parameters specified by the user override the defaults.
The following parameters are all set in mapred-site.xml.
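Every setting discussed below is declared the same way: a <property> element with a <name> and a <value> inside the <configuration> root of mapred-site.xml. A minimal sketch of the file layout follows; the parameter shown is simply one of the names listed later and stands in for any of them:

```xml
<?xml version="1.0"?>
<!-- mapred-site.xml: every parameter in this article is declared in this form -->
<configuration>
  <property>
    <name>mapreduce.job.priority</name>  <!-- any parameter name from the sections below -->
    <value>NORMAL</value>                <!-- the desired value -->
  </property>
</configuration>
```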
1. MapReduce JobHistory-related configuration parameters
Configure these in the mapred-site.xml of the node where the JobHistory Server runs; a complete example follows the list.
(1) mapreduce.jobhistory.address
Parameter explanation: MapReduce JobHistory Server address.
Default value: 0.0.0.0:10020
(2) mapreduce.jobhistory.webapp.address
Parameter explanation: MapReduce JobHistory Server Web UI address.
Default value: 0.0.0.0:19888
(3) mapreduce.jobhistory.intermediate-done-dir
Parameter explanation: the directory where the history files generated by running MapReduce jobs are written (before being moved to the done directory).
Default value: /mr-history/tmp
(4) mapreduce.jobhistory.done-dir
Parameter explanation: the directory where history files managed by the MR JobHistory Server are stored.
Default value: /mr-history/done
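As a concrete sketch, the fragment below places the four JobHistory parameters in mapred-site.xml with the default values listed above; in a real deployment, 0.0.0.0 would normally be replaced by the hostname of the JobHistory Server node:

```xml
<!-- mapred-site.xml on the JobHistory Server node (sketch; values are the defaults listed above) -->
<configuration>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>0.0.0.0:10020</value>          <!-- RPC address of the JobHistory Server -->
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>0.0.0.0:19888</value>          <!-- Web UI address of the JobHistory Server -->
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/mr-history/tmp</value>        <!-- where running jobs write their history files -->
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/mr-history/done</value>       <!-- where completed history files are archived -->
  </property>
</configuration>
```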
2. MapReduce job configuration parameters
These parameters can be set in the client's mapred-site.xml as the default configuration for MapReduce jobs, and they can also be customized per job when the job is submitted (an example follows the table).
Parameter name (default value): description
mapreduce.job.name (none): job name
mapreduce.job.priority (NORMAL): job priority
yarn.app.mapreduce.am.resource.mb (1536): amount of memory used by the MR ApplicationMaster
yarn.app.mapreduce.am.resource.cpu-vcores (1): number of virtual CPUs used by the MR ApplicationMaster
mapreduce.am.max-attempts (2): maximum number of attempts for the MR ApplicationMaster
mapreduce.map.memory.mb (1024): amount of memory required by each Map Task
mapreduce.map.cpu.vcores (1): number of virtual CPUs required by each Map Task
mapreduce.map.maxattempts (4): maximum number of attempts for each Map Task
mapreduce.reduce.memory.mb (1024): amount of memory required by each Reduce Task
mapreduce.reduce.cpu.vcores (1): number of virtual CPUs required by each Reduce Task
mapreduce.reduce.maxattempts (4): maximum number of attempts for each Reduce Task
mapreduce.map.speculative (false): whether to enable the speculative execution mechanism for Map Tasks
mapreduce.reduce.speculative (false): whether to enable the speculative execution mechanism for Reduce Tasks
mapreduce.job.queuename (default): queue to which the job is submitted
mapreduce.task.io.sort.mb (100): size of the task-internal sort buffer, in MB
mapreduce.map.sort.spill.percent (0.8): threshold at which the Map-phase sort buffer is spilled to disk (as a fraction of the buffer size)
mapreduce.reduce.shuffle.parallelcopies (5): number of threads each Reduce Task starts to copy data concurrently
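For example, a client-side mapred-site.xml might raise the per-task memory and fix the submission queue as job defaults. The values below are purely illustrative, not recommendations:

```xml
<!-- Client-side mapred-site.xml: default job parameters (sketch; illustrative values) -->
<configuration>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>                 <!-- memory per Map Task, raised from the 1024 MB default -->
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>                 <!-- memory per Reduce Task -->
  </property>
  <property>
    <name>mapreduce.job.queuename</name>
    <value>default</value>              <!-- queue the job is submitted to -->
  </property>
</configuration>
```

An individual job can still override any of these at submission time, for example by passing -D mapreduce.map.memory.mb=4096 on the command line (when the driver goes through ToolRunner/GenericOptionsParser) or by calling set() on the job's Configuration before submitting it.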
Note that MRv2 renamed all of the MRv1 configuration parameters but remains compatible with the old MRv1 names; when an old name is used, a warning is logged to remind the user that the parameter is deprecated. For the full mapping between old and new MapReduce parameter names, refer to the Java class org.apache.hadoop.mapreduce.util.ConfigUtil. Examples are as follows:
Expired parameter name → new parameter name
mapred.job.name → mapreduce.job.name
mapred.job.priority → mapreduce.job.priority
mapred.job.queue.name → mapreduce.job.queuename
mapred.map.tasks.speculative.execution → mapreduce.map.speculative
mapred.reduce.tasks.speculative.execution → mapreduce.reduce.speculative
io.sort.factor → mapreduce.task.io.sort.factor
io.sort.mb → mapreduce.task.io.sort.mb
These are the MapReduce-related parameters the editor shares with you. If you have similar doubts, you can refer to the above analysis. If you want to learn more, you are welcome to follow the industry information channel.