2025-03-26 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article walks through a worked example of local MR mode in apache-hive-1.2.1. It is practical material, shared here for reference; hopefully you will take something away from it.
Many of the SQL statements run in Hive are relatively small: little input data and little computation. For such queries Hive can use local MR mode, in which the job is executed locally on the client by pulling the input data back to the client, instead of launching a distributed MapReduce job on the cluster.
Three parameters determine whether a query runs in local MR mode:
hive.exec.mode.local.auto=true — whether local MR mode may be started at all
hive.exec.mode.local.auto.input.files.max=4 — maximum number of input files (default 4)
hive.exec.mode.local.auto.inputbytes.max=134217728 — maximum total input size in bytes (default 128 MB)
Note:
hive.exec.mode.local.auto is the precondition: local MR mode can be enabled only if it is set to true.
hive.exec.mode.local.auto.input.files.max and hive.exec.mode.local.auto.inputbytes.max are checked together: local MR runs only if both limits are satisfied at the same time.
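These settings can be issued per session with `set` at the hive prompt (as in the transcripts below), or made persistent in hive-site.xml. A minimal configuration sketch, using the default limits quoted above (adjust the values to your workload):

```xml
<!-- hive-site.xml fragment: enable automatic local MR mode -->
<property>
  <name>hive.exec.mode.local.auto</name>
  <value>true</value>
</property>
<property>
  <name>hive.exec.mode.local.auto.input.files.max</name>
  <value>4</value>
</property>
<property>
  <name>hive.exec.mode.local.auto.inputbytes.max</name>
  <value>134217728</value>
</property>
```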
Test setup, two tables:
t1 => 5 input files
t2 => 2 input files
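To reason about the experiments below, here is a small Python sketch of the three-condition rule described above (this is only the documented gating rule, not Hive's actual implementation; the byte sizes passed in are made-up small values for illustration):

```python
# Sketch of Hive's local-mode gating rule (simplified; not the real source code).
DEFAULT_FILES_MAX = 4
DEFAULT_BYTES_MAX = 134217728  # 128 MB

def qualifies_for_local_mr(auto, num_files, total_bytes,
                           files_max=DEFAULT_FILES_MAX,
                           bytes_max=DEFAULT_BYTES_MAX):
    """Local MR runs only when auto mode is on AND both input limits hold."""
    return auto and num_files <= files_max and total_bytes <= bytes_max

# t2 has 2 input files, t1 has 5:
print(qualifies_for_local_mr(False, 2, 1464))              # False: auto mode is off
print(qualifies_for_local_mr(True, 2, 1464))               # True: 2 <= 4 files
print(qualifies_for_local_mr(True, 5, 1920))               # False: 5 > 4 files
print(qualifies_for_local_mr(True, 5, 1920, files_max=5))  # True after raising the limit
```

This mirrors the four transcripts that follow: only the second and fourth runs qualify for local mode.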
Run 1: local mode disabled, small table t2 ==> distributed job.

hive> set hive.exec.mode.local.auto=false;
hive> select * from t2 order by id;
Query ID = hadoop_20160125132157_d767beb0-f674-4962-ac3c-8fbdd2949d01
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers: set hive.exec.reducers.max=
In order to set a constant number of reducers: set mapreduce.job.reduces=
Starting Job = job_1453706740954_0006, Tracking URL = http://hftest0001.webex.com:8088/proxy/application_1453706740954_0006/
Kill Command = /home/hadoop/hadoop-2.7.1/bin/hadoop job -kill job_1453706740954_0006
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-01-25 13:22:.. Stage-1 map = 0%, reduce = 0%
2016-01-25 13:22:.. Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.47 sec
2016-01-25 13:22:40,207 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.68 sec
MapReduce Total cumulative CPU time: 3 seconds 680 msec
Ended Job = job_1453706740954_0006
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1  Cumulative CPU: 3.68 sec  HDFS Read: 5465  HDFS Write: 32  SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 680 msec
OK

Run 2: local mode enabled, t2 (2 files <= 4) ==> launched in local mode.

hive> set hive.exec.mode.local.auto=true;
hive> select * from t2 order by id;
Automatically selecting local only mode for query    ==> launched in local mode
Query ID = hadoop_20160125132322_9649b904-ad87-47fa-89ad-5e5f67315ac8
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
Job running in-process (local Hadoop)
2016-01-25 13:23:27,192 Stage-1 map = 100%, reduce = 100%
Ended Job = job_local1850780899_0002
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 1464  HDFS Write: 1618252652  SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK

Run 3: local mode enabled, t1 (5 files > 4) ==> still runs distributed.

hive> set hive.exec.mode.local.auto=true;
hive> select * from t1 order by id;
Query ID = hadoop_20160125132411_3ecd7ee9-8ccb-4bcc-8582-6d797c13babd
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
Cannot run job locally: Number of Input Files (= 5) is larger than hive.exec.mode.local.auto.input.files.max (= 4)    ==> 5 > 4, so the job still runs distributed
Starting Job = job_1453706740954_0007, Tracking URL = http://hftest0001.webex.com:8088/proxy/application_1453706740954_0007/
Kill Command = /home/hadoop/hadoop-2.7.1/bin/hadoop job -kill job_1453706740954_0007
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-01-25 13:24:38,775 Stage-1 map = 0%, reduce = 0%
2016-01-25 13:24:52,115 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.55 sec
2016-01-25 13:24:59,548 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.84 sec
MapReduce Total cumulative CPU time: 3 seconds 840 msec
Ended Job = job_1453706740954_0007
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1  Cumulative CPU: 3.84 sec  HDFS Read: 5814  HDFS Write: 56  SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 840 msec
OK

Run 4: raise the input-file limit to 5, t1 ==> local mode is now selected.

hive> set hive.exec.mode.local.auto=true;
hive> set hive.exec.mode.local.auto.input.files.max=5;    ==> set the maximum number of input files to 5
hive> select * from t1 order by id;
Automatically selecting local only mode for query    ==> launched in local mode
Query ID = hadoop_20160125132558_db2f4fca-f6bf-4b91-9569-c779a3b13386
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
Job running in-process (local Hadoop)
2016-01-25 13:26:03,232 Stage-1 map = 100%, reduce = 100%
Ended Job = job_local264155444_0003
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 1920  HDFS Write: 1887961792  SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK

This concludes the example analysis of local MR in apache-hive-1.2.1. Hopefully the content above is helpful to you; if you found the article useful, please share it so that more people can see it.
© 2024 shulou.com SLNews company. All rights reserved.