2025-04-09 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
1. Software environment

OS: RHEL6
Software: jdk-8u45, hadoop-2.8.1.tar.gz, ssh

Role  Host       IP address
NN    hadoop01   xx.xx.xx.xx
DN    hadoop02   xx.xx.xx.xx
DN    hadoop03   xx.xx.xx.xx
DN    hadoop04   xx.xx.xx.xx
DN    hadoop05   xx.xx.xx.xx
For a pseudo-distributed deployment, only host hadoop01 is needed; refer to the pseudo-distributed deployment guide for the software installation steps.
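For the fully distributed case, every node must be able to resolve the others by hostname. One common place to configure this is /etc/hosts; the addresses below are the placeholders from the table above, so substitute your real ones:

```
# /etc/hosts on every node -- placeholder addresses, replace with real ones
xx.xx.xx.xx   hadoop01    # NN
xx.xx.xx.xx   hadoop02    # DN
xx.xx.xx.xx   hadoop03    # DN
xx.xx.xx.xx   hadoop04    # DN
xx.xx.xx.xx   hadoop05    # DN
```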
2. Configure mapreduce and yarn

[hadoop@hadoop000 hadoop]$ cp mapred-site.xml.template mapred-site.xml

Configure mapreduce to run on YARN:

[hadoop@hadoop000 hadoop]$ vi mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

Configure the YARN shuffle service:

[hadoop@hadoop000 hadoop]$ vi yarn-site.xml
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
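After editing, it is worth grepping the two files to confirm the values landed where you expect. The sketch below writes the same two snippets to a scratch directory and checks them back; in a real cluster you would run the greps against $HADOOP_HOME/etc/hadoop instead (that path is an assumption about your layout):

```shell
# hypothetical sanity check -- writes the two snippets to a scratch dir and greps them back
conf_dir="$(mktemp -d)"

cat > "$conf_dir/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF

cat > "$conf_dir/yarn-site.xml" <<'EOF'
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
EOF

# each grep should report exactly one matching line
grep -c '<value>yarn</value>' "$conf_dir/mapred-site.xml"
grep -c '<value>mapreduce_shuffle</value>' "$conf_dir/yarn-site.xml"
```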
3. Submit the test jar to calculate pi
Job naming format: job_<Unix timestamp in milliseconds>_<sequence number>, e.g. job_1524804813835_0001.
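The two fields of a job ID can be pulled apart in the shell. A quick sketch (the timestamp is in milliseconds, so trim the last three digits before handing it to GNU date):

```shell
job_id="job_1524804813835_0001"

# strip the "job_" prefix and split the remaining fields on "_"
ts_ms="$(echo "$job_id" | cut -d_ -f2)"   # timestamp in milliseconds
seq="$(echo "$job_id" | cut -d_ -f3)"     # per-cluster sequence number

echo "timestamp(ms)=$ts_ms seq=$seq"
# convert milliseconds to a human-readable UTC date (GNU date syntax)
date -u -d "@$((ts_ms / 1000))" '+%Y-%m-%d %H:%M:%S UTC'
```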
[hadoop@hadoop01 sbin]$ ./start-yarn.sh
[hadoop@hadoop01 hadoop]$ find ./ -name "*examples*"
./lib/native/examples
./share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.8.1-sources.jar
./share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.8.1-test-sources.jar
./share/hadoop/mapreduce/lib-examples
./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar
./share/doc/hadoop/hadoop-auth-examples
./share/doc/hadoop/hadoop-mapreduce-examples
./share/doc/hadoop/api/org/apache/hadoop/examples
./share/doc/hadoop/api/org/apache/hadoop/security/authentication/examples
[hadoop@hadoop01 hadoop]$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar pi 5 10
Number of Maps = 5
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
18-04-27 12:58:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18-04-27 12:58:50 INFO input.FileInputFormat: Total input files to process: 5
18-04-27 12:58:50 INFO mapreduce.JobSubmitter: number of splits:5
18-04-27 12:58:50 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1524804813835_0001
18-04-27 12:58:51 INFO impl.YarnClientImpl: Submitted application application_1524804813835_0001
18-04-27 12:58:51 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1524804813835_0001/
18-04-27 12:58:51 INFO mapreduce.Job: Running job: job_1524804813835_0001
18-04-27 12:59:03 INFO mapreduce.Job: Job job_1524804813835_0001 running in uber mode: false
18-04-27 12:59:03 INFO mapreduce.Job: map 0% reduce 0%
18-04-27 12:59:18 INFO mapreduce.Job: map 100% reduce 0%
18-04-27 12:59:25 INFO mapreduce.Job: map 100% reduce 100%
18-04-27 12:59:26 INFO mapreduce.Job: Job job_1524804813835_0001 completed successfully
18-04-27 12:59:27 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=116
FILE: Number of bytes written=819783
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=1350
HDFS: Number of bytes written=215
HDFS: Number of read operations=23
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=5
Launched reduce tasks=1
Data-local map tasks=5
Total time spent by all maps in occupied slots (ms) = 64938
Total time spent by all reduces in occupied slots (ms) = 4704
Total time spent by all map tasks (ms) = 64938
Total time spent by all reduce tasks (ms) = 4704
Total vcore-milliseconds taken by all map tasks=64938
Total vcore-milliseconds taken by all reduce tasks=4704
Total megabyte-milliseconds taken by all map tasks=66496512
Total megabyte-milliseconds taken by all reduce tasks=4816896
Map-Reduce Framework
Map input records=5
Map output records=10
Map output bytes=90
Map output materialized bytes=140
Input split bytes=760
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=140
Reduce input records=10
Reduce output records=0
Spilled Records=20
Shuffled Maps = 5
Failed Shuffles=0
Merged Map outputs=5
GC time elapsed (ms) = 1428
CPU time spent (ms) = 5740
Physical memory (bytes) snapshot=1536856064
Virtual memory (bytes) snapshot=12578734080
Total committed heap usage (bytes) = 1152385024
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=590
File Output Format Counters
Bytes Written=97
Job Finished in 37.717 seconds
Estimated value of Pi is 3.28000000000000000000
[hadoop@hadoop01 hadoop] $
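The estimate of 3.28 is coarse because only 5 × 10 = 50 samples were drawn; rerunning with larger arguments (more maps, more samples per map) tightens it. The idea behind the estimator can be sketched in plain awk: sample points in the unit square and count how many fall inside the quarter circle, so the inside fraction approximates pi/4. This stand-alone sketch uses a deterministic midpoint grid rather than the Halton sequence the Hadoop example actually uses:

```shell
# stand-alone sketch of the pi estimator -- not the Hadoop job itself
n=500   # n*n sample points on a midpoint grid over the unit square
awk -v n="$n" 'BEGIN {
    inside = 0
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++) {
            x = (i + 0.5) / n
            y = (j + 0.5) / n
            if (x * x + y * y <= 1.0) inside++
        }
    # fraction of points inside the quarter circle approximates pi/4
    printf "%.4f\n", 4 * inside / (n * n)
}'
```

With n=500 this already lands close to 3.1416, which is why `pi 5 10` (50 samples) is only a smoke test, not a serious estimate.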