

How to run wordcount in Hadoop-1.2.1


This article explains how to run the wordcount example that ships with Hadoop-1.2.1. Many people run into questions the first time they try it, so the steps below are kept simple and practical. Follow along on your own cluster and the procedure should answer most of those doubts.

1. Create two text files in a test directory under the home directory

[wukong@bd01 ~]$ mkdir test
[wukong@bd01 ~]$ cd test
[wukong@bd01 test]$ ls
[wukong@bd01 test]$ echo "hello world" > text1
[wukong@bd01 test]$ echo "hello hadoop" > text2
[wukong@bd01 test]$ cat text1
hello world
[wukong@bd01 test]$ cat text2
hello hadoop
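
As a quick aside that is not part of the original write-up, the expected word counts can be previewed with ordinary shell tools; this makes it easy to verify the MapReduce result at the end:

# local preview of the counts wordcount should produce (run from the ~/test directory)
[wukong@bd01 test]$ cat text1 text2 | tr ' ' '\n' | sort | uniq -c
      1 hadoop
      2 hello
      1 world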

2. Start Hadoop

[wukong@bd01 bin]$ ./start-all.sh
starting namenode, logging to /home/wukong/a_usr/hadoop-1.2.1/libexec/../logs/hadoop-wukong-namenode-bd01.out
bd02: starting datanode, logging to /home/wukong/a_usr/hadoop-1.2.1/libexec/../logs/hadoop-wukong-datanode-bd02.out
bd01: starting secondarynamenode, logging to /home/wukong/a_usr/hadoop-1.2.1/libexec/../logs/hadoop-wukong-secondarynamenode-bd01.out
starting jobtracker, logging to /home/wukong/a_usr/hadoop-1.2.1/libexec/../logs/hadoop-wukong-jobtracker-bd01.out
bd02: starting tasktracker, logging to /home/wukong/a_usr/hadoop-1.2.1/libexec/../logs/hadoop-wukong-tasktracker-bd02.out
[wukong@bd01 bin]$ jps
1440 Jps
1132 NameNode
1280 SecondaryNameNode
1364 JobTracker
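
The jps listing above only shows the daemons on the master bd01. As an optional sanity check that is not in the original session, you can confirm that the DataNode and TaskTracker also started on the slave bd02 named in the startup log, assuming passwordless ssh between the nodes is configured:

# run jps remotely on the slave; DataNode and TaskTracker should appear (PIDs will differ)
# if jps is not on the non-interactive PATH, give its full path under $JAVA_HOME/bin
[wukong@bd01 bin]$ ssh bd02 jps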

3. Put the newly created folder on HDFS

[wukong@bd01 ~]$ a_usr/hadoop-1.2.1/bin/hadoop fs -put ./test test_in
[wukong@bd01 ~]$ a_usr/hadoop-1.2.1/bin/hadoop fs -ls ./test_in
Found 2 items
-rw-r--r--   1 wukong supergroup         12 2014-07-31 15:38 /user/wukong/test_in/text1
-rw-r--r--   1 wukong supergroup         13 2014-07-31 15:38 /user/wukong/test_in/text2
[wukong@bd01 ~]$ a_usr/hadoop-1.2.1/bin/hadoop fs -ls
Found 1 items
drwxr-xr-x   - wukong supergroup          0 2014-07-31 15:38 /user/wukong/test_in
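
Optionally, before launching the job you can confirm that the file contents arrived in HDFS intact. This extra check is not in the original transcript; fs -cat is a standard sub-command of the Hadoop 1.x fs shell:

# read the uploaded files straight from HDFS
[wukong@bd01 ~]$ a_usr/hadoop-1.2.1/bin/hadoop fs -cat test_in/text1
hello world
[wukong@bd01 ~]$ a_usr/hadoop-1.2.1/bin/hadoop fs -cat test_in/text2
hello hadoop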

4. Run the wordcount program

[wukong@bd01 hadoop-1.2.1]$ bin/hadoop jar hadoop-examples-1.2.1.jar wordcount test_in test_out
14/07/31 15:43:44 INFO input.FileInputFormat: Total input paths to process: 2
14/07/31 15:43:44 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/07/31 15:43:44 WARN snappy.LoadSnappy: Snappy native library not loaded
14/07/31 15:43:46 INFO mapred.JobClient: Running job: job_201407311530_0001
14/07/31 15:43:47 INFO mapred.JobClient:  map 0% reduce 0%
14/07/31 15:44:11 INFO mapred.JobClient:  map 100% reduce 0%
14/07/31 15:44:27 INFO mapred.JobClient:  map 100% reduce 100%
14/07/31 15:44:29 INFO mapred.JobClient: Job complete: job_201407311530_0001
14/07/31 15:44:29 INFO mapred.JobClient: Counters: 29
14/07/31 15:44:29 INFO mapred.JobClient:   Job Counters
14/07/31 15:44:29 INFO mapred.JobClient:     Launched reduce tasks=1
14/07/31 15:44:29 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=43406
14/07/31 15:44:29 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/07/31 15:44:29 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/07/31 15:44:29 INFO mapred.JobClient:     Launched map tasks=2
14/07/31 15:44:29 INFO mapred.JobClient:     Data-local map tasks=2
14/07/31 15:44:29 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=14688
14/07/31 15:44:29 INFO mapred.JobClient:   File Output Format Counters
14/07/31 15:44:29 INFO mapred.JobClient:     Bytes Written=25
14/07/31 15:44:29 INFO mapred.JobClient:   FileSystemCounters
14/07/31 15:44:29 INFO mapred.JobClient:     FILE_BYTES_READ=55
14/07/31 15:44:29 INFO mapred.JobClient:     HDFS_BYTES_READ=239
14/07/31 15:44:29 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=176694
14/07/31 15:44:29 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=25
14/07/31 15:44:29 INFO mapred.JobClient:   File Input Format Counters
14/07/31 15:44:29 INFO mapred.JobClient:     Bytes Read=25
14/07/31 15:44:29 INFO mapred.JobClient:   Map-Reduce Framework
14/07/31 15:44:29 INFO mapred.JobClient:     Map output materialized bytes=61
14/07/31 15:44:29 INFO mapred.JobClient:     Map input records=2
14/07/31 15:44:29 INFO mapred.JobClient:     Reduce shuffle bytes=61
14/07/31 15:44:29 INFO mapred.JobClient:     Spilled Records=8
14/07/31 15:44:29 INFO mapred.JobClient:     Map output bytes=41
14/07/31 15:44:29 INFO mapred.JobClient:     Total committed heap usage (bytes)=417439744
14/07/31 15:44:29 INFO mapred.JobClient:     CPU time spent (ms)=2880
14/07/31 15:44:29 INFO mapred.JobClient:     Combine input records=4
14/07/31 15:44:29 INFO mapred.JobClient:     SPLIT_RAW_BYTES=214
14/07/31 15:44:29 INFO mapred.JobClient:     Reduce input records=4
14/07/31 15:44:29 INFO mapred.JobClient:     Reduce input groups=3
14/07/31 15:44:29 INFO mapred.JobClient:     Combine output records=4
14/07/31 15:44:29 INFO mapred.JobClient:     Physical memory (bytes) snapshot=418050048
...
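
A note worth adding here (it is not in the original session): MapReduce refuses to start if the output directory already exists, so running the same command a second time against test_out will fail until the old output is deleted. In the Hadoop 1.x fs shell, -rmr removes a directory recursively:

# delete the previous output, then re-run the job
[wukong@bd01 hadoop-1.2.1]$ bin/hadoop fs -rmr test_out
[wukong@bd01 hadoop-1.2.1]$ bin/hadoop jar hadoop-examples-1.2.1.jar wordcount test_in test_out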

You can check the output after the job finishes. (Note that the second command below fails only because the hadoop binary was given with a wrong relative path; the corrected command follows it.)

[wukong@bd01 hadoop-1.2.1]$ bin/hadoop fs -ls
Found 2 items
drwxr-xr-x   - wukong supergroup          0 2014-07-31 15:38 /user/wukong/test_in
drwxr-xr-x   - wukong supergroup          0 2014-07-31 15:44 /user/wukong/test_out
[wukong@bd01 hadoop-1.2.1]$ a_usr/hadoop-1.2.1/bin/hadoop fs -ls ./test_out
-bash: a_usr/hadoop-1.2.1/bin/hadoop: No such file or directory
[wukong@bd01 hadoop-1.2.1]$ bin/hadoop fs -ls ./test_out
Found 3 items
-rw-r--r--   1 wukong supergroup          0 2014-07-31 15:44 /user/wukong/test_out/_SUCCESS
drwxr-xr-x   - wukong supergroup          0 2014-07-31 15:43 /user/wukong/test_out/_logs
-rw-r--r--   1 wukong supergroup         25 2014-07-31 15:44 /user/wukong/test_out/part-r-00000
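
As another optional step that the original session does not show, the whole output directory can be copied back to the local filesystem with fs -get; the local target path below is only an example:

# copy the job output from HDFS to a local directory (local path is illustrative)
[wukong@bd01 hadoop-1.2.1]$ bin/hadoop fs -get test_out /home/wukong/test_out_local
[wukong@bd01 hadoop-1.2.1]$ ls /home/wukong/test_out_local
_SUCCESS  _logs  part-r-00000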

5. The final result is in part-r-00000!

[wukong@bd01 hadoop-1.2.1]$ bin/hadoop fs -cat ./test_out/part-r-00000
hadoop  1
hello   2
world   1

At this point, the walkthrough of "how to run wordcount in Hadoop-1.2.1" is complete. Hopefully it has cleared up your doubts; pairing the theory with hands-on practice is the best way to learn, so go and try it yourself. If you want to keep learning about this topic, stay tuned for more practical articles on the site.
