2025-01-19 Update From: SLTechnology News & Howtos
How do you use Hadoop with Python to implement a word-count statistics function? This article walks through the problem and its solution in detail, in the hope of giving readers facing the same problem a simple, workable method.
Python map/reduce
Write map.py
#!/usr/bin/env python
import sys

def read_inputs(file):
    # Yield each input line as a list of whitespace-separated words.
    for line in file:
        line = line.strip()
        yield line.split()

def main():
    file = sys.stdin
    lines = read_inputs(file)
    for words in lines:
        for word in words:
            # Emit one tab-separated "word<TAB>1" pair per word.
            print("{}\t{}".format(word, 1))

if __name__ == "__main__":
    main()
test
echo "Hello world Bye world" | ./map.py
Hello	1
world	1
Bye	1
world	1
Write reduce.py
#!/usr/bin/env python
import sys

def read_map_outputs(file):
    # Each mapper output line is "word<TAB>count"; split on the first tab only.
    for line in file:
        yield line.strip().split("\t", 1)

def main():
    current_word = None
    word_count = 0
    lines = read_map_outputs(sys.stdin)
    for word, count in lines:
        try:
            count = int(count)
        except ValueError:
            continue
        if current_word == word:
            word_count += count
        else:
            # Input is sorted by word, so a new word means the
            # previous word's count is complete and can be emitted.
            if current_word:
                print("{}\t{}".format(current_word, word_count))
            current_word = word
            word_count = count
    if current_word:
        print("{}\t{}".format(current_word, word_count))

if __name__ == "__main__":
    main()
test
echo "Hello World Bye World Hello" | ./map.py | sort | ./reduce.py
Bye	1
Hello	2
World	2
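The shell pipeline above can be simulated in a single Python process, which is handy for checking the mapper/reducer logic before submitting anything to a cluster. This is an illustrative sketch (the helper names `map_words`, `reduce_sorted`, and `simulate_pipeline` are invented here, not part of Hadoop); `Counter` provides an independent cross-check of the counts.

```python
#!/usr/bin/env python
from collections import Counter

def map_words(lines):
    # Mapper stage: emit a (word, 1) pair for every word, like map.py.
    for line in lines:
        for word in line.strip().split():
            yield (word, 1)

def reduce_sorted(pairs):
    # Reducer stage: sum consecutive counts per word, like reduce.py.
    # Assumes pairs arrive sorted by word, which is what `sort` guarantees.
    current_word, word_count = None, 0
    for word, count in pairs:
        if word == current_word:
            word_count += count
        else:
            if current_word is not None:
                yield (current_word, word_count)
            current_word, word_count = word, count
    if current_word is not None:
        yield (current_word, word_count)

def simulate_pipeline(lines):
    # In-process equivalent of: cat lines | ./map.py | sort | ./reduce.py
    return dict(reduce_sorted(sorted(map_words(lines))))

counts = simulate_pipeline(["Hello World Bye World Hello"])
print(counts)  # → {'Bye': 1, 'Hello': 2, 'World': 2}
```

The `sorted()` call stands in for the shuffle/sort phase: the reducer only accumulates runs of identical keys, so it is correct only when its input is grouped by word.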
The statistics above rely only on plain Python; the following shows how to run the same scripts under Hadoop.
Execute Python scripts using MapReduce
Find the location of the hadoop-streaming jar
find ./ -name "hadoop-streaming*.jar"
./local/hadoop/share/hadoop/tools/sources/hadoop-streaming-2.7.3-test-sources.jar
./local/hadoop/share/hadoop/tools/sources/hadoop-streaming-2.7.3-sources.jar
./local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar
Create a read-in folder input on HDFS
hadoop fs -mkdir input
Put pending files into HDFS
hadoop fs -put allfiles input
Run command processing
hadoop jar ~/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar -input input -output output -mapper ./map.py -reducer ./reduce.py
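If map.py and reduce.py are not already present on every node, hadoop-streaming's -file option can ship them with the job. A sketch, assuming the same paths as above (this is the standard streaming option, written here with line continuations for readability):

```shell
hadoop jar ~/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar \
    -file ./map.py    -mapper  ./map.py \
    -file ./reduce.py -reducer ./reduce.py \
    -input input -output output
```

Both scripts must be executable (chmod +x) and start with a #!/usr/bin/env python line so the task nodes can launch them directly.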
Processed file
Bye	1
Goodbye	1
Hadoop	2
Hello	2
World	2
That covers how to use Hadoop with Python to implement a word-count statistics function. I hope the content above is of some help; if you still have unanswered questions, you can follow the industry information channel to learn more.