
How to solve some classic MapReduce problems with Spark


This article is about how to use Spark to solve some classic MapReduce problems. The editor thinks it is very practical, so it is shared here for you to learn from; I hope you get something out of it after reading.

Spark is an Apache project billed as "lightning-fast cluster computing". It has a thriving open source community and is by far the most active Apache project. Spark provides a faster and more general data processing platform: compared to Hadoop, Spark can run your program up to 100 times faster in memory, or 10 times faster on disk. At the same time, Spark makes traditional MapReduce job development easier and faster. This article briefly walks through Spark implementations of several classic Hadoop MapReduce problems, to get you familiar with Spark development.

Maximum and minimum values

Finding the maximum and minimum values has always been a classic Hadoop case. Let's implement it with Spark to get a feel for how the MapReduce idea carries over. Without further ado, on to the code:
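Below is a minimal Scala sketch of both approaches, assuming a local SparkContext and hypothetical sample data (chosen so that the largest value is 1001 and the smallest is 2, matching the expected results below):

import org.apache.spark.{SparkConf, SparkContext}

object MaxMin {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MaxMin").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Hypothetical sample data.
    val num = sc.parallelize(List(2, 100, 1001, 99, 57, 3))

    // Method 1: map every value under a single key, then groupByKey
    // and scan the grouped values for the max and min.
    num.map(n => ("num", n))
      .groupByKey()
      .foreach { case (_, values) =>
        println("Max: " + values.max)
        println("min: " + values.min)
      }

    // Method 2: simpler and faster -- reduce directly,
    // avoiding shuffling every value under one key.
    println("Max: " + num.reduce((a, b) => math.max(a, b)))
    println("min: " + num.reduce((a, b) => math.min(a, b)))

    sc.stop()
  }
}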

Expected results:

Max: 1001
min: 2

The idea is similar to MapReduce in Hadoop: emit (key, value) pairs so that the set of values needing a maximum and minimum is collected under one key, then aggregate with groupByKey to process it. The second method (reducing directly) is simpler and has better performance.

Average value problem

Finding the average value for each key is a common case. In Spark, the function combineByKey is often used to deal with this kind of problem; for more information, look up its usage. See the code below:
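A minimal sketch, assuming the same local SparkContext as above and hypothetical (key, value) sample data:

// Hypothetical sample data: (key, value) pairs.
val pairs = sc.parallelize(List(("a", 1), ("a", 3), ("b", 2), ("b", 4), ("b", 6)))

// Build a (sum, count) accumulator per key within each partition,
// merge the accumulators across partitions after the shuffle,
// then divide sum by count to get the mean.
val sumCount = pairs.combineByKey(
  (v: Int) => (v, 1),                                           // createCombiner
  (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),        // mergeValue
  (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2)  // mergeCombiners
)
val averages = sumCount.mapValues { case (sum, count) => sum.toDouble / count }
averages.collect().foreach(println)  // (a,2.0) and (b,4.0) with this sample data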

Within each partition, for every key we compute the sum of all its integers and their count, returning a (sum, count) pair. After the shuffle, the sums and counts for each key are accumulated, and dividing the sum by the count yields the mean.

TopN problem

The top-N problem is another classic Hadoop case that embodies the MapReduce idea. So how can it be solved conveniently and quickly in Spark?

The idea is simple: group the data by key with groupByKey, sort each group's values in descending order, and take the top 2 of each group, as in the sketch below (the expected results for the sample data are shown in the comments):
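A minimal sketch, again assuming the local SparkContext from above and hypothetical (key, score) sample data:

// Hypothetical sample data: (key, score) pairs.
val scores = sc.parallelize(List(
  ("a", 10), ("a", 40), ("a", 30),
  ("b", 25), ("b", 5), ("b", 60)
))

// Group all values for each key, sort descending, keep the top 2.
val top2 = scores.groupByKey().mapValues { values =>
  values.toList.sortWith(_ > _).take(2)
}

top2.collect().foreach(println)
// Expected results with this sample data (group order may vary):
// (a,List(40, 30))
// (b,List(60, 25))

Note that groupByKey pulls every value for a key into one place; for keys with very many values, an aggregation that keeps only N elements per key would scale better.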

The above is how to use Spark to solve some classic MapReduce problems. The editor believes these are knowledge points you may see or use in your daily work, and hopes you can learn more from this article.
