This article shows how Spark's HashPartitioner can be reproduced in Python. The explanation is fairly detailed; interested readers are welcome to follow along, and I hope you find it helpful.
The default partitioner in Spark is org.apache.spark.HashPartitioner, whose source code is as follows:
class HashPartitioner(partitions: Int) extends Partitioner {
  require(partitions >= 0, s"Number of partitions ($partitions) cannot be negative.")

  def numPartitions: Int = partitions

  def getPartition(key: Any): Int = key match {
    case null => 0
    case _ => Utils.nonNegativeMod(key.hashCode, numPartitions)
  }

  override def equals(other: Any): Boolean = other match {
    case h: HashPartitioner => h.numPartitions == numPartitions
    case _ => false
  }

  override def hashCode: Int = numPartitions
}
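getPartition delegates to Utils.nonNegativeMod, which folds a possibly negative hashCode into the range [0, numPartitions). As a point of reference, here is a minimal Python sketch of that helper (my own rendering for illustration, not part of Spark or of the original article):

def non_negative_mod(x, mod):
    # Map a possibly negative hash code into [0, mod),
    # mirroring Spark's Utils.nonNegativeMod.
    raw_mod = x % mod
    return raw_mod + (mod if raw_mod < 0 else 0)

In Python the extra branch never fires, because x % mod is already non-negative when mod is positive; it is kept only to mirror the Scala helper line by line.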
To find the partition a key maps to from Python, you only need to reproduce Java's hashCode for the key and then take the modulo of the number of partitions.
For string keys, hashCode can be implemented as follows:
def java_string_hashcode(s):
    h = 0
    for c in s:
        h = (31 * h + ord(c)) & 0xFFFFFFFF
    return ((h + 0x80000000) & 0xFFFFFFFF) - 0x80000000
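Putting the two pieces together, the partition of a string key can be computed outside the JVM. The get_partition helper below is a sketch built on java_string_hashcode and the non_negative_mod sketch above; it is not a PySpark API:

def get_partition(key, num_partitions):
    # Same rule as HashPartitioner.getPartition: a null key goes to
    # partition 0, anything else to hashCode mod numPartitions.
    if key is None:
        return 0
    return non_negative_mod(java_string_hashcode(key), num_partitions)

For example, java_string_hashcode("abc") returns 96354 (the same value as Java's "abc".hashCode), so get_partition("abc", 8) returns 2, matching what HashPartitioner(8).getPartition("abc") computes on the Scala side.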
Verification
Scala implementation: the reference values can be read off in spark-shell, for example key.hashCode and new org.apache.spark.HashPartitioner(numPartitions).getPartition(key).
Python implementation
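The same check can be scripted on the Python side. The snippet below is a small sketch using java_string_hashcode and the get_partition helper sketched above; the printed hash codes and partitions should agree with what key.hashCode and getPartition report in spark-shell for the same keys and partition count:

if __name__ == "__main__":
    num_partitions = 8
    for key in ["abc", "spark", "hello world"]:
        h = java_string_hashcode(key)
        p = get_partition(key, num_partitions)
        print(f"{key!r}: hashCode={h}, partition={p}")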
That is how Spark's HashPartitioner can be reproduced in Python. I hope the above content is helpful and that you were able to learn something from it. If you found the article useful, feel free to share it so more people can see it.