
The Foundation of Spark: the Generation of RDDs

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

You can create an RDD with parallelize, or alternatively with makeRDD.

Looking at the source code shows that makeRDD simply calls parallelize when it executes, so there is no difference between the two.
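The slicing behavior can be sketched in pure Python (a model of the idea, not Spark's actual Scala code): parallelize cuts a local collection into numSlices contiguous partitions, and makeRDD just delegates to it. The position arithmetic below mirrors the index formula used by Spark's ParallelCollectionRDD.

```python
# A pure-Python model (not Spark itself) of how parallelize / makeRDD
# slice a local collection into partitions. In the Spark source, makeRDD
# simply delegates to parallelize, so both produce the same slices.

def parallelize(seq, num_slices):
    """Slice `seq` into `num_slices` contiguous partitions.

    Partition i covers indices [i*n // num_slices, (i+1)*n // num_slices),
    mirroring the position arithmetic in ParallelCollectionRDD.slice.
    """
    n = len(seq)
    return [
        seq[(i * n) // num_slices : ((i + 1) * n) // num_slices]
        for i in range(num_slices)
    ]

def make_rdd(seq, num_slices):
    # Same as parallelize, just like makeRDD in the Spark source.
    return parallelize(seq, num_slices)

parallelize([1, 2, 3, 4, 5], 2)  # -> [[1, 2], [3, 4, 5]]
```

Note how the last partition absorbs the remainder when the length does not divide evenly.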

Through sc.textFile, you can read files both from the local project path and from an HDFS path.
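The reason one method handles both kinds of path is that the path is dispatched on its URI scheme. A simplified, hypothetical model of that dispatch (the function name and default are illustrative, not Spark API):

```python
# A hypothetical model of how a textFile-style API decides where to read
# from: the path's URI scheme picks the filesystem, so local project
# paths and hdfs:// paths go through the same entry point.
from urllib.parse import urlparse

def resolve_source(path, default_fs="file"):
    """Return which filesystem a path would be read from."""
    scheme = urlparse(path).scheme
    return scheme if scheme else default_fs

resolve_source("data/words.txt")                       # -> "file" (local project path)
resolve_source("hdfs://namenode:9000/user/words.txt")  # -> "hdfs"
```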

*

The second parameter of makeRDD and parallelize is the number of partitions, i.e. the degree of parallelism.

When it is not given, the default value is obtained through

conf.getInt("spark.default.parallelism", math.max(totalCoreCount.get(), 2))

That is, Spark looks up the value of the spark.default.parallelism parameter:

when the parameter is set, its configured value is used;

when the parameter is not set, the total number of available cores is compared with 2, and the larger of the two is used.
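The fallback above can be modeled in a few lines of Python (a sketch of the conf.getInt behavior described in the text, not Spark's actual code; the dict stands in for SparkConf):

```python
# A pure-Python model of the default-parallelism lookup: use
# spark.default.parallelism when it is configured, otherwise take
# max(total available cores, 2), mirroring
# conf.getInt("spark.default.parallelism", math.max(totalCoreCount.get(), 2)).

def default_parallelism(conf, total_core_count):
    return int(conf.get("spark.default.parallelism", max(total_core_count, 2)))

default_parallelism({}, 8)                                  # -> 8 (cores win over 2)
default_parallelism({}, 1)                                  # -> 2 (the floor of 2 wins)
default_parallelism({"spark.default.parallelism": "4"}, 8)  # -> 4 (configured value wins)
```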

*

The second parameter of textFile is the minimum number of partitions (textFile splits data according to the same rules Hadoop uses to split files).

When it is not given, the default is conf.getInt("spark.default.parallelism", math.min(totalCoreCount.get(), 2)).

That is, when the parameter is not set, the total number of available cores is compared with 2 and the smaller is used. This value is not necessarily the final number of partitions; that depends on the slicing rules Hadoop applies when reading the file.

Looking at the source code, we can see that the underlying call is hadoopFile, so assume the default value of the parameter is 2.

The file is then sliced by Hadoop. For example, if the file holds 5 bytes of data, Hadoop slicing divides it into pieces of 2, 2 and 1 bytes, giving 3 partitions.

*

The number of partition files written by the save functions (e.g. saveAsTextFile), that is, how the output text data is split up, depends on the degree of parallelism of the run.
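A hypothetical model of that output layout (a sketch, not Spark's actual writer): saving produces one part-NNNNN file per partition, so the file count equals the RDD's parallelism.

```python
# A hypothetical model of saveAsTextFile's output layout: the output
# directory gets one part-NNNNN file per partition, so the number of
# files equals the RDD's number of partitions (its parallelism).

def save_as_text_file(partitions):
    """Return the part-file names that saving `partitions` would produce."""
    return ["part-%05d" % i for i, _ in enumerate(partitions)]

save_as_text_file([[1, 2], [3, 4, 5]])  # -> ['part-00000', 'part-00001']
```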
