How to parse the method of getting a list from a string in PySpark

2025-02-21 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

How do you parse a string column and get a list from it in PySpark? This article presents a detailed analysis and solution to the problem, in the hope of helping readers who face it find a simple, workable approach.

Is there a function in PySpark similar to Python's eval?

I am trying to convert Python code to PySpark.

I am querying a DataFrame, and one of its columns holds data as shown below, but in string format.

[{u'date': u'2015-02-08', u'by': u'abc@gg.com', u'value': u'NA'}, {u'date': u'2016-02-08', u'by': u'dfg@yaa.com', u'value': u'applicable'}, {u'date': u'2017-02-08', u'by': u'wrwe@hot.com', u'value': u'ufc'}]

Suppose "x" is the name of the column that holds this value in the DataFrame.

Now, I want to parse the string column "x" and get the list back, so that I can pass it to the mapPartitions function.

I want to avoid iterating over every row on the driver, which is why I am thinking along these lines.

Using Python's eval() function, I get the desired output:

    x = "[{u'date': u'2015-02-08', u'by': u'abc@gg.com', u'value': u'NA'}, {u'date': u'2016-02-08', u'by': u'dfg@yaa.com', u'value': u'applicable'}, {u'date': u'2017-02-08', u'by': u'wrwe@hot.com', u'value': u'ufc'}]"
    list = eval(x)
    for i in list:
        print i

Output: (this is also what I want in PySpark)

{u'date': u'2015-02-08', u'by': u'abc@gg.com', u'value': u'NA'}

{u'date': u'2016-02-08', u'by': u'dfg@yaa.com', u'value': u'applicable'}

{u'date': u'2017-02-08', u'by': u'wrwe@hot.com', u'value': u'ufc'}
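Before moving to PySpark, it is worth noting that plain Python has a safer equivalent of eval for this kind of data: ast.literal_eval, which parses only literals and refuses to execute arbitrary code. A minimal sketch, using the example string from above:

```python
import ast

# The string value of the column, as in the example above.
x = ("[{u'date': u'2015-02-08', u'by': u'abc@gg.com', u'value': u'NA'}, "
     "{u'date': u'2016-02-08', u'by': u'dfg@yaa.com', u'value': u'applicable'}, "
     "{u'date': u'2017-02-08', u'by': u'wrwe@hot.com', u'value': u'ufc'}]")

# ast.literal_eval accepts only Python literals (strings, numbers,
# lists, dicts, ...), so unlike eval it cannot run arbitrary code
# hidden in a malformed row.
records = ast.literal_eval(x)
for rec in records:
    print(rec)
```

This produces the same three dictionaries as the eval() version, without eval's security risk.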

How do you do this in PySpark?
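One common approach (a sketch, not spelled out in the original question) is to wrap the parsing in a Python UDF so it runs on the executors rather than the driver. The parsing function below is pure Python and testable on its own; the Spark wiring, shown in comments, assumes a DataFrame named df with the string column "x":

```python
import ast

def parse_x(s):
    """Parse the string column into a list of dicts; return [] on bad input."""
    try:
        return ast.literal_eval(s)
    except (ValueError, SyntaxError):
        return []

# In a Spark job (sketch; df and column name "x" are assumptions):
#
#   from pyspark.sql.functions import udf, explode
#   from pyspark.sql.types import ArrayType, MapType, StringType
#
#   parse_udf = udf(parse_x, ArrayType(MapType(StringType(), StringType())))
#   exploded = df.select(explode(parse_udf(df["x"])).alias("rec"))
#   exploded.select(exploded["rec"]["date"].alias("date"),
#                   exploded["rec"]["by"].alias("by"),
#                   exploded["rec"]["value"].alias("value")).show()
#
# The parsing happens inside the UDF on the executors, which avoids
# iterating over rows on the driver.

row = "[{u'date': u'2015-02-08', u'by': u'abc@gg.com', u'value': u'NA'}]"
print(parse_x(row))
```

An alternative, if the strings were valid JSON (double quotes, no u prefixes), would be pyspark.sql.functions.from_json with an explicit schema; for Python-literal strings like these, literal_eval in a UDF is the more direct fit.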

Extended example:

df.schema: StructType(List(StructField(id,StringType,true), StructField(recs,StringType,true)))

| id  | recs                                       |
| ABC | [66, [["AB", 10]]]                         |
| XYZ | [66, [["XY", 10], ["YZ", 20]]]             |
| DEF | [66, [["DE", 10], ["EF", 20], ["FG", 30]]] |

I'm trying to flatten these lists.

| id  | like_id |
| ABC | AB      |
| XYZ | XY      |
| XYZ | YZ      |
| DEF | DE      |
| DEF | EF      |
| DEF | FG      |
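The flattening can be done the same way: parse each recs string with ast.literal_eval and emit one (id, like_id) row per inner pair. A minimal pure-Python sketch (the inner list structure [66, [[like_id, score], ...]] is read off the example data above; in Spark the same generator could feed df.rdd.flatMap):

```python
import ast

def flatten_recs(id_, recs_str):
    """Yield (id, like_id) pairs from a recs string like '[66, [["AB", 10], ["YZ", 20]]]'."""
    parsed = ast.literal_eval(recs_str)
    # parsed[0] is the leading 66; parsed[1] is the list of [like_id, score] pairs.
    for like_id, _score in parsed[1]:
        yield (id_, like_id)

# In Spark (sketch): df.rdd.flatMap(lambda r: flatten_recs(r.id, r.recs)).toDF(["id", "like_id"])

rows = [
    ("ABC", '[66, [["AB", 10]]]'),
    ("XYZ", '[66, [["XY", 10], ["YZ", 20]]]'),
    ("DEF", '[66, [["DE", 10], ["EF", 20], ["FG", 30]]]'),
]
flat = [pair for id_, recs in rows for pair in flatten_recs(id_, recs)]
print(flat)
```

This reproduces the id/like_id table shown above, one output row per inner pair.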

This concludes the answer to the question of how to parse a string column into a list in PySpark. I hope the above content has been of some help. If you still have questions, you can follow the industry information channel for more related knowledge.
