
I have a dataframe df containing a struct-array column properties (an array column whose elements are structs with fields x and y), and I want to create a new array column by extracting the x values from properties.

A sample input dataframe would be something like this:

import pyspark.sql.functions as F
from pyspark.sql import SparkSession
from pyspark.sql.types import *

spark = SparkSession.builder.getOrCreate()

data = [
    (1, [{'x': 11, 'y': 'str1a'}]),
    # keys not declared in the schema (here 'z') are silently dropped
    (2, [{'x': 21, 'y': 'str2a'}, {'x': 22, 'y': '0.22', 'z': 'str2b'}]),
]
my_schema = StructType([
    StructField('id', LongType()),
    StructField('properties', ArrayType(
        StructType([
            StructField('x', LongType()),
            StructField('y', StringType()),
        ])
    )),
])

df = spark.createDataFrame(data, schema=my_schema)
df.show()
# +---+--------------------+
# | id|          properties|
# +---+--------------------+
# |  1|       [[11, str1a]]|
# |  2|[[21, str2a], [22...|
# +---+--------------------+

df.printSchema()
# root
#  |-- id: long (nullable = true)
#  |-- properties: array (nullable = true)
#  |    |-- element: struct (containsNull = true)
#  |    |    |-- x: long (nullable = true)
#  |    |    |-- y: string (nullable = true)

The desired output df_new, on the other hand, should look like this:

df_new.show()
# +---+--------------------+--------+
# | id|          properties|x_values|
# +---+--------------------+--------+
# |  1|       [[11, str1a]]|    [11]|
# |  2|[[21, str2a], [22...|[21, 22]|
# +---+--------------------+--------+

df_new.printSchema()
# root
#  |-- id: long (nullable = true)
#  |-- properties: array (nullable = true)
#  |    |-- element: struct (containsNull = true)
#  |    |    |-- x: long (nullable = true)
#  |    |    |-- y: string (nullable = true)
#  |-- x_values: array (nullable = true)
#  |    |-- element: long (containsNull = true)

Does anybody know a solution for such a task?

Ideally, I am looking for a solution that operates row by row without relying on F.explode. In my actual data I have not yet identified an equivalent of the id column, so after calling F.explode I would not know how to coalesce the exploded values back together.
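For contrast, here is a rough sketch of that explode-based route, assuming a surrogate key generated with F.monotonically_increasing_id to regroup the exploded rows (the names _row_id, df_exploded and df_regrouped are made up for illustration):

import pyspark.sql.functions as F

# Explode-based alternative: it needs some key to group the exploded rows
# back together, so a surrogate key is generated here (an assumption; the
# real data may already have a usable key).
# Note: explode drops rows whose properties array is null or empty.
df_exploded = (
    df.withColumn("_row_id", F.monotonically_increasing_id())
      .withColumn("prop", F.explode("properties"))
)

df_regrouped = (
    df_exploded
      .groupBy("_row_id")
      .agg(
          F.first("id").alias("id"),
          F.first("properties").alias("properties"),
          F.collect_list("prop.x").alias("x_values"),
      )
      .drop("_row_id")
)

df_regrouped.show(truncate=False)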

1 Answer


Try using properties.x; it extracts all the x values from the properties array into a new array column.

Example:

df.withColumn("x_values",col("properties.x")).show(10,False)

#or by using higher order functions
df.withColumn("x_values",expr("transform(properties,p -> p.x)")).show(10,False)

#+---+-------------------------+--------+
#|id |properties               |x_values|
#+---+-------------------------+--------+
#|1  |[[11, str1a]]            |[11]    |
#|2  |[[21, str2a], [22, 0.22]]|[21, 22]|
#+---+-------------------------+--------+
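On Spark 3.1+ (an assumption; the answer above only shows the SQL-expression form), transform is also exposed directly in pyspark.sql.functions, so the same higher-order function can be written with a Python lambda:

import pyspark.sql.functions as F

# Python-side equivalent of expr("transform(properties, p -> p.x)"),
# available since Spark 3.1.
df_new = df.withColumn("x_values", F.transform("properties", lambda p: p["x"]))
df_new.show(truncate=False)

Either way, col("properties.x") is the simplest option when you only need to project a single field; transform is useful when each element also needs some per-element computation.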