I'm looking for an efficient way to explode the rows of the PySpark DataFrame df_input into columns. I don't understand the '@{name=...}' format and don't know where to start decoding it. Thanks for any help!
df_input = sqlContext.createDataFrame(
    [
        (1, '@{name= Hans; age= 45}'),
        (2, '@{name= Jeff; age= 15}'),
        (3, '@{name= Elona; age= 23}')
    ],
    ('id', 'firstCol')
)
expected result:
+---+-----+---+
| id| name|age|
+---+-----+---+
| 1| Hans| 45|
| 2| Jeff| 15|
| 3|Elona| 23|
+---+-----+---+
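For reference, here is a minimal pure-Python sketch of how I currently assume the format decodes (assuming the value is always '@{...}' wrapping ';'-separated key=value pairs); I imagine something like this could be wrapped in a UDF, but I'm hoping there is a more efficient built-in approach:

```python
def parse_record(s):
    # Assumed format: '@{key1= value1; key2= value2; ...}'
    # Strip the leading '@{' and trailing '}', split on ';',
    # then split each pair on the first '=' and trim whitespace.
    body = s.strip()[2:-1]
    pairs = (item.split('=', 1) for item in body.split(';'))
    return {k.strip(): v.strip() for k, v in pairs}

print(parse_record('@{name= Hans; age= 45}'))
# {'name': 'Hans', 'age': '45'}
```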