I have a new-line delimited json file that looks like
{"id":1,"nested_col": {"key1": "val1", "key2": "val2", "key3": ["arr1", "arr2"]}}
{"id":2,"nested_col": {"key1": "val1_2", "key2": "val2_2", "key3": ["arr1_2", "arr2"]}}
Once I read the file using df = spark.read.json(path_to_file), I end up with a dataframe whose schema looks like:
DataFrame[id: bigint,nested_col:struct<key1:string,key2:string,key3:array<string>>]
What I want to do is cast nested_col to a string without setting primitivesAsString to true (since I actually have 100+ columns and need the types of all my other columns to be inferred). I also don't know what nested_col looks like beforehand. In other words, I'd like my DataFrame to look like
DataFrame[id: bigint,nested_col:string]
I tried to do
df.select(df['nested_col'].cast('string')).take(1)
but it doesn't return the correct string representation of the JSON:
[Row(nested_col=u'[0,2000000004,2800000004,3000000014,316c6176,326c6176,c00000002,3172726100000010,32727261]')]
whereas I was hoping for:
[Row(nested_col=u'{"key1": "val1", "key2": "val2", "key3": ["arr1", "arr2"]}')]
Does anyone know how I can get the desired result (aka cast a nested JSON field / StructType to a String)?