How to flatten Array of Strings into multiple rows of a dataframe in Spark 2.2.0?
Input Row ["foo", "bar"]
val inputDS = Seq("""["foo", "bar"]""").toDF
inputDS.printSchema()
root
|-- value: string (nullable = true)
Input Dataset inputDS
inputDS.show(false)
value
-----
["foo", "bar"]
Expected output dataset outputDS
value
-------
"foo"
"bar"
I tried the explode function with from_json as below, but it didn't work
inputDS.select(explode(from_json(col("value"), ArrayType(StringType))))
and I get the following error
org.apache.spark.sql.AnalysisException: cannot resolve 'jsontostructs(`value`)' due to data type mismatch: Input schema string must be a struct or an array of structs
Also tried the following
inputDS.select(explode(col("value")))
And I get the following error
org.apache.spark.sql.AnalysisException: cannot resolve 'explode(`value`)' due to data type mismatch: input to function explode should be array or map type, not StringType
Comment: Regarding the from_json part — you could use the split function together with explode. Can you check the input again and update the question?
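Following the split suggestion: on Spark 2.2.0, `from_json` only accepts a struct or array-of-structs schema, so one workaround is to strip the brackets/quotes with `regexp_replace`, `split` on commas, and then `explode`. In Spark that would look roughly like `inputDS.select(explode(split(regexp_replace(col("value"), """[\[\]"\s]""", ""), ",")))` (a sketch, not verified against a running cluster). Below is a plain-Scala illustration of the per-row string handling this performs, so the parsing logic itself can be checked without a Spark session (the object and method names are made up for the example):

```scala
// Sketch of what regexp_replace + split would do to each row's string,
// before explode turns the resulting array into separate rows.
object ParseSketch {
  def parseRow(raw: String): Array[String] =
    raw.replaceAll("""[\[\]"\s]""", "") // drop brackets, quotes, whitespace
       .split(",")                      // split on commas -> array of items

  def main(args: Array[String]): Unit = {
    // Same input string as in the question
    ParseSketch.parseRow("""["foo", "bar"]""").foreach(println)
  }
}
```

Note this naive regex would also strip quotes and whitespace that are part of the data itself, so adjust it to your actual input.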