I need to define a test sample with an ArrayType column for Spark to read. Here is what the schema of the data looks like:
|-- data: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- id: integer (nullable = true)
| | |-- stat: float (nullable = true)
|-- naming: string (nullable = true)
With my current definition, the data column comes out null for every row, so how should this data be structured in the CSV file for Spark to parse it? The schema definition I pass to the reader is sketched below.
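For reference, this is roughly the schema I build (reconstructed from the printSchema output above; the exact code in my project may differ slightly):

```scala
import org.apache.spark.sql.types._

// Schema matching the printSchema output above:
// data: array<struct<id:int, stat:float>>, naming: string
val schema = StructType(Seq(
  StructField("data",
    ArrayType(
      StructType(Seq(
        StructField("id", IntegerType, nullable = true),
        StructField("stat", FloatType, nullable = true)
      )),
      containsNull = true
    ),
    nullable = true),
  StructField("naming", StringType, nullable = true)
))
```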
Here is what my CSV file looks like now:
"data1_id","data1_stat","data2_id","data2_stat","data3_id","data3_stat","naming"
"1","0.76","2","0.55","3","0.16","Default1"
"1","0.2","2","0.41","3","0.89","Default2"
"1","0.96","2","0.12","3","0.4","Default3"
"1","0.28","2","0.15","3","0.31","Default4"
"1","0.84","2","0.41","3","0.15","Default5"
When I call show on the input DataFrame, I get this result:
+-------+-----------+
|data |naming |
+-------+-----------+
|null |Default1 |
|null |Default2 |
|null |Default3 |
|null |Default4 |
|null |Default5 |
+-------+-----------+
Expected result:
+----------------------------+-----------+
|data |naming |
+----------------------------+-----------+
|[[1,0.76],[2,0.55],[3,0.16]]|Default1 |
|[[1,0.2],[2,0.41],[3,0.89]] |Default2 |
|[[1,0.96],[2,0.12],[3,0.4]] |Default3 |
|[[1,0.28],[2,0.15],[3,0.31]]|Default4 |
|[[1,0.84],[2,0.41],[3,0.15]]|Default5 |
+----------------------------+-----------+
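To make the target shape concrete, here is a minimal hand-built version of the DataFrame I am trying to end up with (the case class names Entry and Record are mine, purely for illustration; only the first two rows are shown):

```scala
// Case classes mirroring the target schema:
// data: array<struct<id:int, stat:float>>, naming: string
case class Entry(id: Int, stat: Float)
case class Record(data: Seq[Entry], naming: String)

import spark.implicits._

val expected = Seq(
  Record(Seq(Entry(1, 0.76f), Entry(2, 0.55f), Entry(3, 0.16f)), "Default1"),
  Record(Seq(Entry(1, 0.2f),  Entry(2, 0.41f), Entry(3, 0.89f)), "Default2")
).toDF()

expected.printSchema()
expected.show(false)
```

So the question is: what layout must the CSV itself have (or what reading approach should I use) so that the file above produces this nested structure instead of nulls?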