You can try it this way in Spark:
scala> import org.apache.spark.sql.functions._
scala> val dfdd = Seq((123,"USD","Principal",1000,100),(123,"EUR","Principal",2000,50),(123,"USD","Interest",2000,100)).toDF("ID","currency","account_name","principal","interest")
scala> dfdd.show()
+---+--------+------------+---------+--------+
| ID|currency|account_name|principal|interest|
+---+--------+------------+---------+--------+
|123|     USD|   Principal|     1000|     100|
|123|     EUR|   Principal|     2000|      50|
|123|     USD|    Interest|     2000|     100|
+---+--------+------------+---------+--------+
scala> val dfdd2 = dfdd.groupBy("ID","account_name").pivot("currency").agg(collect_list("principal"))
scala> dfdd2.show() // .show() added only for understanding purposes
+---+------------+------+------+
| ID|account_name|   EUR|   USD|
+---+------------+------+------+
|123|    Interest|    []|[2000]|
|123|   Principal|[2000]|[1000]|
+---+------------+------+------+
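As a side note: if the set of currencies is known up front, you can pass it to pivot explicitly, which spares Spark an extra job to discover the distinct values. A minimal variant of the same step (dfdd2b is just an illustrative name):
scala> val dfdd2b = dfdd.groupBy("ID","account_name").pivot("currency", Seq("EUR","USD")).agg(collect_list("principal"))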
scala> val dfdd3 = dfdd2.withColumn("account_type",struct($"account_name",$"EUR",$"USD")).drop("EUR","USD","account_name").groupBy("id").agg(collect_list("account_type").as("test"))
scala> dfdd3.toJSON.show(false)
+----------------------------------------------------------------------------------------------------------------------------+
|value                                                                                                                       |
+----------------------------------------------------------------------------------------------------------------------------+
|{"id":123,"test":[{"account_name":"Interest","EUR":[],"USD":[2000]},{"account_name":"Principal","EUR":[2000],"USD":[1000]}]}|
+----------------------------------------------------------------------------------------------------------------------------+
This is the same JSON format as your desired output.
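If you need the JSON persisted rather than just displayed, a write along these lines should do it (the output path /tmp/dfdd3_json is just a placeholder, adjust as needed):
scala> dfdd3.write.mode("overwrite").json("/tmp/dfdd3_json")
This writes the same records shown above, one JSON object per line.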

Have a look and let me know if you have any questions about it.