The following code reads a CSV file into a DataFrame in Scala:
val mDF: DataFrame = spark.read.csv("src/test/resources/knimeMerged.csv")
However, it treats the first row of the imported data as a data row, even though that row actually contains the headers, and it assigns the default DataFrame column names instead (e.g., _c0, _c1).
I assume there is an option to treat the first row of a CSV file as headers, but I cannot find it in the Scala API docs (I'm new to Scala and its documentation).
Any hints on what the option is and how to use it would be appreciated.
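For context, Spark's DataFrameReader accepts key-value settings through its `option` method, and a `"header"` option controls whether the first row is read as column names. Below is a minimal, self-contained sketch of what I believe the fix looks like; it writes a small stand-in CSV to a temp file (since `knimeMerged.csv` is specific to my project), and the `inferSchema` setting is an optional extra:

```scala
import java.nio.file.Files
import org.apache.spark.sql.{DataFrame, SparkSession}

// Local session for illustration; in my project this already exists.
val spark = SparkSession.builder()
  .appName("csv-header-example")
  .master("local[*]")
  .getOrCreate()

// Small stand-in for knimeMerged.csv so the example is self-contained.
val csvPath = Files.createTempFile("knimeMerged", ".csv")
Files.write(csvPath, "id,name\n1,alice\n2,bob\n".getBytes)

// "header" -> "true" makes Spark use the first row as column names
// instead of the defaults _c0, _c1, ...
val mDF: DataFrame = spark.read
  .option("header", "true")
  .option("inferSchema", "true") // optional: infer column types from the data
  .csv(csvPath.toString)
```

After this, `mDF.columns` should contain the names from the first row (`id`, `name`) rather than `_c0`, `_c1`.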