
Initially I have a matrix:

 0.0  0.4  0.4  0.0 
 0.1  0.0  0.0  0.7 
 0.0  0.2  0.0  0.3 
 0.3  0.0  0.0  0.0

The matrix is converted into `normal_array` by

`val normal_array = matrix.toArray`  

I also have an array of strings:

inputCols : Array[String] = Array(p1, p2, p3, p4)
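
For context, a minimal sketch of how the setup above might look, assuming the matrix is a Spark ML `DenseMatrix` (`org.apache.spark.ml.linalg`); note that `Matrices.dense` takes its values in column-major order, and `toArray` returns them the same way, which matters when you later rebuild rows:

    import org.apache.spark.ml.linalg.Matrices

    // Values are column-major: first column (0.0, 0.1, 0.0, 0.3), and so on
    val matrix = Matrices.dense(4, 4, Array(
      0.0, 0.1, 0.0, 0.3,   // column p1
      0.4, 0.0, 0.2, 0.0,   // column p2
      0.4, 0.0, 0.0, 0.0,   // column p3
      0.0, 0.7, 0.3, 0.0))  // column p4

    val normal_array = matrix.toArray   // column-major Array[Double] of length 16
    val inputCols: Array[String] = Array("p1", "p2", "p3", "p4")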

I need to convert this matrix into the following DataFrame. (Note: the number of rows and columns in the matrix will be the same as the length of inputCols.)

index  p1   p2   p3   p4
 p1    0.0  0.4  0.4  0.0 
 p2    0.1  0.0  0.0  0.7 
 p3    0.0  0.2  0.0  0.3 
 p4    0.3  0.0  0.0  0.0

In Python, this can easily be achieved with the pandas library:

arrayToDataframe = pandas.DataFrame(normal_array, columns=inputCols, index=inputCols)

But how can I do this in Scala?

  • Can you provide sample output for your requirement, as you explained in the comment below? Commented Jun 26, 2018 at 7:30
  • Yes, I have edited my question with an example. Commented Jun 26, 2018 at 9:11
  • It is very simple to do in Python. I provided my solution using Scala. Commented Jun 26, 2018 at 13:23

3 Answers


You can do something like the following:

    import org.apache.spark.sql.functions.{col, monotonically_increasing_id, udf}

    // Convert your data to a Scala Seq/List/Array
    val list = Seq((0.0,0.4,0.4,0.0),(0.1,0.0,0.0,0.7),(0.0,0.2,0.0,0.3),(0.3,0.0,0.0,0.0))

    // Define your Array of desired column names
    val inputCols: Array[String] = Array("p1", "p2", "p3", "p4")

    // Create a DataFrame from the data; it comes with generated column names like _1, _2, etc.
    val df = sparkSession.createDataFrame(list)

    // Build and fire a select expression that renames the generated columns to the desired names
    val dfColumns = df.columns
    val query = dfColumns.zip(inputCols).map { case (oldName, newName) => s"$oldName as $newName" }
    val newDf = df.selectExpr(query: _*)

    // UDF that maps a row id (0, 1, 2, 3) to the corresponding column name.
    // monotonically_increasing_id produces Longs, hence the Long parameter; the ids are
    // only 0..n-1 when the data sits in a single partition (see the comment below).
    val getIndexUDF = udf((row_no: Long) => inputCols(row_no.toInt))

    // Add a temporary row_no column, derive the index column from it, then drop it
    val dfWithRow = newDf
      .withColumn("row_no", monotonically_increasing_id())
      .withColumn("index", getIndexUDF(col("row_no")))
      .drop("row_no")

    dfWithRow.show

Sample Output:

+---+---+---+---+-----+
| p1| p2| p3| p4|index|
+---+---+---+---+-----+
|0.0|0.4|0.4|0.0|   p1|
|0.1|0.0|0.0|0.7|   p2|
|0.0|0.2|0.0|0.3|   p3|
|0.3|0.0|0.0|0.0|   p4|
+---+---+---+---+-----+

1 Comment

monotonically_increasing_id: the generated ID is guaranteed to be monotonically increasing and unique, but not consecutive.
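
If consecutive indices are required, a sketch (not part of the answer above) using `RDD.zipWithIndex`, which does assign consecutive 0-based indices, could replace the `monotonically_increasing_id` step; the names `newDf`, `inputCols`, and `spark` follow the answer above:

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types.{StringType, StructField, StructType}

    // zipWithIndex numbers the rows 0..n-1, so inputCols(i) is always in bounds
    val withLabel = newDf.rdd.zipWithIndex.map { case (row, i) =>
      Row.fromSeq(row.toSeq :+ inputCols(i.toInt))
    }
    val schema = StructType(newDf.schema.fields :+ StructField("index", StringType, nullable = false))
    val indexedDf = spark.createDataFrame(withLabel, schema)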

Here is another way:

val data = Seq((0.0,0.4,0.4,0.0),(0.1,0.0,0.0,0.7),(0.0,0.2,0.0,0.3),(0.3,0.0,0.0,0.0))
val cols = Array("p1", "p2", "p3", "p4", "index")

Zip the rows with the column names and convert the result into a DataFrame. The zip pairs each of the four rows with p1 through p4, which become the index labels; the fifth name, "index", is only consumed by toDF:

// toDF needs the SparkSession implicits in scope (assuming a session named spark)
import spark.implicits._

data.zip(cols).map {
  case (row, label) => (row._1, row._2, row._3, row._4, label)
}.toDF(cols: _*)

Output:

+---+---+---+---+-----+
|p1 |p2 |p3 |p4 |index|
+---+---+---+---+-----+
|0.0|0.4|0.4|0.0|p1   |
|0.1|0.0|0.0|0.7|p2   |
|0.0|0.2|0.0|0.3|p3   |
|0.3|0.0|0.0|0.0|p4   |
+---+---+---+---+-----+
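
If you are starting from the flat `normal_array` in the question rather than a `Seq` of tuples, here is a sketch (an addition, not from this answer) that builds the DataFrame with an explicit schema; it assumes the array is laid out row-major, so adjust the grouping if your `Matrix.toArray` returned column-major data:

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types.{DoubleType, StringType, StructField, StructType}

    val n = inputCols.length
    // Slice the flat array into rows of n values and append each row's label
    val rows = normal_array.grouped(n).toSeq.zip(inputCols).map {
      case (values, label) => Row.fromSeq(values.toSeq :+ label)
    }
    val schema = StructType(
      inputCols.map(c => StructField(c, DoubleType, nullable = false)) :+
        StructField("index", StringType, nullable = false))
    val df = spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)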

1 Comment

I liked this.

For Spark > 2.4.5, a newer and shorter version looks like the following. See the inline descriptions of the statements.

    import org.apache.spark.sql.{SparkSession, functions}

    val spark = SparkSession.builder()
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val cols = (1 to 4).map(i => s"p$i")

    val listDf = Seq((0.0,0.4,0.4,0.0),(0.1,0.0,0.0,0.7),(0.0,0.2,0.0,0.3),(0.3,0.0,0.0,0.0))
      .toDF(cols: _*)        // map the data to the new column names
      .withColumn("index",   // build a label from an auto-increasing id (0-based, hence the + 1)
        functions.concat(functions.lit("p"), functions.monotonically_increasing_id() + 1))

    listDf.show()
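
Because `monotonically_increasing_id` is not guaranteed to yield consecutive ids, a common alternative (an assumption on my part, not part of this answer) is `row_number` over a window, which is 1-based and consecutive regardless of partitioning, at the cost of pulling everything through one partition:

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{concat, lit, monotonically_increasing_id, row_number}

    // Ordering by the monotonically increasing id preserves the original row order
    val w = Window.orderBy(monotonically_increasing_id())
    val safeDf = Seq((0.0,0.4,0.4,0.0),(0.1,0.0,0.0,0.7),(0.0,0.2,0.0,0.3),(0.3,0.0,0.0,0.0))
      .toDF(cols: _*)
      .withColumn("index", concat(lit("p"), row_number().over(w)))   // p1 .. p4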

