
I'm getting the following error while trying to build an ML Pipeline:

pyspark.sql.utils.IllegalArgumentException: 'requirement failed: Column features must be of type org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7 but was actually ArrayType(DoubleType,true).'

My features column contains an array of floating point values. It sounds like I need to convert those to some type of vector (it's not sparse, so a DenseVector?). Is there a way to do this directly on the DataFrame or do I need to convert to an RDD?

  • I was stuck on the same thing while calculating the vector norm. I used array_to_vector from pyspark.ml.functions to convert the array column to a vector type. Only available in pyspark>=3.1.0. More details here: stackoverflow.com/a/48333361/2650427. Commented Jun 1, 2023 at 14:08

1 Answer


You can use a UDF:

udf(lambda vs: Vectors.dense(vs), VectorUDT())

In Spark < 2.0 import:

from pyspark.mllib.linalg import Vectors, VectorUDT

In Spark 2.0+ import:

from pyspark.ml.linalg import Vectors, VectorUDT

Note that these classes are not compatible despite having identical implementations.

It is also possible to extract the individual features and assemble them with VectorAssembler. Assuming the input column is called features:

from pyspark.ml.feature import VectorAssembler

n = ...  # number of elements in the features array

assembler = VectorAssembler(
    inputCols=["features[{0}]".format(i) for i in range(n)],
    outputCol="features_vector")

# expand the array into one column per element, then assemble them into a vector
assembler.transform(df.select(
    "*", *(df["features"].getItem(i) for i in range(n))
))