I have a dataframe yearDF, created by reading an RDBMS table as shown below:
val yearDF = spark.read.format("jdbc")
  .option("url", connectionUrl)
  .option("dbtable", s"(${query}) as year2017")
  .option("user", devUserName)
  .option("password", devPassword)
  .option("numPartitions", 15)
  .load()
I have to apply a regex pattern to the above dataframe before ingesting it into a Hive table on HDFS. Below is the regex pattern:
regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(%s, E'[\\\\n]+', ' ', 'g' ), E'[\\\\r]+', ' ', 'g' ), E'[\\\\t]+', ' ', 'g' ), E'[\\\\cA]+', ' ', 'g' ), E'[\\\\ca]+', ' ', 'g' )
I should apply this regex only to the columns of the dataframe yearDF whose datatype is String. I tried the following:
val regExpr = yearDF.schema.fields
  .map(x =>
    if (x.dataType == String)
      "regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(%s, E'[\\\\n]+', ' ', 'g' ), E'[\\\\r]+', ' ', 'g' ), E'[\\\\t]+', ' ', 'g' ), E'[\\\\cA]+', ' ', 'g' ), E'[\\\\ca]+', ' ', 'g' ) as %s".format(x, x)
  )
yearDF.selectExpr(regExpr: _*)
But it gives me a compilation error: Type mismatch, expected: Seq[String], actual: Array[Any]
I cannot use yearDF.columns.map, since that would act on all the columns, and I am unable to form the logic properly here.
Could anyone let me know how I can apply the regex mentioned above only to the String-type columns of the dataframe yearDF?
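For what it's worth, here is a minimal sketch of the direction I am guessing at: comparing against StringType (from org.apache.spark.sql.types) instead of the Scala class String, and adding an else branch so that the map yields Array[String] rather than Array[Any] (an if with no else has type Any). Note that Spark SQL's regexp_replace takes three arguments and replaces all matches by default, so I have simplified the PostgreSQL-style pattern here purely for illustration; I am not sure this is the idiomatic way:

```scala
import org.apache.spark.sql.types.StringType

// Sketch only: compare f.dataType against StringType (a Spark DataType),
// and always return a String so the map produces Array[String].
val regExpr: Array[String] = yearDF.schema.fields.map { f =>
  if (f.dataType == StringType)
    // Spark SQL regexp_replace(str, pattern, replacement) replaces every
    // match, so no 'g' flag is needed; pattern simplified for illustration.
    s"regexp_replace(${f.name}, '[\\\\n\\\\r\\\\t]+', ' ') as ${f.name}"
  else
    f.name // pass non-String columns through unchanged
}

val cleanedDF = yearDF.selectExpr(regExpr: _*)
```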