I have the following function that worked fine in the REPL. Essentially, it checks the datatype of each entry in a schema so the entry can be matched to the corresponding column when I later flatten the file out and zipWithIndex:
    // Match a schema entry to a column index
    def schemaMatch(x: Array[String]) = {
      var accum = 0
      for (i <- 0 until x.length) {
        val convert = x(i).toUpperCase
        println(convert)
        val split = convert.split(' ')
        println(split.mkString(" "))
        matchTest(split(1), accum)
        accum += 1
      }

      def matchTest(y: String, z: Int) = y match {
        case "STRING"  => strBuf += z
        case "INTEGER" => decimalBuf += z
        case "DECIMAL" => decimalBuf += z
        case "DATE"    => dateBuf += z
      }
    }
schemaMatch(schema1)
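For comparison, the same logic can be written without the mutable counter by pairing each schema entry with its index via zipWithIndex. This is only a sketch: it assumes strBuf, decimalBuf, and dateBuf are mutable ArrayBuffer[Int]s (they are not shown in my code above), and it does not address the runtime error itself.

```scala
import scala.collection.mutable.ArrayBuffer

object SchemaMatchSketch {
  // Hypothetical buffers standing in for the ones the original code references
  val strBuf     = ArrayBuffer.empty[Int]
  val decimalBuf = ArrayBuffer.empty[Int]
  val dateBuf    = ArrayBuffer.empty[Int]

  // Same matching logic, but zipWithIndex replaces the mutable accum counter
  def schemaMatch(schema: Array[String]): Unit =
    for ((colDef, idx) <- schema.zipWithIndex) {
      // e.g. "name STRING" -> "STRING"
      val dataType = colDef.toUpperCase.split(' ')(1)
      dataType match {
        case "STRING"              => strBuf     += idx
        case "INTEGER" | "DECIMAL" => decimalBuf += idx
        case "DATE"                => dateBuf    += idx
      }
    }
}
```

Calling SchemaMatchSketch.schemaMatch(Array("name STRING", "id INTEGER", "dob DATE")) records index 0 in strBuf, 1 in decimalBuf, and 2 in dateBuf.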
The error I am getting:
Exception in thread "main" java.lang.NoSuchMethodError: scala.runtime.IntRef.create(I)Lscala/runtime/IntRef;
at com.capitalone.DTS.dataProfiling$.schemaMatch$1(dataProfiling.scala:112)
at com.capitalone.DTS.dataProfiling$.main(dataProfiling.scala:131)
at com.capitalone.DTS.dataProfiling.main(dataProfiling.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Line 112:
var accum = 0
Any ideas why it's no longer working when compiled, even though it worked in the REPL, and how to correct it?