
Some basic info up front:

  • Python: 2.7
  • OS: macOS 10.13.2 (High Sierra)
  • Anaconda-Navigator: version 1.7.0

My basic workflow is as follows:

  1. Do some initial data pulls and transformations from HDFS using PySpark and Spark DataFrames
  2. Convert the Spark DataFrame into a pandas DataFrame to graph it with libraries like Seaborn. Here I use .toPandas(), but it throws a pretty opaque error (a short sketch of this step follows the list).
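
For context, step 2 is aiming at something like this (just a sketch; sparkDF, the column names, and the barplot call are illustrative, not my real code):

import seaborn as sns

# Collect the (small, already-aggregated) Spark DataFrame to the driver
# as a pandas DataFrame so ordinary plotting libraries can consume it.
pandasDF = sparkDF.toPandas()
sns.barplot(x='name', y='denominator', data=pandasDF)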

As an example, here is an extremely small Spark DataFrame I tested, which throws the same error as my larger one:

sampleList = [('john', 10000.0),('sally', 3.0),('dude', 10.0)]

sparkTestDF = sqlContext.createDataFrame(sampleList, schema=['name','denominator'])

sparkTestDF.toPandas()

This ends up throwing the following error. Any ideas on (a) what this means and (b) how to fix it/work around it?

    Py4JJavaErrorTraceback (most recent call last)
<ipython-input-15-b151034bf9ad> in <module>()
      1 sampleList = [('john', 10000.0),('sally', 3.0),('dude', 10.0)]
      2 sparkTestDF = sqlContext.createDataFrame(sampleList, schema=['name','denominator'])
----> 3 sparkTestDF.toPandas()

/anaconda2/lib/python2.7/site-packages/pyspark/sql/dataframe.pyc in toPandas(self)
   1964                 raise RuntimeError("%s\n%s" % (_exception_message(e), msg))
   1965         else:
-> 1966             pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns)
   1967 
   1968             dtype = {}

/anaconda2/lib/python2.7/site-packages/pyspark/sql/dataframe.pyc in collect(self)
    464         """
    465         with SCCallSiteSync(self._sc) as css:
--> 466             port = self._jdf.collectToPython()
    467         return list(_load_from_socket(port, BatchedSerializer(PickleSerializer())))
    468 

/anaconda2/lib/python2.7/site-packages/py4j/java_gateway.pyc in __call__(self, *args)
   1158         answer = self.gateway_client.send_command(command)
   1159         return_value = get_return_value(
-> 1160             answer, self.gateway_client, self.target_id, self.name)
   1161 
   1162         for temp_arg in temp_args:

/anaconda2/lib/python2.7/site-packages/pyspark/sql/utils.pyc in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/anaconda2/lib/python2.7/site-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
    318                 raise Py4JJavaError(
    319                     "An error occurred while calling {0}{1}{2}.\n".
--> 320                     format(target_id, ".", name), value)
    321             else:
    322                 raise Py4JError(

Py4JJavaError: An error occurred while calling o155.collectToPython.
: java.lang.IllegalArgumentException
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
    at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply$mcI$sp(Dataset.scala:3195)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3192)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3192)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:3225)
    at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3192)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.base/java.lang.Thread.run(Thread.java:844)
  • I think it's something to do with your system setup, as I'm running the same example and it's working fine. Commented Mar 12, 2018 at 12:30
  • Yeah, I tried the following and it threw the same error: spark.range(10).collect(). If anyone knows what I might need to change in my configuration, I'd really appreciate it. Commented Mar 12, 2018 at 12:52
  • I'm running into this error as well, also using Anaconda, but with PySpark 2.3.0 and Python 3.6. Commented Mar 14, 2018 at 15:18
  • @StevenFines if you come across a fix, please let me know! :) Commented Mar 16, 2018 at 13:17
  • I think it has something to do with running via Anaconda, as that's what we have in common. Commented Mar 19, 2018 at 16:05

1 Answer


I had this exact same problem and solved it by setting the JAVA_HOME environment variable to point at a Java 8 JDK. The key parts of the traceback are

at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)

together with the java.base/jdk.internal.reflect frames near the bottom: java.base only exists on Java 9+, and Spark 2.x's closure cleaner (built on ASM 5) cannot read Java 9 class files, which is why the call dies in ClassReader with a bare IllegalArgumentException. This is a known issue (see this related Stack Overflow link).
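
One quick way to confirm this from the notebook is to ask the driver JVM which Java version it is actually running on. A sketch, poking at py4j internals (spark is assumed to be an existing SparkSession; with the older SQLContext API, sqlContext._sc._jvm gets you the same handle):

# Ask the JVM that Spark launched for its Java version (py4j internals).
# Spark 2.x wants "1.8.0_..." here; "9" or later reproduces this error.
jvm = spark.sparkContext._jvm
print(jvm.java.lang.System.getProperty("java.version"))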

You can set JAVA_HOME in your .bashrc, in Spark's conf/spark-env.sh, or even right in your notebook, e.g. for Ubuntu:

%env JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/

For Macs, it would be something along the lines of:

%env JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/
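
If you set it from Python instead, it has to happen before the JVM is launched, i.e. before the first SparkContext/SparkSession is created (restart the kernel first). A sketch, reusing the JDK path from the example above; on a Mac, /usr/libexec/java_home -v 1.8 prints the right one for your machine:

import os

# JAVA_HOME is read when the driver launches the JVM, so this must run
# before the first SparkContext/SparkSession is created.
# The path is an assumption -- check yours with: /usr/libexec/java_home -v 1.8
os.environ["JAVA_HOME"] = "/Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home"

from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()
spark.range(10).toPandas()  # should no longer raise IllegalArgumentException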