
While running my spark-submit job, I get the error below. The Scala file performs joins.

I am curious to know what this TreeNodeException error is.

Why do we have this error?

Please share your ideas on this TreeNodeException error:

Exception in thread "main" org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
  • I also have the same issue. Commented Sep 27, 2018 at 14:22
  • Even I have the same issue. Is this fixed? Please answer the question. Commented Sep 30, 2018 at 17:54
  • Is there any solution for this, and how could it be fixed? Commented Sep 30, 2018 at 18:02

2 Answers


OK, so the stack trace given above is not sufficient to understand the root cause, but since you mentioned you are using a join, that is most probably where it's happening. I faced the same issue with a join; if you dig deeper into your stack trace, you should see something like:

+- *HashAggregate(keys=[], functions=[partial_count(1)], output=[count#73300L])
+- *Project
+- *BroadcastHashJoin 
...
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]

This hints at why it's failing: Spark tries to perform the join using a broadcast hash join, which has both a timeout and a broadcast size threshold, and exceeding either causes the above error. To fix it, depending on the underlying error:

Increase spark.sql.broadcastTimeout (the default is 300 seconds):

from pyspark.sql import SparkSession

spark = (SparkSession
    .builder
    .appName("AppName")
    .config("spark.sql.broadcastTimeout", "1800")
    .getOrCreate())

Or increase the broadcast threshold, spark.sql.autoBroadcastJoinThreshold (the default is 10 MB, i.e. 10485760 bytes):

spark = (SparkSession
    .builder
    .appName("AppName")
    .config("spark.sql.autoBroadcastJoinThreshold", "20485760")
    .getOrCreate())

Or disable broadcast joins entirely by setting the value to -1:

spark = (SparkSession
    .builder
    .appName("AppName")
    .config("spark.sql.autoBroadcastJoinThreshold", "-1")
    .getOrCreate())
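
These are runtime SQL options, so if the session already exists you don't have to rebuild it; a minimal sketch, assuming an existing SparkSession bound to spark:

# Adjust the broadcast settings on an already-created session
# (assumes an existing SparkSession bound to `spark`)
spark.conf.set("spark.sql.broadcastTimeout", "1800")
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")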

More details can be found here - https://spark.apache.org/docs/latest/sql-performance-tuning.html
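
To check which join strategy Spark actually picks for your query, you can print the physical plan before triggering execution; a minimal sketch, where df1, df2, and the join key "id" stand in for your own DataFrames and column:

# Look for BroadcastHashJoin (vs. SortMergeJoin) in the printed plan;
# df1, df2, and "id" are placeholders for your own data
df1.join(df2, on="id").explain()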




I encountered this exception when joining DataFrames too:

Exception in thread "main" org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:

To fix it, I simply reversed the order of the join. That is, instead of doing df1.join(df2, on="A"), I did df2.join(df1, on="A"). I'm not sure why this works, but my intuition is that the logical tree Spark has to follow gets messy with the former command but not with the latter. You can think of it as the number of comparisons Spark has to make on column "A" in my toy example to join the two DataFrames. I know it's not a definitive answer, but I hope it helps.
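
For illustration, a minimal PySpark sketch of that swap (df1, df2, and join column "A" are the hypothetical names from the toy example above); the explicit broadcast() hint at the end is another way to control which side gets broadcast, rather than relying on join order:

from pyspark.sql.functions import broadcast

# Order that triggered the TreeNodeException:
# joined = df1.join(df2, on="A")

# Reversed order that worked:
joined = df2.join(df1, on="A")

# Alternatively, hint explicitly which side to broadcast
# instead of relying on the join order:
joined = df1.join(broadcast(df2), on="A")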

