[ https://issues.apache.org/jira/browse/SPARK-51691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yang Jie resolved SPARK-51691.
------------------------------
    Fix Version/s: 4.1.0
       Resolution: Fixed

Issue resolved by pull request 50489
[https://github.com/apache/spark/pull/50489]

> SerializationDebugger should swallow exceptions when trying to find the cause 
> of a serialization problem
> ---------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-51691
>                 URL: https://issues.apache.org/jira/browse/SPARK-51691
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.5.5, 4.1.0
>            Reporter: zhoubin
>            Assignee: zhoubin
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 4.1.0
>
>
> I made a serialization mistake while developing a feature for our production 
> environment.
> However, the resulting exception and stack trace were confusing: only the 
> root serialization cause was reported, and the `Serialization stack` section 
> was not shown, which made it hard to find the real problem.
>  
> ```
> 13:38:31.443 WARN org.apache.spark.serializer.SerializationDebugger: 
> Exception in serialization debugger
> org.apache.spark.SparkRuntimeException: Cannot get SQLConf inside scheduler 
> event loop thread.
>     at 
> org.apache.spark.sql.errors.QueryExecutionErrors$.cannotGetSQLConfInSchedulerEventLoopThreadError(QueryExecutionErrors.scala:2002)
>     at org.apache.spark.sql.internal.SQLConf$.get(SQLConf.scala:225)
>     at 
> org.apache.spark.sql.execution.ScalarSubquery.toString(subquery.scala:69)
>     at java.lang.String.valueOf(String.java:2994)
>     at scala.collection.mutable.StringBuilder.append(StringBuilder.scala:203)
>     at scala.collection.immutable.Stream.addString(Stream.scala:701)
>     at scala.collection.TraversableOnce.mkString(TraversableOnce.scala:377)
> [info] - SPARK-35874: AQE Shuffle should wait for its subqueries to finish 
> before materializing *** FAILED *** (1 second, 660 milliseconds)
> [info]   org.apache.spark.SparkException: Job aborted due to stage failure: 
> Task not serializable: java.io.NotSerializableException: 
> org.apache.spark.SimpleFutureAction
> [info]   at 
> org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2865)
> [info]   at 
> org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2800)
> [info]   at 
> org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2799)
> [info]   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> [info]   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> [info]   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> ```
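In the trace above, the debugger itself fails because calling `toString` on an object (here `ScalarSubquery`) throws, and that secondary exception masks the `Serialization stack`. A minimal sketch of the swallow-and-continue pattern the issue title describes, with illustrative names only (this is not Spark's actual `SerializationDebugger` internals):

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

object DebuggerSketch {
  // Describe an object for a "Serialization stack"-style report, swallowing
  // any exception thrown while producing the description (e.g. a toString
  // that itself throws), so the original failure is not masked.
  def describe(obj: AnyRef): String =
    try s"object (class ${obj.getClass.getName}, $obj)"
    catch {
      case _: Throwable =>
        s"object (class ${obj.getClass.getName}, <toString failed>)"
    }

  // Attempt Java serialization; on failure, report the problematic object
  // using the exception-safe describe above.
  def findProblem(obj: AnyRef): Option[String] =
    try {
      val oos = new ObjectOutputStream(new ByteArrayOutputStream())
      oos.writeObject(obj)
      None
    } catch {
      case _: NotSerializableException => Some(describe(obj))
    }
}
```

With this pattern, a throwing `toString` degrades to a placeholder in the report instead of aborting the debugger, so the serialization stack can still be printed.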



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
