The solution is to strip it out in a hook on your thread pool by overriding
beforeExecute. See:
https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html
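For illustration, a rough sketch of that hook (assuming a fixed-size pool and a
SparkContext in scope; the class name here is made up):

import java.util.concurrent.{LinkedBlockingQueue, ThreadPoolExecutor, TimeUnit}
import org.apache.spark.SparkContext

// Sketch only: clear the inherited execution id in the worker thread just
// before each task runs. beforeExecute is invoked in the thread that will
// execute the task, and setLocalProperty(key, null) removes the property
// for that thread only.
class ExecutionIdClearingPool(size: Int, sc: SparkContext)
  extends ThreadPoolExecutor(
    size, size, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue[Runnable]()) {

  override protected def beforeExecute(t: Thread, r: Runnable): Unit = {
    sc.setLocalProperty("spark.sql.execution.id", null)
    super.beforeExecute(t, r)
  }
}
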
On Fri, Sep 30, 2016 at 7:08 AM, Grant Digby wrote:
> Thanks for the link. Yeah, if there's no need to copy execution.id from parent
> to child then I agree: you could strip it out, presumably in this part of the
> code, using some kind of configuration as to which properties shouldn't go
> across.
Thanks for the link. Yeah, if there's no need to copy execution.id from parent
to child then I agree: you could strip it out, presumably in this part of the
code, using some kind of configuration as to which properties shouldn't go
across.

SparkContext:
protected[spark] val localProperties = new InheritableThreadLocal[Properties]
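A hypothetical illustration of what that could look like (the nonInheritedKeys
set is invented here, not an existing Spark configuration, and commons-lang3 is
assumed for the deep copy):

import java.util.Properties
import org.apache.commons.lang3.SerializationUtils

// Sketch only: a childValue override that drops configured keys before handing
// the parent's properties to a newly created child thread.
val nonInheritedKeys = Set("spark.sql.execution.id")

val localProperties = new InheritableThreadLocal[Properties] {
  override protected def childValue(parent: Properties): Properties = {
    val copy = SerializationUtils.clone(parent)    // copy, so the child can't see later parent changes
    nonInheritedKeys.foreach(k => copy.remove(k))  // strip keys that shouldn't leak across threads
    copy
  }
  override protected def initialValue(): Properties = new Properties()
}
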
And that PR as promised: https://github.com/apache/spark/pull/12456
On Thu, Sep 29, 2016 at 5:18 AM, Grant Digby wrote:
> Yeah, that would work, although I was worried that they used
> InheritableThreadLocal vs ThreadLocal because they did want the child
> threads to inherit the parent's executionId, maybe to stop the child threads
> from kicking off their own queries whilst working for the parent.
That's not possible, because inherited values are copied, not shared. Clearing
problematic values on thread creation should eliminate this problem.

As to your idea about the design goal, that's also not desirable, because Java
thread pooling is implemented in a very surprising way. The standard pools
create worker threads lazily, from whichever thread happens to submit a task
when the pool needs to grow, and then reuse those threads indefinitely, so a
worker can inherit values from an essentially arbitrary parent thread and
carry them across unrelated tasks.
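A small standalone illustration of that behaviour (plain JDK, no Spark
involved):

import java.util.concurrent.Executors

// A pooled worker inherits the InheritableThreadLocal value from whichever
// thread triggered its creation, and keeps that value for every task it is
// later reused for.
object InheritanceDemo extends App {
  val id = new InheritableThreadLocal[String]
  val pool = Executors.newFixedThreadPool(1)

  id.set("query-1")
  // The single worker thread is created lazily inside this submit call, in the
  // submitting thread, so it inherits "query-1".
  pool.submit(new Runnable { def run(): Unit = println(s"task 1 sees ${id.get}") })

  id.set("query-2")
  // The same worker is reused; it still sees "query-1", not "query-2".
  pool.submit(new Runnable { def run(): Unit = println(s"task 2 sees ${id.get}") })

  pool.shutdown()
}
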
Yeah, that would work, although I was worried that they used
InheritableThreadLocal vs ThreadLocal because they did want the child
threads to inherit the parent's executionId, maybe to stop the child threads
from kicking off their own queries whilst working for the parent. I think
the fix would be to
> We've received the following error a handful of times and once it's occurred
> all subsequent queries fail with the same exception until we bounce the
> instance:
>
> IllegalArgumentException: spark.sql.execution.id is already set
>     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
>     at
Hi,
We've received the following error a handful of times and once it's occurred
all subsequent queries fail with the same exception until we bounce the
instance:
IllegalArgumentException: spark.sql.execution.id is already set
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
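
For context, a simplified, paraphrased sketch (not the actual Spark source) of
the kind of guard that raises this error; the real check lives in
org.apache.spark.sql.execution.SQLExecution:

import java.util.UUID
import org.apache.spark.SparkContext

object ExecutionIdGuard {
  val ExecutionIdKey = "spark.sql.execution.id"

  def withNewExecutionId[T](sc: SparkContext)(body: => T): T = {
    // A worker thread that inherited the key from its parent trips this check
    // on every query it runs, until the property is cleared for that thread.
    if (sc.getLocalProperty(ExecutionIdKey) != null) {
      throw new IllegalArgumentException(s"$ExecutionIdKey is already set")
    }
    sc.setLocalProperty(ExecutionIdKey, UUID.randomUUID().toString)
    try body finally sc.setLocalProperty(ExecutionIdKey, null)
  }
}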