Thanks Josh and Yin.
Filed the following JIRA to track this:
https://issues.apache.org/jira/browse/SPARK-7970
Thanks
-Nitin
For Spark SQL internal operations, we can probably just create a
MapPartitionsRDD directly (like
https://github.com/apache/spark/commit/5287eec5a6948c0c6e0baaebf35f512324c0679a
).
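To illustrate the suggestion above: the public RDD API passes every closure through ClosureCleaner before use, while an internal code path can hand a trusted closure straight to the RDD and skip that per-query cost. The toy classes below are hypothetical stand-ins (this is not Spark's real API), just a minimal sketch of why constructing the partition-mapping RDD directly avoids the cleaning pass.

```scala
// Toy model of the two code paths discussed above -- NOT Spark's real API.
// ToyRDD.mapPartitions mimics the public path (closure is "cleaned" on every
// call); mapPartitionsDirect mimics building a MapPartitionsRDD directly with
// a trusted closure, skipping the cleaning step entirely.
class ToyRDD[T](val data: Seq[T]) {
  var cleanCalls = 0 // counts how often the expensive cleaning step ran

  // Stand-in for ClosureCleaner.clean: in real Spark this walks the closure's
  // bytecode via reflection, which is what showed up hot in the profiler.
  private def clean[F](f: F): F = { cleanCalls += 1; f }

  // Public-API path: cleans the closure on every invocation.
  def mapPartitions[U](f: Iterator[T] => Iterator[U]): Seq[U] = {
    val g = clean(f)
    g(data.iterator).toSeq
  }

  // Internal path: the caller guarantees the closure is serializable,
  // so no cleaning pass is needed.
  def mapPartitionsDirect[U](f: Iterator[T] => Iterator[U]): Seq[U] =
    f(data.iterator).toSeq
}
```

A caller on the direct path trades the safety check for speed, which is acceptable only for closures Spark SQL itself constructs and already knows to be clean.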
On Fri, May 29, 2015 at 11:04 AM, Josh Rosen wrote:
Hey, want to file a JIRA for this? This will make it easier to track
progress on this issue. Definitely upload the profiler screenshots there,
too, since that's helpful information.
https://issues.apache.org/jira/browse/SPARK
On Wed, May 27, 2015 at 11:12 AM, Nitin Goyal wrote:
Hi Ted,
Thanks a lot for replying. First of all, moving to 1.4.0 RC2 is not easy for
us, as the migration cost is high since a lot has changed in Spark SQL since 1.2.
Regarding SPARK-7233, I had already looked at it a few hours back; it
solves the problem for concurrent queries, but my problem is just fo
Can you try your query using Spark 1.4.0 RC2 ?
There have been some fixes since 1.2.0
e.g.
SPARK-7233 ClosureCleaner#clean blocks concurrent job submitter threads
Cheers
On Wed, May 27, 2015 at 10:38 AM, Nitin Goyal wrote:
> Hi All,
>
> I am running a SQL query (spark version 1.2) on a table c