I have a hunch I want to share: it looks like data is not being deallocated
from memory in 1.4 (at least not the way it was in 1.3). Once something is
loaded in-memory, it just stays there.
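For what it's worth, here is a rough way to probe that from the spark-shell.
This is only a sketch against the 1.4 Scala API; "some_table" is a
placeholder, and sc/sqlContext are the ones the shell provides:

    // Track block-manager storage memory before and after a query.
    // If "remaining" keeps shrinking across queries and never recovers,
    // that would support the hunch that nothing is being released.
    def remainingMem(): Long =
      sc.getExecutorMemoryStatus.values.map(_._2).sum  // (max, remaining)
    val before = remainingMem()
    sqlContext.sql("SELECT col, count(*) FROM some_table GROUP BY col").collect()
    println(s"storage memory: before=$before, after=${remainingMem()}")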

Spark SQL itself works fine: the same query that throws the error on a shell
that has already been used for other queries runs without error on a freshly
started shell.
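As a workaround check (again just a sketch, not verified on 1.4), explicitly
dropping everything cached in the long-lived shell before rerunning the query
should tell us whether lingering blocks are the cause:

    // On the shell that has already run other queries:
    sqlContext.clearCache()  // drop all cached tables/DataFrames
    sc.getPersistentRDDs.values.foreach(_.unpersist(blocking = true))
    // Rerun the previously failing query (placeholder table name); if it
    // now succeeds, something was not being released automatically.
    sqlContext.sql("SELECT col, count(*) FROM some_table GROUP BY col").collect()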

I also read on the Spark blog that Project Tungsten is making changes to
memory management, and that the first of those changes would land in 1.4.
Maybe this is related to that.


