I see, what does http://localhost:4040/executors/ show for memory usage?
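If it's easier than eyeballing the web UI, something along these lines pasted into spark-shell (where sc is the pre-defined SparkContext) should print roughly the same per-executor figures -- just a sketch, and note that getExecutorMemoryStatus reports block-manager (cache) memory, not total executor heap:

// Rough per-executor memory report from the driver (spark-shell has `sc` predefined).
// getExecutorMemoryStatus returns (max memory for caching, remaining memory for caching).
val mb = 1024L * 1024L
sc.getExecutorMemoryStatus.foreach { case (executor, (maxMem, remainingMem)) =>
  println(f"$executor%-30s max: ${maxMem / mb}%6d MB  free: ${remainingMem / mb}%6d MB")
}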
I personally find it easier to work with a standalone cluster with a single worker, started via sbin/start-master.sh, and then connect to that master.

On Tue, Sep 16, 2014 at 6:04 PM, francisco <ftanudj...@nextag.com> wrote:
> Thanks for the reply.
>
> I doubt that's the case though ... the executor kept having to do a file
> dump because memory is full.
>
> ...
> 14/09/16 15:00:18 WARN ExternalAppendOnlyMap: Spilling in-memory map of 67 MB to disk (668 times so far)
> 14/09/16 15:00:21 WARN ExternalAppendOnlyMap: Spilling in-memory map of 66 MB to disk (669 times so far)
> 14/09/16 15:00:24 WARN ExternalAppendOnlyMap: Spilling in-memory map of 70 MB to disk (670 times so far)
> 14/09/16 15:00:31 WARN ExternalAppendOnlyMap: Spilling in-memory map of 127 MB to disk (671 times so far)
> 14/09/16 15:00:43 WARN ExternalAppendOnlyMap: Spilling in-memory map of 67 MB to disk (672 times so far)
> ...
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Memory-under-utilization-tp14396p14399.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
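If you go the standalone route, a minimal sketch of the kind of conf I'd start from when ExternalAppendOnlyMap spills pile up like that -- the master URL, executor memory, and shuffle fraction below are placeholder values, not anything taken from this thread:

import org.apache.spark.{SparkConf, SparkContext}

// Placeholder values -- adjust the master URL and sizes for your own cluster.
val conf = new SparkConf()
  .setAppName("memory-tuning-sketch")
  // Standalone master started with sbin/start-master.sh (default port 7077).
  .setMaster("spark://localhost:7077")
  // A bigger executor heap gives the shuffle map more room before it spills.
  .set("spark.executor.memory", "4g")
  // Fraction of the heap available to shuffle aggregation (default 0.2);
  // raising it trades cache space for fewer ExternalAppendOnlyMap spills.
  .set("spark.shuffle.memoryFraction", "0.4")

val sc = new SparkContext(conf)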