Thanks, Steve, for the insights into the design choices of the Spark AM. Here are some counter-arguments:
2. On killing: I don't think using virtual memory (swap) for one application will drastically degrade the performance of the entire cluster and of other applications. For that given application, the cluster will only use res…
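For reference, the kill behavior under discussion is governed by NodeManager settings in yarn-site.xml. A minimal sketch with illustrative values (2.1 is the stock default for the ratio):

    <!-- yarn-site.xml: container memory enforcement (illustrative) -->
    <property>
      <!-- kill containers exceeding their physical memory allocation -->
      <name>yarn.nodemanager.pmem-check-enabled</name>
      <value>true</value>
    </property>
    <property>
      <!-- kill containers whose virtual memory exceeds ratio * pmem -->
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>true</value>
    </property>
    <property>
      <!-- virtual memory allowed per unit of physical memory -->
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>2.1</value>
    </property>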
On 17 Feb 2016, at 01:29, Nirav Patel <npa...@xactlycorp.com> wrote:
I think you are not getting my question. I know how to tune executor memory settings and parallelism; that's not the issue. It's a specific question about what happens when the physical memory limit of a given executor is reached. The YARN NodeManager has a specific setting for provisioning virtual memory…
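For context, in Spark on YARN (1.x) the container's physical-memory limit is roughly spark.executor.memory plus spark.yarn.executor.memoryOverhead (which defaults to max(384 MB, 10% of executor memory)), so that overhead is the usual knob when containers are killed at the physical limit. A minimal sketch; app.jar and the sizes are placeholders, not recommendations:

    # Container pmem limit ≈ executor memory + memoryOverhead (MB)
    spark-submit \
      --master yarn \
      --executor-memory 4g \
      --conf spark.yarn.executor.memoryOverhead=1024 \
      app.jar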
Looks like your executors are running out of memory; YARN is not kicking them out. Just increase the executor memory. Also consider increasing the parallelism, i.e. the number of partitions.
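To illustrate the second suggestion: more partitions mean each task holds less data in memory at once. A sketch in Scala, where "rdd" stands for whatever RDD the failing stage operates on and the factor of 4 is arbitrary:

    // More, smaller partitions => less data per task in executor memory.
    val repartitioned = rdd.repartition(rdd.partitions.length * 4)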
Regards
Sab
On 11-Feb-2016 5:46 am, "Nirav Patel" wrote:
> In YARN we have the following settings enabled, so…
You can also activate detailed GC prints to get more info.
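One way to do that, as a sketch (these are standard JVM GC-logging flags; the property passes them to every executor JVM):

    # spark-defaults.conf (or via --conf on spark-submit)
    spark.executor.extraJavaOptions  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps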
2016-02-11 7:43 GMT+01:00 Shiva Ramagopal :
How are you submitting/running the job - via spark-submit or as a plain old Java program?

If you are using spark-submit, you can control the memory setting via the configuration parameter spark.executor.memory in spark-defaults.conf. If you are running it as a Java program, use -Xmx to set the maximum heap size.
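A sketch of both options; the 4g size, app.jar, and com.example.Main are placeholders:

    # Option 1: entry in spark-defaults.conf, picked up by spark-submit
    spark.executor.memory  4g

    # Option 2: plain Java program; -Xmx caps the JVM heap
    java -Xmx4g -cp app.jar com.example.Main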