1. Yes, the worker memory setting caps the memory available to its executors.
2. With the default config, every application gets one executor per worker,
and that executor runs with all the cores available to the worker.
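For illustration, a sketch of the relevant settings on a standalone
cluster (the values are made up, not recommendations):

    # conf/spark-env.sh on each worker machine
    SPARK_WORKER_MEMORY=8g   # total memory this worker may hand out to executors
    SPARK_WORKER_CORES=8     # total cores this worker may hand out to executors

    # per-application, e.g. in conf/spark-defaults.conf or via --conf on spark-submit
    spark.executor.memory=4g # must fit within SPARK_WORKER_MEMORY, or the
                             # worker cannot launch an executor for the app
    spark.cores.max=16       # optional: caps the total cores the app takes
                             # across the whole cluster

If spark.executor.memory asks for more than SPARK_WORKER_MEMORY, the worker
simply cannot launch an executor for that application.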


On Wed, Oct 1, 2014 at 11:04 AM, Akshat Aranya <aara...@gmail.com> wrote:

> Hi,
>
> What's the relationship between Spark worker and executor memory settings
> in standalone mode?  Do they work independently or does the worker cap
> executor memory?
>
> Also, is the number of concurrent executors per worker capped by the
> number of CPU cores configured for the worker?
>
