On Tue, Jul 14, 2015 at 11:13 AM, Shushant Arora <shushantaror...@gmail.com>
wrote:

> spark-submit --class classname --num-executors 10 --executor-cores 4
> --master masteradd jarname
>
> Will it allocate 10 containers throughout the life of the streaming
> application on the same nodes, until a node failure happens, and
>

It will allocate 10 containers somewhere in the cluster (wherever YARN
tells the application to place them). If a container dies (not necessarily
because of a node failure), Spark will request a new one, which may end up
on a different node.
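
For reference, a YARN submission along those lines might look like the
sketch below (the class name, jar name, and memory setting are just
illustrative placeholders, not taken from your command):

  # placeholders: class, memory, and jar name are illustrative
  spark-submit \
    --class com.example.StreamingApp \
    --master yarn-cluster \
    --num-executors 10 \
    --executor-cores 4 \
    --executor-memory 4g \
    streaming-app.jar

With static allocation like this, YARN grants the 10 executor containers on
whatever NodeManagers have capacity, and Spark holds them for the lifetime
of the application.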

> And these 10 containers will be released only at the end of the streaming
> application, never in between, if none of them fails.
>

Correct. If you don't want that behavior, you should look at enabling
dynamic allocation in Spark (see the docs).
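
For what it's worth, on YARN that roughly means setting up the external
shuffle service on the NodeManagers and passing a couple of confs; a minimal
sketch (the executor counts are just examples):

  # assumes the YARN external shuffle service is configured on the nodes
  spark-submit \
    --class com.example.StreamingApp \
    --master yarn-cluster \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.shuffle.service.enabled=true \
    --conf spark.dynamicAllocation.minExecutors=2 \
    --conf spark.dynamicAllocation.maxExecutors=10 \
    streaming-app.jar

With that, Spark requests executors when there are pending tasks and
releases idle ones, instead of pinning a fixed set of containers.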

-- 
Marcelo
