Thanks for your reply guys.
@Alex: Hoping that a future release gives us a way to do this.
@Saisai: The concern with restarting the Node Manager is that, in a
shared YARN cluster running other applications apart from Spark,
enabling the Spark shuffle service would disrupt those other running applications.
If you want to avoid failing existing jobs while restarting the NM, you could
enable work-preserving restart for the NM. In that case, restarting the NM will
not affect the running containers (they keep running), which should
alleviate the NM restart problem.
Thanks
Saisai
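For reference, a minimal sketch of the yarn-site.xml settings that enable work-preserving NM restart, assuming the property names documented for Hadoop YARN; the recovery directory path and port here are just illustrative examples:

```xml
<!-- yarn-site.xml: enable work-preserving restart for the Node Manager -->
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- local directory where the NM persists its recovery state (example path) -->
  <name>yarn.nodemanager.recovery.dir</name>
  <value>/var/lib/hadoop-yarn/nm-recovery</value>
</property>
<property>
  <!-- the NM must bind a fixed port rather than an ephemeral one so that
       running containers can reconnect after the restart (example port) -->
  <name>yarn.nodemanager.address</name>
  <value>0.0.0.0:45454</value>
</property>
```

After changing these, a rolling restart of the NMs leaves containers running while each NM comes back.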
On Wed, Mar 16, 2016 at 6:30 PM, Alex D wrote:
Hi Vinay,
I believe it's not possible, as the spark-shuffle code has to run in the
same JVM process as the Node Manager, and I haven't heard anything about
on-the-fly bytecode loading in the Node Manager.
Thanks, Alex.
On Wed, Mar 16, 2016 at 10:12 AM, Vinay Kashyap wrote:
Hi all,
I am using *Spark 1.5.1* in *yarn-client* mode on *CDH 5.5*.
As per the documentation, to enable Dynamic Allocation of Executors in Spark,
it is required to add the shuffle service jar to the YARN Node Manager's
classpath and restart the YARN Node Manager.
Is there any way to dynamically provide the shuffle service to the Node
Manager without restarting it?
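For context, the documented setup the question refers to looks roughly like the sketch below: the shuffle service is registered as a YARN auxiliary service on every Node Manager (which is why a restart is needed). The aux-service name `spark_shuffle` and the class name follow the Spark-on-YARN docs; keeping `mapreduce_shuffle` in the list is only needed if MapReduce jobs also run on the cluster:

```xml
<!-- yarn-site.xml on every Node Manager (requires the spark-yarn-shuffle
     jar on the NM classpath, then an NM restart) -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```

On the Spark side, the application then sets `spark.shuffle.service.enabled=true` and `spark.dynamicAllocation.enabled=true` (e.g. in spark-defaults.conf) to use the service.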