Let's say that YARN has 53 GB of memory available on each slave.

The Spark AM container needs 896 MB (512 MB AM memory + 384 MB overhead).
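
For reference, that 896 MB matches my reading of the default AM sizing on YARN (assuming spark.yarn.am.memory is left at 512m and the overhead takes its 384 MB floor):

  # AM container request = spark.yarn.am.memory + spark.yarn.am.memoryOverhead
  #                      = 512 MB + max(384 MB, 10% of 512 MB)
  #                      = 512 MB + 384 MB = 896 MB
  spark.yarn.am.memory          512m
  spark.yarn.am.memoryOverhead  384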

I see two options for configuring Spark:

1. Configure Spark executors to use 52 GB, leaving 1 GB free on each box.
One of the boxes will then also run the AM container in its spare gigabyte,
but on every other slave that 1 GB goes unused.

2. Configure Spark to use all 53 GB and add one extra 53 GB box that runs
only the AM container. Then 52 GB on that extra box does nothing. (Both
options are sketched in config terms below.)
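
For concreteness, here is roughly what each option looks like in
spark-defaults.conf terms (a sketch; the heap/overhead split is my
assumption, and any split that sums to the container size would do):

  # Option 1: 52 GB executor containers, leaving 1 GB headroom per node
  # for the AM. Executor container size =
  # spark.executor.memory + spark.yarn.executor.memoryOverhead.
  spark.executor.memory               48g
  spark.yarn.executor.memoryOverhead  4096   # MB; 48g + 4g = 52g container

  # Option 2: 53 GB executor containers; one dedicated node ends up
  # hosting only the 896 MB AM container.
  spark.executor.memory               49g
  spark.yarn.executor.memoryOverhead  4096   # MB; 49g + 4g = 53g container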

I don't like either option. Is there a better way to configure YARN/Spark?


Alex
