This is perhaps more of a YARN question than a Spark question, but I was
just curious about how memory is allocated in YARN via the various
configurations.  For example, if I spin up my cluster with 4GB executors
and a different number of executors as noted below:

 4GB executor-memory x 10 executors = 46GB  (4GB x 10 = 40 + 6)
 4GB executor-memory x 4 executors = 19GB (4GB x 4 = 16 + 3)
 4GB executor-memory x 2 executors = 10GB (4GB x 2 = 8 + 2)

The pattern when observing the RM is that there is a container for each
executor plus one additional container.  In terms of memory, it looks
like an extra (1GB + (0.5GB x # executors)) is allocated in YARN beyond
what I requested.
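For what it's worth, the numbers line up with a sketch that assumes Spark's default executor memory overhead (max(384 MB, 10% of executor memory), i.e. spark.yarn.executor.memoryOverhead), YARN rounding each container up to a multiple of yarn.scheduler.minimum-allocation-mb (assumed 512 MB here), and one extra container for the ApplicationMaster - all of these defaults are assumptions about this particular cluster, not something confirmed from the RM:

```python
import math

def yarn_alloc_mb(requested_mb, min_alloc_mb=512):
    # YARN rounds each container request up to a multiple of
    # yarn.scheduler.minimum-allocation-mb (assumed 512 MB here)
    return math.ceil(requested_mb / min_alloc_mb) * min_alloc_mb

def cluster_memory_gb(executor_mem_gb, num_executors,
                      overhead_min_mb=384, overhead_frac=0.10,
                      am_mem_mb=512):
    exec_mb = executor_mem_gb * 1024
    # assumed default overhead: max(384 MB, 10% of executor memory)
    overhead_mb = max(overhead_min_mb, overhead_frac * exec_mb)
    per_executor_mb = yarn_alloc_mb(exec_mb + overhead_mb)
    # one extra container for the YARN ApplicationMaster
    am_container_mb = yarn_alloc_mb(am_mem_mb + overhead_min_mb)
    return (num_executors * per_executor_mb + am_container_mb) / 1024

for n in (10, 4, 2):
    print(n, "executors ->", cluster_memory_gb(4, n), "GB")
# 10 executors -> 46.0 GB, 4 -> 19.0 GB, 2 -> 10.0 GB
```

Under those assumptions each 4GB executor becomes a 4.5GB container (4096 + ~410 MB overhead, rounded up to 4608 MB), and the AM takes the remaining 1GB, which matches the observed totals.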

Just wondering why this is - or is it just an artifact of YARN itself?

Thanks!
