Hi, my worker nodes have more memory than the host I'm submitting my driver program from, but it seems that SPARK_MEM also sets the -Xmx of the Spark shell?
$ SPARK_MEM=100g MASTER=spark://XXX:7077 bin/spark-shell
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f736e130000, 205634994176, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 205634994176 bytes for committing reserved memory.

I want to allocate at least 100 GB of memory per executor, but the memory allocated on the executors seems to depend on the -Xmx heap size of the driver?

Thanks!
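For what it's worth, one workaround (a sketch, assuming a Spark version where SPARK_MEM sizes every JVM it launches, including the local shell, and where the spark.executor.memory property is honored for executors) is to leave SPARK_MEM unset and request executor memory via a system property instead, so the driver JVM keeps a modest default heap:

```shell
# Hypothetical workaround: don't set SPARK_MEM (it would size the
# local spark-shell JVM too). Instead, ask for 100g per executor via
# the spark.executor.memory property; the driver heap stays small.
export SPARK_JAVA_OPTS="-Dspark.executor.memory=100g"
MASTER=spark://XXX:7077 bin/spark-shell
```

On more recent Spark releases the same split can be expressed directly, e.g. with the --executor-memory and --driver-memory options that spark-shell passes through to spark-submit, rather than through SPARK_JAVA_OPTS.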