spark.executor.memory only sets the maximum heap size of the executor JVM. The
JVM also needs non-heap memory to store class metadata, interned strings, and
other native overheads coming from networking libraries, off-heap storage
levels, etc. These are (of course) legitimate uses of resources, and you'll
have to account for them when sizing the YARN container.
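
For reference, here is a rough sketch of how the per-executor container request is typically derived. The max(384 MB, 10% of executor memory) default and the rounding up to yarn.scheduler.minimum-allocation-mb are assumptions that vary by Spark/YARN version, so check the docs for your release:

    // Rough estimate of the YARN container size requested per executor.
    // Assumptions: overhead defaults to max(384 MB, 10% of executor memory),
    // and YARN rounds requests up to yarn.scheduler.minimum-allocation-mb.
    object ContainerSizeEstimate {
      def containerMb(executorMemMb: Int,
                      overheadMb: Option[Int] = None,
                      yarnMinAllocMb: Int = 1024): Int = {
        val overhead = overheadMb.getOrElse(math.max(384, (executorMemMb * 0.10).toInt))
        val requested = executorMemMb + overhead
        // round the request up to a multiple of YARN's minimum allocation
        ((requested + yarnMinAllocMb - 1) / yarnMinAllocMb) * yarnMinAllocMb
      }

      def main(args: Array[String]): Unit = {
        // 6g heap -> 6144 + 614 = 6758 MB, rounded up to 7168 MB (i.e. 7g)
        println(containerMb(6144))
      }
    }

If you need more native headroom, you can raise the overhead explicitly, e.g. --conf spark.yarn.executor.memoryOverhead=1024 on spark-submit (on recent releases the key is spark.executor.memoryOverhead); note that this only grows the container, it won't shrink it.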
Hi all:
I have a question about why Spark on YARN needs extra memory.
I requested 10 executors with 6g of executor memory each, but I find that YARN
allocates 1g more per executor, so 7g in total for each one.
I tried setting spark.yarn.executor.memoryOverhead, but it did not help.
1g per executor is too much.