Github user aarondav commented on the pull request:

    https://github.com/apache/spark/pull/99#issuecomment-37213831
  
    You might also look at my comment summarizing how one can configure memory
for each of the components here:
https://github.com/apache/incubator-spark/pull/615#issuecomment-35818768. There
is also sometimes a Client JVM, which simply submits a driver to be run inside
a cluster, but right now we can't configure its memory separately (it's
transient and low-memory anyway). Also, there may sometimes be more than one
master for fault-tolerance purposes.
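
    To make that concrete, here is a rough sketch of where the main knobs live
for the standalone deploy mode -- the values below are purely illustrative
placeholders, and the linked comment has the full summary:

        # conf/spark-env.sh -- illustrative values only
        export SPARK_WORKER_MEMORY=16g   # total memory a worker may hand out to executors
        export SPARK_MEM=2g              # older knob that executors fall back to

    while an application can size each of its executors directly through the
spark.executor.memory property.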
    
    SPARK_DAEMON_MEMORY does control both the master and worker JVM heap sizes
-- I think it was mainly introduced so that the daemons would not use
SPARK_MEM, rather than to make the daemon memory truly configurable, since the
daemons use so little memory.
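
    For completeness, the daemon knob itself would look like this in
conf/spark-env.sh (the 1g value is just a placeholder):

        # conf/spark-env.sh
        export SPARK_DAEMON_MEMORY=1g   # heap for the standalone master and worker daemons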
    
    More documentation is always useful! Sometimes PRs and code aren't the best 
place for informing users...

