Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/99#issuecomment-37155320
I'm used to thinking of Spark standalone clusters as having 4 different
"types" of JVMs -- 1 driver, 1 master, N workers, and M executors. Does
SPARK_DAEMON
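The question above is cut off mid-sentence, presumably about the SPARK_DAEMON_* variables in conf/spark-env.sh. For context, a minimal sketch of how each of the four JVM types is sized on a standalone cluster, assuming the standard settings from the Spark docs (the app name and values below are illustrative, not from this thread):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch of how each of the four JVM "types" is sized on standalone:
    //  - the 1 master and N worker daemons: SPARK_DAEMON_MEMORY in
    //    conf/spark-env.sh (per the standalone docs, default 1g)
    //  - the 1 driver: --driver-memory on spark-submit, since the driver
    //    JVM is already running by the time this SparkConf is read
    //  - the M executors: spark.executor.memory, settable per application
    val conf = new SparkConf()
      .setAppName("ExecutorSizingSketch")  // hypothetical app name
      .set("spark.executor.memory", "2g")  // heap for each executor JVM
    val sc = new SparkContext(conf)
    sc.stop()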
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/110#issuecomment-37154887
I find that new users often wonder why Spark is only using 1 core, and it's
because they expected local to use all their cores rather than defaulting to
just one.
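A minimal sketch of the difference: the master string "local" gives Spark a single worker thread, while "local[*]" uses one thread per logical core and "local[N]" uses N (the app name is illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    // "local" runs Spark with a single worker thread (one core);
    // "local[*]" uses one thread per logical core; "local[4]" uses four.
    val conf = new SparkConf()
      .setAppName("LocalCoresSketch")  // hypothetical app name
      .setMaster("local[*]")
    val sc = new SparkContext(conf)
    println(s"defaultParallelism = ${sc.defaultParallelism}")  // ~ core count
    sc.stop()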