Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-36828689
Thanks for taking a look, Matei. If we use system properties instead of
env variables, the remaining reason we'd want to start a second JVM is to be
able to support a --driver-memory option. The only way around this I can think
of would be to require users to set the driver memory with an environment
variable instead of a command line option. One small weird thing about this is
that the client JVM would still be given the max heap specified in
SPARK_DRIVER_MEMORY even when the driver is being run on the cluster.
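As a rough sketch of what the env-variable approach could look like (the
object and variable names below are illustrative only, not code from this
patch): the launcher reads SPARK_DRIVER_MEMORY before the driver JVM starts
and folds it into -Xmx, since a running JVM's max heap can't be changed via a
system property.

```scala
// Hypothetical sketch only; none of these names come from the actual patch.
object DriverMemorySketch {
  def main(args: Array[String]): Unit = {
    // Fall back to a default when the variable isn't set (default is assumed).
    val driverMem = sys.env.getOrElse("SPARK_DRIVER_MEMORY", "512m")

    // -Xmx has to be assembled before the driver JVM is launched, which is
    // why the choice is between forking a second JVM and using an env var.
    val javaOpts = Seq(s"-Xmx$driverMem", "-cp", sys.props("java.class.path"))
    println(s"Would launch driver JVM with: java ${javaOpts.mkString(" ")}")
  }
}
```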