I have a few questions about managing Spark resources: 1) In a standalone setup, is there any CPU prioritization across users running jobs? If so, what is the behavior?
2) With Spark 1.1, users will more easily be able to run drivers/shells from remote locations without firewall headaches. Is there a way to kill an individual user's job from the console without killing workers? We are on Mesos and are not aware of an easy way to do this, but I imagine standalone mode may handle it.
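To clarify what we mean by "kill a job" in 2): inside a single application we can already cancel running jobs from the driver using job groups, roughly like the sketch below (app and group names are hypothetical, and local mode is used only to keep the sketch self-contained). What we are missing is an equivalent way, from the console, to kill a whole user's application/driver without taking down the workers it was running on.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CancelByGroupSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical app; in practice this would be a user's shell or remote driver,
    // and the master would be a standalone cluster URL instead of local[2].
    val sc = new SparkContext(
      new SparkConf().setAppName("cancel-by-group-sketch").setMaster("local[2]"))

    // Run a long job on a separate thread, tagged with a job group,
    // so the main thread stays free to cancel it.
    val runner = new Thread(new Runnable {
      override def run(): Unit = {
        sc.setJobGroup("user-a-jobs", "long-running job for user A",
          interruptOnCancel = true)
        try {
          sc.parallelize(1 to 1000000, 4)
            .map { i => Thread.sleep(1); i } // artificially slow stage
            .count()
        } catch {
          case e: Exception =>
            println(s"Job cancelled: ${e.getMessage}")
        }
      }
    })
    runner.start()

    Thread.sleep(3000)
    // Cancels only the jobs tagged above; executors (and workers, on a cluster) stay up.
    sc.cancelJobGroup("user-a-jobs")

    runner.join()
    sc.stop()
  }
}
```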