What about Spark on Kubernetes: is there a way to manage dynamic resource allocation there?
Regards,
Mihai Iacob
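For context, a minimal sketch of the configuration that typically enables dynamic allocation against a Kubernetes master; this assumes a Spark version (3.0+) that supports shuffle tracking, and the API server URL and container image below are placeholders:

```scala
import org.apache.spark.sql.SparkSession

// Assumed values: the API server URL and container image are placeholders.
// Dynamic allocation on Kubernetes relies on shuffle tracking (Spark 3.0+)
// rather than an external shuffle service.
val spark = SparkSession.builder()
  .master("k8s://https://example-apiserver:6443")
  .config("spark.kubernetes.container.image", "example/spark:latest")
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "1")
  .config("spark.dynamicAllocation.maxExecutors", "10")
  .getOrCreate()
```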
----- Original message -----
From: Michael Gummelt <mgumm...@mesosphere.io>
To: Ji Yan <ji...@drive.ai>
Cc: user <user@spark.apache.org>
Subject: Re: Dynamic resource allocation to Spark on Mesos
Date: Fri, Jan 27, 2017 2:14 PM
Dynamic Allocation is supported in Spark on Mesos, but we here at Mesosphere haven't been testing it much, and I'm not sure what the community adoption is. So I can't yet speak to its robustness, but we will be investing in it soon. Many users want it.

> The way I understand is that the Spark job will not run if the CPU/Mem requirement is not met.

Spark jobs will still run if they only have a subset of the requested resources. Tasks begin scheduling as soon as the first executor comes up. Dynamic allocation yields increased utilization by only allocating as many executors as a job needs, rather than a single static amount set up front.
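For anyone trying this out, a hedged sketch of the configuration usually involved; the Mesos master URL is a placeholder, and on Mesos dynamic allocation also requires the external shuffle service to be started separately on each agent:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Assumed values: the Mesos master URL is a placeholder. On Mesos, dynamic
// allocation also needs the external shuffle service running on each agent.
val conf = new SparkConf()
  .setMaster("mesos://zk://zk-host:2181/mesos")
  .set("spark.shuffle.service.enabled", "true")
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.maxExecutors", "20")

val spark = SparkSession.builder().config(conf).getOrCreate()
```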
On Fri, Jan 27, 2017 at 9:35 AM, Ji Yan <ji...@drive.ai> wrote:

Dear Spark Users,

Currently, is there a way to dynamically allocate resources to Spark on Mesos? Within Spark we can specify the CPU cores and memory before running a job. The way I understand it, the Spark job will not run if the CPU/memory requirement is not met. This may lead to a decrease in overall utilization of the cluster. An alternative behavior is to launch the job with the best resource offer Mesos is able to give. Is this possible with the current implementation?

Thanks,
Ji
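For reference, a small sketch of the static CPU/memory settings the question refers to; the values and master URL are examples only:

```scala
import org.apache.spark.SparkConf

// Example only: the static resource requests mentioned in the question.
// The master URL and numeric values are placeholders.
val conf = new SparkConf()
  .setMaster("mesos://zk://zk-host:2181/mesos")
  .set("spark.executor.memory", "4g")   // memory per executor
  .set("spark.executor.cores", "2")     // cores per executor
  .set("spark.cores.max", "16")         // upper bound on total cores claimed by the job
```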
--
Michael Gummelt
Software Engineer
Mesosphere