Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2840#discussion_r19451628
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
    @@ -1501,8 +1533,13 @@ object SparkContext extends Logging {
         res
       }
     
    -  /** Creates a task scheduler based on a given master URL. Extracted for testing. */
    -  private def createTaskScheduler(sc: SparkContext, master: String): TaskScheduler = {
    +  /**
    +   * Create a task scheduler based on a given master URL.
    +   * Return a 2-tuple of the scheduler backend and the task scheduler.
    +   */
    +  private def createTaskScheduler(
    +      sc: SparkContext,
    +      master: String): (SchedulerBackend, TaskScheduler) = {
    --- End diff --
    
    It might be better here to just add methods to `TaskScheduler` that relate
to increasing and decreasing the number of executors, and have it delegate these
to the backend. As it is now, we are adding another pointer between components
by exposing the backend to the SparkContext. @kayousterhout what do you think
about this alternative?
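
    The suggested alternative might look something like the sketch below. This is purely illustrative: the method names (`requestExecutors`, `killExecutors`) and the assumption that the backend exposes matching methods are hypothetical, not the actual API in this PR.

    ```scala
    // Hypothetical sketch: TaskScheduler gains executor-count methods and
    // forwards them to its SchedulerBackend, so SparkContext only needs a
    // reference to the scheduler, not the backend.
    private[spark] trait TaskScheduler {
      // ... existing scheduling methods ...

      /** Request `numAdditionalExecutors` more executors from the cluster manager. */
      def requestExecutors(numAdditionalExecutors: Int): Boolean

      /** Request that the given executors be released. */
      def killExecutors(executorIds: Seq[String]): Boolean
    }

    private[spark] class TaskSchedulerImpl(backend: SchedulerBackend) extends TaskScheduler {
      // Delegate to the backend rather than exposing it to SparkContext.
      // Assumes the backend supports these operations (illustrative only).
      override def requestExecutors(n: Int): Boolean = backend.requestExecutors(n)
      override def killExecutors(ids: Seq[String]): Boolean = backend.killExecutors(ids)
    }
    ```

    With this shape, `createTaskScheduler` could keep returning just a `TaskScheduler`, and callers would never touch the backend directly.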

