Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3204#discussion_r20325675
  
    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -217,14 +223,24 @@ private[spark] class ExecutorAllocationManager(sc: SparkContext) extends Logging
           numExecutorsToAdd = 1
           return 0
         }
    +    val maxExecutorsPending = maxPendingExecutors(sc.taskScheduler.numPendingTasks)
    --- End diff --
    
    Hey @sryza it's possible to estimate the size of the queue pretty well from
the data in `ExecutorAllocationListener`; we would just ignore the effect of
speculative and re-submitted tasks (and I think this is a small enough margin
that it's not a big deal). I think you can just add a function called
`getPendingTasks` to `ExecutorAllocationListener` that goes through each
stage and subtracts the number of distinct task indices started from the number
of tasks in the stage. This would retain the isolation of these two components
and sacrifice only a small amount of accuracy.
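    The suggested counting could be sketched roughly like this (a minimal
standalone sketch, not the actual Spark listener: the class name, the
`stageIdToNumTasks`/`stageIdToTaskIndices` maps, and the `onTaskStart`
signature are all assumptions for illustration):

    ```scala
    import scala.collection.mutable

    // Hypothetical stand-in for the bookkeeping ExecutorAllocationListener
    // could maintain from stage-submitted and task-start events.
    class ExecutorAllocationListenerSketch {
      // stageId -> total number of tasks in the stage
      val stageIdToNumTasks = new mutable.HashMap[Int, Int]
      // stageId -> distinct task indices that have started
      val stageIdToTaskIndices = new mutable.HashMap[Int, mutable.HashSet[Int]]

      // Record a task start; a HashSet makes re-submitted or speculative
      // copies of the same index count only once.
      def onTaskStart(stageId: Int, taskIndex: Int): Unit = {
        stageIdToTaskIndices
          .getOrElseUpdate(stageId, new mutable.HashSet[Int]) += taskIndex
      }

      // Pending tasks = for each stage, total tasks minus the number of
      // distinct indices that have already started.
      def getPendingTasks: Int = {
        stageIdToNumTasks.map { case (stageId, numTasks) =>
          numTasks - stageIdToTaskIndices.get(stageId).map(_.size).getOrElse(0)
        }.sum
      }
    }
    ```

    The distinct-index set is what makes the estimate ignore speculative and
re-submitted copies of a task, which is the small inaccuracy the comment
accepts.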

