Thanks for the responses, all.  I may have worded my original email poorly
-- I don't want to focus too much on SPARK-14649 and SPARK-13669 in
particular, but more on how we should approach these changes.

On Mon, Mar 27, 2017 at 9:01 PM, Kay Ousterhout <k...@eecs.berkeley.edu>
wrote:

> (1) I'm pretty hesitant to merge these larger changes, even if they're
> feature flagged, because:
>    (a) For some of these changes, it's not obvious that they'll always
> improve performance. e.g., for SPARK-14649, it's possible that the tasks
> that got re-started (and temporarily are running in two places) are going
> to fail in the first attempt (because they haven't read the missing map
> output yet).  In that case, not re-starting them will lead to worse
> performance.
>    (b) The scheduler already has some secret flags that aren't documented
> and are used by only a few people.  I'd like to avoid adding more of these
> (e.g., by merging these features, but having them off by default), because
> very few users use them (since it's hard to learn about them), they add
> complexity to the scheduler that we have to maintain, and for users who are
> considering using them, they often hide advanced behavior that's hard to
> reason about anyway (e.g., the point above for SPARK-14649).
>

We definitely need to evaluate each change on a case-by-case basis, and
decide whether there really is a potential benefit worth the complexity.

But I'm actually wondering whether we need to be much more open on point
(b).  The model Spark currently uses has been sufficient so far, but we're
increasingly seeing Spark used on bigger clusters, with more varied
deployment types, and I'm not convinced we really know what the right
answer is.  It's really hard to find behavior that will be optimal for
everyone -- it's not just a matter of looking at complexity or doing
microbenchmarks.  Lots of configurations are a pain, as you say --
complexity we have to manage in the code, and even users have to be pretty
advanced to understand how to use them.  But I'm just not confident that we
know the right behavior for setups that range from small clusters; to large
clusters with heterogeneous hardware, ongoing maintenance, and various
levels of in-house ops support; to spot instances in the cloud.

For example, let's say users start pushing Spark onto more spot instances,
and see 75% of executors die, but still want Spark to make reasonable
progress.  I can imagine wanting some large changes in behavior to support
that.  I don't think we're going to have enough info on the exact
characteristics of spot instances and what the best behavior is -- there
will probably be some point where we turn users loose with some knobs,
and hopefully, after some experience in the wild, we can come up with
recommended settings.  Furthermore, given that our tests alone don't lead
to a ton of confidence, it feels like we *have* to turn some of these
things over to users to bang on for a while, with the old behavior still
available to most users.
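To make the "old behavior still available" idea concrete, here is a minimal sketch of what such a feature flag could look like. This is not actual Spark code: the config key `scheduler.restartRunningTasksOnFetchFailure` and the `shouldRestart` helper are invented for illustration, assuming a flag that defaults to the existing behavior so only opted-in users exercise the new path.

```java
import java.util.Map;

// Hypothetical sketch (not actual Spark code): gating a new scheduler
// behavior behind a flag that defaults to the old behavior, so early
// adopters can bang on it while everyone else keeps the proven code path.
public class FeatureFlagSketch {
    // Invented config key for illustration; Spark's real conf keys differ.
    static final String FLAG = "scheduler.restartRunningTasksOnFetchFailure";

    static boolean shouldRestart(Map<String, String> conf, boolean taskIsRunning) {
        boolean newBehavior = Boolean.parseBoolean(conf.getOrDefault(FLAG, "false"));
        // New behavior: eagerly re-launch tasks that may hit the missing
        // map output.  Old behavior (the default): leave running attempts alone.
        return newBehavior && taskIsRunning;
    }

    public static void main(String[] args) {
        // With the flag unset, behavior is unchanged.
        assert !shouldRestart(Map.of(), true);
        // Opting in enables the new path.
        assert shouldRestart(Map.of(FLAG, "true"), true);
        System.out.println("old behavior preserved by default");
    }
}
```

The point of the sketch is only the default: whatever the new behavior is, the flag's default keeps today's semantics, so the risk is carried by the users who chose to turn it on.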

(I agree the benefit of SPARK-14649 in particular is unclear, and I'm not
advocating one way or the other on that particular change here, just trying
to set up the right frame of mind for considering it.)


>    (c) The worst performance problem is when jobs just hang or crash;
> we've seen a few cases of that in recent bugs, and I'm worried that merging
> these complex performance improvements trades better performance in a small
> number of cases for the possibility of worse performance via job
> crashes/hangs in other cases.
>

I completely agree with this.  Obviously I'm contradicting my response to
(b) above, which is why I'm torn.  I'm not sure of a way to get both
simultaneously, given the current state of the code.  Even under a
feature flag, the changes I'm thinking of are invasive enough that they
introduce a risk of bugs even in the old behavior.


> Roughly I think our standards for merging performance fixes to the
> scheduler should be that the performance improvement either (a) is simple /
> easy to reason about or (b) unambiguously fixes a serious performance
> problem.  In the case of SPARK-14649, for example, it is complex, and
> improves performance in some cases but hurts it in others, so doesn't fit
> either (a) or (b).
>

In the past, I would have totally agreed with you; in fact, I think I've
been one to advocate caution with changes.  But I'm slowly coming to think
that we need to be less strict about "unambiguously fixes a serious
performance problem".  I would phrase it more like "clear demonstration of
a use case with a significant performance problem".  I know I'm now
quibbling over vague adjectives, but hopefully that conveys my sentiment.

> (2) I do think there are some scheduler re-factorings that would improve
> testability and our ability to reason about correctness, but think there
> are somewhat surgical, smaller things we could do in the vein of Imran's
> comment about reducing shared state.  Right now we have these super wide
> interfaces between different components of the scheduler, and it means you
> have to reason about the TSM, TSI, CGSB, and DAGSched to figure out whether
> something works.
>

don't forget the OutputCommitCoordinator :)


>   I think we could have an effort to make each component have a much
> narrower interface, so that each part hides a bunch of complexity from
> other components.  The most obvious place to do this in the short term is
> to remove a bunch of info tracking from the DAGScheduler; I filed a JIRA
> for that here <https://issues.apache.org/jira/browse/SPARK-20116>.  I
> suspect there are similar things that could be done in other parts of the
> scheduler.
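The "narrower interface" idea above could be sketched like this. The names (`StageProgress`, `TaskSetTracker`) are invented for illustration and are not Spark's real APIs: the idea is that a consumer like the DAGScheduler would see only a small read-only view instead of reaching into another component's mutable bookkeeping.

```java
// Hypothetical sketch of narrowing a scheduler interface.  Instead of one
// component reading another's mutable state directly, consumers get only
// a small read-only view.  Names are invented, not Spark's real APIs.
interface StageProgress {
    boolean isComplete();
    int pendingTasks();
}

// The tracker keeps its bookkeeping private and exposes only the narrow view.
class TaskSetTracker implements StageProgress {
    private final int totalTasks;
    private int finished = 0;   // hidden from other scheduler components

    TaskSetTracker(int totalTasks) { this.totalTasks = totalTasks; }

    void taskFinished() { finished++; }

    public boolean isComplete() { return finished == totalTasks; }
    public int pendingTasks() { return totalTasks - finished; }
}

public class NarrowInterfaceSketch {
    public static void main(String[] args) {
        TaskSetTracker tracker = new TaskSetTracker(2);
        tracker.taskFinished();
        StageProgress view = tracker;   // consumers only see the narrow view
        assert !view.isComplete() && view.pendingTasks() == 1;
        tracker.taskFinished();
        assert view.isComplete();
        System.out.println("internal bookkeeping stays hidden");
    }
}
```

With a view like this, reasoning about "whether something works" means reading one small interface rather than tracing shared mutable state across several components.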
>

Yeah, this seems like a good idea.  Hopefully that would also improve
testability.


> On Mon, Mar 27, 2017 at 11:06 AM, Tom Graves <tgraves...@yahoo.com> wrote:
>
>> I don't know whether it needs an entire rewrite, but I think there need to
>> be some major changes made, especially in the handling of reduces and fetch
>> failures.  We could do a much better job of not throwing away existing work
>> and more optimally handling the failure cases.  For this, would it make
>> sense for us to start with a JIRA that has a bullet list of things we would
>> like to improve, to get a more cohesive view and see how invasive it
>> would really be?
>>
>
I agree that sounds like a good idea.  I think Sital is going to take a
shot at a design doc coming out of the discussion for changes related
to SPARK-14649.  But it would be great to get multiple folks thinking about
that list as everyone will have different use cases in mind.

Thanks, everyone, for the input.

Imran
