On Wed, Sep 26, 2012 at 12:41:10PM -0700, Philip Guenther wrote:
> On Wed, Sep 26, 2012 at 12:01 PM, Ted Unangst <[email protected]> wrote:
> > I don't see what we gain by killing jobs.  If the scheduler dice had
> > come down differently, maybe those jobs would finish.
> >
> > Here's a downside, albeit maybe a stretch.  What if the job doesn't
> > like being killed?  You're changing behavior here.  Previously, the
> > only way a job was interrupted was if the operator did it.  In that
> > case, I will pick up the pieces.  I think letting the running jobs
> > finish is actually a better match to the sequential make's behavior.
> 
> +1

Actually, I do see what we gain by killing jobs.

You probably don't see the difference, because you run short stuff with
too few jobs. But for long-running stuff, you will sometimes have an
error and notice it only a few minutes afterwards, 5000 lines of scrollback
later, when the other jobs that were running finally reach completion...

I see this all the time when working on large stuff.

Seriously, try both. You have access to the "quick death" behavior right now.
Try it on your normal work. Try the normal behavior as well.

Yep, there's the downside that you have to pick up the pieces... that
almost never happens to me. Not any more often than the usual "pick up the
pieces because the build crashed, the makefile is bad, and you have to erase
lots of shit before you restart".

The way I see it, there are actually arguments both ways. I'm not talking
about hard-killing jobs, just SIGINT... build programs usually react
correctly to SIGINT (gcc does)... and we're also talking about error
detection and fixing.

Play with both, please. This is definitely not a theoretical argument I'm
trying to make...
