> On Mar 6, 2017, at 1:17 PM, Andrew Wang <andrew.w...@cloudera.com> wrote:
> 
> Do you have a link to your old job somewhere?

        Nope, but it’s trivial to write: a single job, pinned to H9, that 
removes the other job’s workspace dir.  You can also try using the “Wipe out 
current workspace” button.
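
        A minimal sketch of such a cleanup job’s build step might look like 
this (the function name and path are illustrative, not from an actual job 
config):

```shell
#!/usr/bin/env bash
# Hypothetical one-off cleanup job: remove a stuck job's workspace on this
# node. The path passed in is an assumption; adjust to the real Jenkins layout.
set -euo pipefail

nuke_workspace() {
  local ws="$1"
  if [ -d "$ws" ]; then
    # Restore permissions first: rm cannot descend into non-executable dirs,
    # which is exactly the state an aborted build can leave behind.
    chmod -R u+rwx "$ws"
    rm -rf "$ws"
  fi
}
```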

> I'm also wondering what causes this; does this issue surface in the same way 
> each time?

        It’s usually a job that writes non-executable directories and then 
gets aborted in a weird way, so the cleanup chmod never triggers.  git then 
can’t delete those directories on the next job.  If that’s what’s happening, 
it’s fundamentally a bug in the Hadoop unit tests.
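
        The usual workaround for that failure mode is a pre-clean step along 
these lines, sketched here with illustrative paths rather than the actual 
Jenkins config:

```shell
#!/usr/bin/env bash
# Hedged sketch of a pre-build cleanup step: git clean fails on directories
# that lack the execute bit, so fix permissions across the tree before asking
# git to remove untracked build output.
set -e

pre_clean() {
  # Grant u+rwx top-down so git can traverse and unlink everything;
  # ignore chmod errors on files we don't own.
  chmod -R u+rwx . || true
  # Remove all untracked files and directories, including ignored ones.
  git clean -xdf
}
```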

> Also wondering, should we nuke the workspace before every run, for improved 
> reliability?

        It would mean a clone every time, which would put a considerable load 
on the ASF git servers on busy days.


---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
