Thanks Allen,

Some of our nodes are only 364GB in total size, so you can see that this is
an issue.
For the H0-H12 nodes we are currently fine with the 2.4/2.6TB disks, so the
urgency is on the Hadoop nodes H13-H18 and the non-Hadoop nodes.

I therefore propose that the H0-H12 workspaces be trimmed on a monthly basis
using mtime +31, and that H13-H18 plus the remaining nodes with 500GB disks or
less be trimmed weekly.
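
To make the mechanics concrete, below is a rough sketch of the kind of cron'd
cleanup I have in mind - the workspace path and the weekly cutoff are
assumptions on my part, not something already deployed:

  # Monthly on H0-H12: remove top-level workspace dirs untouched for 31+ days
  # (path is an assumption; adjust to wherever the agent workspaces live)
  find /home/jenkins/jenkins-slave/workspace -mindepth 1 -maxdepth 1 \
       -type d -mtime +31 -exec rm -rf {} +

  # Weekly on H13-H18 and the <=500GB nodes: same idea, shorter retention
  # (exact cutoff still to be agreed; +7 shown purely as an example)
  find /home/jenkins/jenkins-slave/workspace -mindepth 1 -maxdepth 1 \
       -type d -mtime +7 -exec rm -rf {} +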

Sounds reasonable?

Thanks

Gav...


On Tue, Jul 24, 2018 at 9:30 AM Allen Wittenauer
<a...@effectivemachines.com.invalid> wrote:

>
> > On Jul 23, 2018, at 4:07 PM, Gav <ipv6g...@gmail.com> wrote:
> >
> > You are trading latency for disk space here.
>
>         For the builds I’m aware of, without a doubt.  But that’s not
> necessarily true for all jobs.  As Jason Kuster pointed out, in some cases
> one may be choosing reliability for disk space. (But I guess that
> reliability depends upon the node. :) )
>
> > Just how long are you proposing that workspaces be kept for  -
> considering
> > that the non hadoop nodes are running out of disk every day and
> workspaces
> > of projects are exceeding 300GB in size, that seems totally over the top
> in
> > order to keep a local cache around to save a bit of time.
>
>         300GB spread across how many jobs though?  All of them? If 300
> jobs are using 1G each, that sounds amazingly good given the size of just
> git repos may eat that much space on super active ones.  If it’s a single
> workspace hitting those numbers, then yes, that’s problematic.
>
>         Have you tried talking to the owners of the bigger space hogs
> individually?  I’d be greatly surprised if the majority of people relying
> upon Jenkins actually read builds@.  They are likely unaware their stuff is
> breaking the universe.
>
>
>

-- 
Gav...
