> On Jul 23, 2018, at 4:07 PM, Gav <ipv6g...@gmail.com> wrote:
>
> You are trading latency for disk space here.
For the builds I’m aware of, without a doubt. But that’s not necessarily true for all jobs. As Jason Kuster pointed out, in some cases one may be trading disk space for reliability. (But I guess that reliability depends upon the node. :) )

> Just how long are you proposing that workspaces be kept for - considering
> that the non hadoop nodes are running out of disk every day and workspaces
> of projects are exceeding 300GB in size, that seems totally over the top in
> order to keep a local cache around to save a bit of time.

300GB spread across how many jobs, though? All of them? If 300 jobs are using 1GB each, that sounds amazingly good, given that the git repos alone may eat that much space on the super active ones. If it’s a single workspace hitting those numbers, then yes, that’s problematic.

Have you tried talking to the owners of the bigger space hogs individually? I’d be greatly surprised if the majority of people relying upon Jenkins actually read builds@. They are likely unaware their stuff is breaking the universe.
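As a starting point for finding those owners, something like the sketch below could rank workspaces on a node by size. To be clear, this is just a rough Python sketch, not anything in our tooling; WORKSPACE_ROOT is my guess at the node layout, and a plain "du -s * | sort -rn" in the workspace directory gets you the same answer with less typing.

import os
from pathlib import Path

# Guess at where a node keeps its workspaces; adjust per node.
WORKSPACE_ROOT = Path("/home/jenkins/workspace")

def dir_size(path: Path) -> int:
    """Total size in bytes of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # broken symlinks, files racing with running builds
    return total

# Print the twenty biggest workspaces, largest first.
sizes = sorted(
    ((dir_size(p), p.name) for p in WORKSPACE_ROOT.iterdir() if p.is_dir()),
    reverse=True,
)
for size, job in sizes[:20]:
    print(f"{size / 2**30:7.1f} GiB  {job}")

Run that on the worst nodes and you’d have a short, concrete list of projects to email directly instead of hoping they see this thread.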