* Linus Torvalds <[EMAIL PROTECTED]> wrote:

> > Also, with a 'replicate the full object on every 8th commit'
> > rule the risk would be somewhat mitigated as well.
>
> ..but not the complexity.
>
> The fact is, I want to trust this thing. Dammit, one reason I like GIT
> is that I can mentally visualize the whole damn tree, and each step is
> so _simple_. That's extra important when the object database itself is
> so inscrutable - unlike CVS or SCCS or formats like that, it's damn
> hard to visualize from looking at a directory listing.
ok. Meanwhile i found another counter-argument: the average committed
file size is 36K, which with gzip -9 would compress down to roughly 8K,
i.e. two 4K filesystem blocks, with the commit message being another
block. That's 2+1 blocks used per commit, while with deltas one could at
most cut this down to 1+1+1 blocks (each object still occupies at least
one block) - just as much space! So we would be almost even with the
more complex delta approach, just by increasing the default gzip
compression level from 6 to 9. (but even with the default we are not
that bad.) case closed i guess.

(The network bandwidth issue can/could indeed be solved independently,
without any impact to the fundamentals, as you suggested.)

	Ingo
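P.S.: a quick back-of-the-envelope sketch of the block arithmetic above,
assuming 4K filesystem blocks and that every object (blob, tree, commit,
delta) lives in its own compressed file; the small-object byte counts
below are placeholders, not measurements:

/*
 * block accounting: full objects per commit vs. a delta chain,
 * assuming 4K filesystem blocks and an average 36K file that
 * gzip -9 shrinks to ~8K.
 */
#include <stdio.h>

#define BLOCK_SIZE 4096

/* round a byte count up to whole filesystem blocks */
static unsigned long blocks(unsigned long bytes)
{
	return (bytes + BLOCK_SIZE - 1) / BLOCK_SIZE;
}

int main(void)
{
	unsigned long full_blob  = blocks(8 * 1024); /* 36K blob after gzip -9 */
	unsigned long commit_obj = blocks(200);      /* small commit message   */
	unsigned long delta_obj  = blocks(300);      /* small delta, 1 block   */
	unsigned long tree_obj   = blocks(300);      /* tree object, 1 block   */

	printf("full objects: %lu blocks per commit\n",
	       full_blob + commit_obj);              /* 2 + 1 = 3 */
	printf("delta chain:  %lu blocks per commit\n",
	       delta_obj + tree_obj + commit_obj);   /* 1 + 1 + 1 = 3 */
	return 0;
}

i.e. the delta scheme cannot get below one block per object, so the two
approaches end up using the same number of blocks per commit.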