Taylor Blau wrote:
> On Sun, Sep 16, 2018 at 03:05:48PM -0700, Jonathan Nieder wrote:
>> On Sun, Sep 16, 2018 at 11:17:27AM -0700, John Austin wrote:
>>> Taylor Blau wrote:

>>>> Right, though this still subjects the remote copy to all of the
>>>> difficulty of packing large objects (though Christian's work to support
>>>> other object database implementations would go a long way toward
>>>> helping with this).
>>> Ah, interesting -- I didn't realize this step was part of the
>>> bottleneck. I presumed git didn't do much more than perhaps gzip
>>> binary files when it packed them up. Or do you mean the growing cost
>>> of storing the objects locally as you work? Perhaps that could be
>>> solved by giving the client more control (i.e. letting it delete the
>>> oldest blobs that already exist on the server).
>>
>> John, I believe you are correct.  Taylor, can you elaborate on what
>> packing overhead you are referring to?
>
> Jonathan, you are right. I was also referring to the increased time
> that Git would spend trying to find good delta chains for larger,
> non-textual objects. I haven't done any hard benchmarking work on this,
> so it may be a moot point.

Ah, thanks.  See git-config(1):

        core.bigFileThreshold
                Files larger than this size are stored deflated,
                without attempting delta compression.

                Default is 512 MiB on all platforms.

If that threshold isn't being respected on your machine, then it would
be a bug, and we'd definitely want to know.
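
If you want to check or experiment, something like the following should
work (the 100m value is only an example, not a recommendation):

        # Show the configured value; no output means the 512 MiB
        # default applies.
        git config core.bigFileThreshold

        # Lower the threshold for a repository with many large binaries.
        git config core.bigFileThreshold 100m

        # Rough way to see how much time delta search actually costs:
        # -f makes repack recompute deltas instead of reusing existing
        # ones.
        time git repack -adf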

Jonathan
