On 6/5/2017 8:04 AM, Lars Schneider wrote:

On 01 Jun 2017, at 15:33, Ben Peart <peart...@gmail.com> wrote:



On 6/1/2017 8:48 AM, Lars Schneider wrote:
Hi,
we occasionally see "The remote end hung up unexpectedly" (pkt-line.c:265)
on our `git fetch` calls (most noticeably in our automation). I suspect
random network glitches are the cause.
In some places we have added a basic retry mechanism, and I was wondering
whether this could be a useful feature for Git itself.

Having a configurable retry mechanism makes sense, especially if it allows
continuing an in-progress download rather than aborting and starting over. I
would make it off by default so that any existing higher-level retry mechanism
doesn't trigger a retry storm when the problem isn't a transient network glitch.
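
To illustrate what I mean, here is a minimal external wrapper in Python
(just a sketch; the function name and the max_attempts/base_delay knobs
are made up and are not git options). Note that it has to restart the
transfer from scratch on every attempt, which is exactly the waste that
continuing an in-progress download would avoid:

import subprocess
import time

def fetch_with_retry(remote="origin", max_attempts=3, base_delay=2.0):
    # Retry 'git fetch' with exponential backoff. A transient network
    # failure surfaces as a non-zero exit code, which we treat as
    # retryable; each attempt restarts the download from zero.
    for attempt in range(max_attempts):
        if subprocess.run(["git", "fetch", remote]).returncode == 0:
            return
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, ...
    raise RuntimeError("git fetch failed after %d attempts" % max_attempts)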

Agreed.


Internally we use a tool
(https://github.com/Microsoft/GVFS/tree/master/GVFS/FastFetch) to perform
fetches for our build machines. It has several advantages, including retrying
when downloading pack files.

That's a "drop-in" replacement for "git fetch"?! I looked a bit through the
"git fetch" code and retry (especially with continuing in-progress downloads)
looks like a bigger change than I expected because of the current "die()
in case of error" implementation.


No, not a drop-in replacement. We only use this on build machines, which
don't need history, so it only pulls down the tip commit on the initial
clone. This is a big win on large repos with a lot of history, but not so
great for developer machines, where history may be desired.


Its biggest advantage is that it uses multiple threads to parallelize the
entire fetch and checkout operation from end to end (i.e., the download
happens in parallel, and checkout happens in parallel with the download),
which makes it take a fraction of the overall time.
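
Reduced to a toy sketch in Python (only an illustration of the pipelining
idea, not FastFetch's actual code; fetch_chunk and checkout are hypothetical
stand-ins, and the real tool runs multiple threads in each stage), it is a
producer/consumer pipeline where checkout starts as soon as the first pack
arrives:

import queue
import threading
import time

def fetch_chunk(i):
    time.sleep(0.1)  # stand-in for downloading one pack file
    return "pack-%d" % i

def checkout(pack):
    time.sleep(0.1)  # stand-in for writing the pack's blobs to the work tree

def fetch_and_checkout(n_packs=8):
    q = queue.Queue()

    def downloader():
        for i in range(n_packs):
            q.put(fetch_chunk(i))
        q.put(None)  # sentinel: download finished

    def checkout_worker():
        while True:
            pack = q.get()
            if pack is None:
                break
            checkout(pack)

    dl = threading.Thread(target=downloader)
    co = threading.Thread(target=checkout_worker)
    dl.start()
    co.start()
    dl.join()
    co.join()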

Interesting. Do you observe noticeable speed improvements with delta fetch
updates, too? Those are usually fast enough for us.

Since we have our build machines set up to use it for the clone, we kept
using it for delta updates. When deltas get large (and with thousands of
developers pushing, that can happen pretty quickly), it is still a nice
perf win.


The people I work with usually complain that the "clone operation" is slow.
The reason is that they clone over and over again to get a "clean checkout".
In that case I try to explain to them that every machine should clone only
once and that there are far more efficient ways to get a clean checkout.
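
For example, resetting the existing clone in place is usually much faster
than re-cloning. A minimal sketch in Python driving the standard git
commands (run from inside the work tree):

import subprocess

# Discard local changes to tracked files ...
subprocess.run(["git", "reset", "--hard", "HEAD"], check=True)
# ... and remove untracked and ignored files and directories (-d -f -x).
subprocess.run(["git", "clean", "-dfx"], check=True)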


When time permits, I hope to bring some of these enhancements over into Git
itself.

That would be great!


- Lars
