> So we process the packfile URIs one by one as we receive them from the
> server? If we expect these packfiles to be large (otherwise why are we
> bothering to offload them to the CDN), is there a risk that the
> connection to the server might time out while we're downloading from the
> CDN?

You're right that this is undesirable; it's one of the things I will fix,
as I mention in the cover letter ("starting CDN downloads...") [1].
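
To make the intended ordering concrete, here is a rough, standalone sketch
(not the actual fetch-pack code; download_from_cdn() is a made-up stand-in
for the real download step): buffer every advertised packfile URI while the
first HTTP response is being read, and only start the CDN downloads once
that response has been fully consumed, so the server connection is never
held open while large packfiles are in flight.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_URIS 64

/* Stand-in for the real CDN download; in the series this is delegated
 * to a separate process. */
static int download_from_cdn(const char *uri)
{
        printf("downloading %s\n", uri);
        return 0;
}

int main(void)
{
        char *uris[MAX_URIS];
        size_t nr = 0;
        char line[4096];

        /* Phase 1: read the advertised packfile URIs (stdin stands in
         * for the server response) and only record them. */
        while (nr < MAX_URIS && fgets(line, sizeof(line), stdin)) {
                line[strcspn(line, "\n")] = '\0';
                uris[nr++] = strdup(line);
        }

        /* Phase 2: the first request is fully consumed, so the
         * connection to the server can be closed here; only now start
         * the CDN downloads. */
        for (size_t i = 0; i < nr; i++) {
                if (download_from_cdn(uris[i]))
                        fprintf(stderr, "failed: %s\n", uris[i]);
                free(uris[i]);
        }
        return 0;
}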

> Please take a look. Feel free to comment on anything, but I prefer
> comments on the major things first (e.g. my usage of a separate process
> (http-fetch) to fetch packfiles, since as far as I know, Git doesn't
> link to libcurl; any of the design decisions I described above). I know
> that there are some implementation details that could be improved (e.g.
> parallelization of the CDN downloads, starting CDN downloads *after*
> closing the first HTTP request, holding on to the .keep locks until
> after the refs are set), but will work on those once the overall design
> is more or less finalized.
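
On the separate-process point quoted above: since git itself does not link
to libcurl, the idea is to spawn a helper that does. Below is only an
illustrative sketch of that shape (the exact http-fetch command line is an
assumption for the example, not the one in the series): fork and exec
"git http-fetch" once per packfile URI and wait for it to exit.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Spawn the helper that actually links to libcurl and wait for it. */
static int run_http_fetch(const char *uri)
{
        pid_t pid = fork();
        int status;

        if (pid < 0)
                return -1;
        if (!pid) {
                /* Child: the "--packfile <uri>" invocation is only
                 * assumed here for illustration. */
                execlp("git", "git", "http-fetch", "--packfile", uri,
                       (char *)NULL);
                _exit(127); /* exec failed */
        }
        if (waitpid(pid, &status, 0) < 0)
                return -1;
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

int main(int argc, char **argv)
{
        int i;

        for (i = 1; i < argc; i++)
                if (run_http_fetch(argv[i]))
                        fprintf(stderr, "http-fetch failed for %s\n",
                                argv[i]);
        return 0;
}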

[1] https://public-inbox.org/git/20190301000954.ga47...@google.com/
