Hi Mark,

Mark H Weaver <m...@netris.org> skribis:
> For these reasons, I'm inclined to think that parallel downloads is the
> wrong approach.  If a single download process is not making efficient
> use of the available bandwidth, I'd be more inclined to look carefully
> at why it's failing to do so.  For example, I'm not sure if this is the
> case (and don't have time to look right now), but if the current code
> waits until a NAR has finished downloading before asking for the next
> one, that's an issue that could be fixed by use of HTTP pipelining,
> without multiplying the memory usage.

I agree.  There's HTTP pipelining for narinfos but not for nars.  Worse,
before fetching a nar, we do a GET /nix-cache-info, and in fact we spawn
a new 'guix substitute' process for each download (for "historical
reasons").  So there's room for optimization there!

Ludo'.
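To make the idea concrete, here is a minimal sketch (in Python rather than Guile, purely for illustration) of what reusing one connection for several nar downloads would look like, instead of spawning a fresh process, and thus a fresh connection plus an extra GET /nix-cache-info round trip, per download.  The connection object and paths are hypothetical; this is not the actual 'guix substitute' code.

```python
def fetch_many(conn, paths):
    """Fetch several resources sequentially over a single open
    HTTP connection, avoiding a reconnect per download.  `conn`
    is an http.client.HTTPConnection-like object."""
    bodies = []
    for path in paths:
        conn.request("GET", path)    # one request at a time, but
        resp = conn.getresponse()    # no new connection per nar
        bodies.append(resp.read())
    return bodies

# Hypothetical usage against a substitute server:
#   import http.client
#   conn = http.client.HTTPSConnection("substitutes.example.org")
#   nars = fetch_many(conn, ["/nar/gzip/abc...", "/nar/gzip/def..."])
```

This only removes the per-download connection setup; true pipelining (sending the next request before the previous response arrives) would go further, as Mark suggests, without multiplying memory usage the way parallel downloads would.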