Hi,

Nicolò Balzarotti <another...@gmail.com> skribis:
> I guess this benchmark follows the distri talk, doesn't it? :)

Yes, that and my own quest for optimization opportunities. :-)

> File size with zstd vs zstd -9 vs current lzip:
> - 71M uc.nar.lz
> - 87M uc.nar.zst-9
> - 97M uc.nar.zst-default

>> Where to go from here? Several options:
>>
>> 1. Since ci.guix.gnu.org still provides both gzip and lzip archives,
>>    ‘guix substitute’ could automatically pick one or the other
>>    depending on the CPU and bandwidth. Perhaps a simple trick would
>>    be to check the user/wall-clock time ratio and switch to gzip for
>>    subsequent downloads if that ratio is close to one. How well
>>    would that work?
>
> I'm not sure using heuristics (i.e., guessing what should work
> better, like in 1.) is the way to go, as temporary slowdowns to the
> network/CPU during the first download would affect the decision.

I suppose we could time each substitute download and adjust the choice
continually (a rough sketch is in the PS below).

It might be better to provide a command-line flag to choose between
optimizing for bandwidth usage (users with limited Internet access may
prefer that) or for speed.

>> 2. Use Zstd like all the cool kids since it seems to have a much
>>    higher decompression speed: <https://facebook.github.io/zstd/>.
>>    630 MB/s on ungoogled-chromium on my laptop. Woow.
>
> I know this means more work to do, but it seems to be the best
> alternative. However, if we go that way, will we keep lzip
> substitutes? The 20% difference in size between lzip/zstd would mean
> a lot with slow (mobile) network connections.

A lot in what sense? In terms of bandwidth usage, right?

In terms of speed, zstd would probably reduce the time-to-disk as soon
as you have ~15 MB/s peak bandwidth or more (back-of-envelope numbers
in the PPS below).

Anyway, we’re not there yet, but I suppose if we get zstd support, we
could configure berlin to keep lzip and zstd (rather than lzip and
gzip as is currently the case).

Ludo’.
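
PS: Here is a rough sketch of what the heuristic in point 1 could look
like. Everything in it is made up for illustration: the procedure name,
the 0.8 threshold, and the assumption that we can obtain user and
wall-clock times for each download.

  ;; Sketch only: if user time is close to wall-clock time, the first
  ;; download was CPU-bound (decompression dominated the transfer), so
  ;; switch to the cheaper-to-decompress format for subsequent
  ;; downloads; otherwise stay with lzip.
  (define (preferred-compression user-time wall-time)
    (if (> (/ user-time wall-time) 0.8)
        'gzip
        'lzip))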
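
PPS: The ~15 MB/s figure is a back-of-envelope estimate. It assumes
download and decompression happen sequentially, and uses the uc.nar
sizes above, the 630 MB/s zstd figure I measured, and an assumed lzip
decompression speed of ~60 MB/s (my guess, not a measurement):

  ;; Time-to-disk model: download time plus decompression time.
  ;; Sizes in MB, speeds in MB/s, result in seconds.
  (define (time-to-disk size bandwidth decompression-speed)
    (+ (/ size bandwidth)
       (/ size decompression-speed)))

  (time-to-disk 71.0 15.0 60.0)   ;; lzip:   71/15 + 71/60  ≈ 5.9
  (time-to-disk 87.0 15.0 630.0)  ;; zstd-9: 87/15 + 87/630 ≈ 5.9

Under those assumptions the two formats break even at ~15 MB/s; above
that, zstd's faster decompression more than offsets the 20% larger
archive.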