Hi Pierre,

Pierre Neidhardt <m...@ambrevar.xyz> skribis:

> Ludovic Courtès <l...@gnu.org> writes:
>
>> Well, ‘guix publish’ would first need to create multi-member archives,
>> right?
>
> Correct, but it's trivial once the bindings have been implemented.

OK.

>> Also, lzlib (which is what we use) does not implement parallel
>> decompression, AIUI.
>
> Yes it does: multi-member archives are a non-optional part of the lzip
> spec, and lzlib implements the whole spec.

Nice.

>> Even if it did, would we be able to take advantage of it?  Currently
>> ‘restore-file’ expects to read an archive stream sequentially.
>
> Yes, it works; I just tried this:
>
>   cat big-file.lz | plzip -d -o big-file -
>
> Decompression happens in parallel.
>
>> Even if I’m wrong :-), decompression speed would at best be doubled on
>> multi-core machines (wouldn’t help much on low-end ARM devices), and
>> that’s very little compared to the decompression speed achieved by
>> zstd.
>
> Why doubled?  If the archive has more than CORE-NUMBER members, then
> decompression time can be divided by CORE-NUMBER.

My laptop has 4 cores, so at best I’d get a 4x speedup, compared to the
10x speedup with zstd, which also comes with much lower resource usage,
etc.

> All that said, I think we should have both:
>
>   - Parallel lzip support is the easiest to add at this point.  It's
>     the best option for people with low bandwidth; this could benefit
>     most of the planet, I suppose.
>
>   - zstd is best for users with high bandwidth (or with slow hardware).
>     We need to write the necessary bindings though, so it will take a
>     bit more time.
>
> Then users can choose which compression they prefer, depending mostly
> on their hardware and bandwidth.

Would you like to give parallel lzip a try?

Thanks!

Ludo’.
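
P.S.: To illustrate what the ‘guix publish’ side would have to do, here
is a rough C sketch of multi-member compression written directly
against lzlib's own API; Guile bindings would essentially wrap these
calls.  The member size, buffer size, and compression parameters below
are made up for the example, and error handling is mostly omitted:

  #include <stdio.h>
  #include <stdlib.h>
  #include <stdint.h>
  #include <lzlib.h>

  /* Compress stdin to stdout as a multi-member lzip stream, starting
     a new member every MEMBER_SIZE input bytes, so that decompressors
     such as plzip can process the members in parallel.  */
  int
  main (void)
  {
    const unsigned long long member_size = 2 * 1024 * 1024; /* 2 MiB */
    /* 8 MiB dictionary, match length limit 36 (lzip's default level).  */
    struct LZ_Encoder *enc = LZ_compress_open (1 << 23, 36, member_size);
    uint8_t buf[65536];
    int eof = 0;

    if (enc == NULL || LZ_compress_errno (enc) != LZ_ok)
      return EXIT_FAILURE;

    while (!LZ_compress_finished (enc))
      {
        /* Feed input while the current member accepts more data;
           lzlib stops accepting once the member size limit is hit.  */
        while (!eof && LZ_compress_write_size (enc) > 0)
          {
            int max = LZ_compress_write_size (enc);
            if (max > (int) sizeof buf)
              max = sizeof buf;
            int n = fread (buf, 1, max, stdin);
            if (n > 0)
              LZ_compress_write (enc, buf, n);
            else
              {
                eof = 1;
                LZ_compress_finish (enc);  /* finish the last member */
              }
          }

        /* Drain the compressed bytes produced so far.  */
        int n = LZ_compress_read (enc, buf, sizeof buf);
        if (n < 0)
          return EXIT_FAILURE;
        fwrite (buf, 1, n, stdout);

        /* Current member fully read: begin the next one.  */
        if (LZ_compress_member_finished (enc)
            && !LZ_compress_finished (enc))
          LZ_compress_restart_member (enc, member_size);
      }

    LZ_compress_close (enc);
    return EXIT_SUCCESS;
  }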
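
And to double-check the parallel path without touching Guix at all:
plzip already cuts its output into members by default (one per data
block), so comparing against a single-member file made by plain lzip
should show the difference (file names here are just for the example):

  plzip -c big-file > multi.lz             # multi-member by default
  lzip -c big-file > single.lz             # one single member
  time plzip -d -c multi.lz > /dev/null    # members decoded in parallel
  time plzip -d -c single.lz > /dev/null   # limited to a single thread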
> Ludovic Courtès <l...@gnu.org> writes: > >> Well, ‘guix publish’ would first need to create multi-member archives, >> right? > > Correct, but it's trivial once the bindings have been implemented. OK. >> Also, lzlib (which is what we use) does not implement parallel >> decompression, AIUI. > > Yes it does, multi-member archives is a non-optional part of the Lzip > specs, and lzlib implemetns all the specs. Nice. >> Even if it did, would we be able to take advantage of it? Currently >> ‘restore-file’ expects to read an archive stream sequentially. > > Yes it works, I just tried this: > > cat big-file.lz | plzip -d -o big-file - > > Decompression happens in parallel. > >> Even if I’m wrong :-), decompression speed would at best be doubled on >> multi-core machines (wouldn’t help much on low-end ARM devices), and >> that’s very little compared to the decompression speed achieved by zstd. > > Why doubled? If the archive has more than CORE-NUMBER segments, then > the decompression duration can be divided by CORE-NUMBER. My laptop has 4 cores, so at best I’d get a 4x speedup, compared to the 10x speedup with zstd that also comes with much lower resource usage, etc. > All that said, I think we should have both: > > - Parallel lzip support is the easiest to add at this point. > It's the best option for people with low bandwidth. This can benefit > most of the planet I suppose. > > - zstd is best for users with high bandwidth (or with slow hardware). > We need to write the necessary bindings though, so it will take a bit > more time. > > Then the users can choose which compression they prefer, mostly > depending on their hardware and bandwidth. Would you like to give parallel lzip a try? Thanks! Ludo’.