On Thu, Sep 25, 2014 at 03:09:16PM +0800, Thomas Goirand wrote:
> On 09/25/2014 02:18 AM, Henrique de Moraes Holschuh wrote:
> > On Thu, 25 Sep 2014, Thomas Goirand wrote:
> >> On 09/02/2014 09:39 PM, Henrique de Moraes Holschuh wrote:
> >>> For -z9, it is as bad as ~670MiB to compress, and ~65MiB to
> >>> decompress.
> >>
> >> I'd say this really depends on what you do. For what I do (e.g.
> >> OpenStack packages), I don't see how 65MB could be a problem. I do
> >> compress with -z9, and have no intention to change this, because it
> >> makes sense for these packages, where the bottleneck for large
> >> deployments will be the network transfers rather than uncompressing
> >> on each individual node.
> >
> > OTOH, using -z9 on datasets smaller than the -z8 dictionary size *is*
> > a waste of memory.
>
> Exactly why should I care when there's every chance in the world that
> my users will have plenty of RAM?
Because you can't know what your users *actually* use. Let's say someone
wants to run OpenStack on a bunch of ARM devices or some such, and they
*don't* have two gigs of RAM? What about the buildd machines that your
packages are being built on? 670M is a lot of memory, especially if you
don't need it.

The "memory is cheap nowadays" argument is a fallacy, because it will
always be true: RAM has been getting cheaper since the 1940s, essentially.
That doesn't mean you should waste it for no better reason than "I'm
lazy".

-- 
It is easy to love a country that is famous for chocolate and beer
  -- Barack Obama, speaking in Brussels, Belgium, 2014-03-26
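
A minimal sketch of the dictionary-size point argued above, using Python's
lzma module (my own illustration, not anything from the thread), and
assuming the standard xz presets where -8 uses a 32 MiB dictionary and -9
a 64 MiB one: for a payload smaller than the -8 dictionary, the two
presets should compress to essentially the same size, while the -9 stream
still obliges every decompressor to reserve the larger dictionary.

  import lzma

  # Synthetic payload well below the 32 MiB dictionary of preset 8,
  # purely for illustration.
  data = (b"some reasonably compressible text " * 1024) * 32  # ~1 MiB

  for preset in (8, 9):
      compressed = lzma.compress(data, format=lzma.FORMAT_XZ, preset=preset)
      print(f"preset {preset}: {len(data)} -> {len(compressed)} bytes")

  # Expectation (an assumption, not a measurement): both presets give
  # essentially the same compressed size, since a dictionary larger than
  # the input cannot find extra matches -- yet the -9 stream still
  # declares the 64 MiB dictionary, which is what drives the ~65 MiB
  # decompression figure quoted earlier in the thread.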