On Mon, Jan 7, 2013 at 10:16 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Takeshi Yamamuro <yamamuro.take...@lab.ntt.co.jp> writes:
>> The attached is a patch to improve compression speeds with loss of
>> compression ratios in backend/utils/adt/pg_lzcompress.c.
>
> Why would that be a good tradeoff to make?  Larger stored values require
> more I/O, which is likely to swamp any CPU savings in the compression
> step.  Not to mention that a value once written may be read many times,
> so the extra I/O cost could be multiplied many times over later on.

I disagree.  pg compression is so awful it's almost never a net win;
I turn it off (per-column, via ALTER TABLE ... SET STORAGE EXTERNAL).

> Another thing to keep in mind is that the compression area in general
> is a minefield of patents.  We're fairly confident that pg_lzcompress
> as-is doesn't fall foul of any, but any significant change there would
> probably require more research.

A minefield of *expired* patents.  Fast LZ-based compression is used
all over the place -- for example by Lucene.

lz4 in particular is worth a look.
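The tradeoff being argued here -- spend less CPU per byte, accept a larger
stored value -- is easy to see empirically. A minimal sketch, using zlib's
compression levels as a stand-in for a fast LZ codec (lz4 and pglz are not
in the Python stdlib; the data and sizes below are illustrative only):

```python
import zlib

# Repetitive payload, roughly the kind of thing TOAST compression sees.
data = b"the quick brown fox jumps over the lazy dog " * 1000

fast = zlib.compress(data, 1)   # level 1: fast, weaker compression
best = zlib.compress(data, 9)   # level 9: slow, stronger compression

# Both round-trip losslessly; they differ only in size and CPU cost.
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
print(len(data), len(fast), len(best))
```

Whether the extra bytes from the fast setting cost more in I/O than the
saved CPU is exactly the question upthread, and it depends on the workload.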

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)