On 8/5/2007 6:30 PM, Tom Lane wrote:
Gregory Stark <[EMAIL PROTECTED]> writes:
> (Incidentally, this means what I said earlier about uselessly trying to
> compress objects below 256 is even grosser than I realized. If you have a
> single large object which even after compressing will be over the toast target
> it will force *every* va
"Tom Lane" <[EMAIL PROTECTED]> writes:
> This whole structure seems a bit broken, independently of whether the
> particular parameter values are good. If the compressor is given an
> input of 100 bytes and manages to compress it to 99 bytes,
> we'll store it compressed, and pay for decompression [...]
Tom Lane wrote:
>
> I'm inclined to think that the concept of force_input_size is wrong.
> Instead I suggest that we have a min_comp_rate (minimum percentage
> savings) and a min_savings (minimum absolute savings), and compress
> if either one is met.
Greg complained here
http://archives.postgresql.org/pgsql-patches/2007-07/msg00342.php
that the default strategy parameters used by the TOAST compressor
might need some adjustment. After thinking about it a little I wonder
whether they're not even more broken than that. The present behavior
is: