On 07/03/14 22:51, David Wall wrote:
> 
> On 7/3/2014 11:47 AM, Eduardo Morras wrote:
>> No, there's nothing wrong. All transparently compressed objects
>> stored in the database (TOAST, large objects, etc.) are transparently
>> decompressed when pg_dump accesses them, and then you gzip the output
>> again. I don't know why it doesn't dump the compressed data directly.
> 
> That sounds odd, but if pg_dump decompresses the large objects and then 
> I gzip them on backup, doesn't the same more or less happen in reverse 
> when I pg_restore them?  I mean, I gunzip the backup and then pg_restore 
> must compress the large objects when it writes them back.

I believe there is a little confusion here.  (If not, I apologize in
advance :-) ).

There are at least 3 forms of (de)compression going on here:
   1) Application level:  The data itself is presented to the
      application in a compressed format (image file, etc.) or
      it is compressed by the application prior to presentation to PG.
      The fact that this data is compressed is not relevant to the
      following;

   2) PG server-side: Automated TOAST (de)compression, which may or
      may not be enabled for a table/column.

   3) Dump/restore tools:  pg_dump/pg_restore and any external tools
      you use to (de)compress the DB backup.  Without due care this
      can lead to multiple attempts to (de)compress the same data,
      as we've seen (see the sketch after this list). :-)

Regardless of what happens in 2) and 3), the application-level
compression state in 1) is not affected.

Thus a blob that represents a gzip'd file presented to PG in 1) will
remain compressed through all 3 stages.  It is *not* auto-magically
uncompressed at any stage.

I believe Eduardo was referring to 2), which, because it is an
automated, transparent, server-side operation, needs to be performed
even as part of a dump/restore.
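
As an aside, you can tell the server not to attempt 2) on a column
whose contents are already compressed by the application.  A minimal
sketch, assuming a made-up table "images" with a bytea column "data":

   -- EXTERNAL = keep large values out-of-line in the TOAST table but
   -- skip compression; sensible when the app stores gzip'd blobs
   ALTER TABLE images ALTER COLUMN data SET STORAGE EXTERNAL;

   -- verify: attstorage 'e' = external, 'x' = extended (compressible)
   SELECT attname, attstorage
     FROM pg_attribute
    WHERE attrelid = 'images'::regclass AND attname = 'data';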

HTH,
Bosco.

