On 02/24/11 10:55 PM, Yang Zhang wrote:
For various workloads, compression could be a win on both disk space
and speed (see, e.g.,
http://blog.oskarsson.nu/2009/03/hadoop-feat-lzo-save-disk-space-and.html).
  I realize PostgreSQL doesn't have general table compression a la
InnoDB's row_format=compressed (there's TOAST for large values and
there's some old discussion on
http://wiki.postgresql.org/wiki/CompressedTables), but I thought I'd
ask: anybody tried to compress their PG data somehow?  E.g., any
positive experiences running PG on a compressed filesystem (and any
caveats)?  Anecdotal stories of the effects of app-level large-field
compression in analytical workloads (though I'd be curious about
transactional workloads as well)?  Thanks in advance.
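For the app-level large-field compression the question asks about, a minimal sketch using Python's standard zlib (the field contents are a made-up repetitive analytical payload; in practice the compressed bytes would go into a bytea column):

```python
import zlib

# Hypothetical large text field -- the repetitive, analytical kind of
# payload where application-level compression tends to pay off.
field = ("2011-02-24,orders,status=shipped,region=us-east\n" * 1000).encode()

compressed = zlib.compress(field, 6)   # store these bytes in a bytea column
restored = zlib.decompress(compressed) # decompress after reading back

assert restored == field
print(f"raw: {len(field)} bytes, compressed: {len(compressed)} bytes")
```

Note that TOAST already applies a similar (pglz) compression transparently for values over a threshold, so app-level compression mainly helps when you want a stronger codec or control over what gets compressed.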


Compressed file systems tend to perform poorly on random 8K block writes, and transactional databases do a lot of those (PostgreSQL's default page size is 8K).
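A minimal sketch of that access pattern, assuming a POSIX system (block count and iteration count are arbitrary): overwrite 8K pages at random offsets in a preallocated file, the way a database's writer process does. On a compressed filesystem each such overwrite can force the block to be recompressed and possibly relocated, which is where the slowdown comes from.

```python
import os
import random
import tempfile

BLOCK = 8192    # PostgreSQL's default page size
NBLOCKS = 256   # arbitrary file size for the sketch: 2 MB

fd, path = tempfile.mkstemp()
try:
    os.pwrite(fd, b"\0" * (BLOCK * NBLOCKS), 0)  # preallocate the file
    rng = random.Random(42)
    for _ in range(1000):
        page = rng.randrange(NBLOCKS)
        # Random in-place 8K overwrite -- the pattern that hurts
        # compressed filesystems.
        os.pwrite(fd, os.urandom(BLOCK), page * BLOCK)
    os.fsync(fd)
    size = os.fstat(fd).st_size
finally:
    os.close(fd)
    os.remove(path)
```

Sequential bulk loads or append-mostly analytical workloads avoid this pattern, which is why compression fares better there.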




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
