On Wed, May 17, 2006 at 10:06:04AM +0200, Martijn van Oosterhout wrote:
> On Wed, May 17, 2006 at 09:45:35AM +0200, Albe Laurenz wrote:
> > Oracle's compression seems to work as follows:
> > - At the beginning of each data block, there is a 'lookup table'
> >   containing values that occur frequently in that block's table entries.
> > - This lookup table is referenced from within the block.
> 
> Clever idea, pity we can't use it (what's the bet it's patented?). I'd
> wager anything beyond simple compression is patented by someone.
> 
> The biggest issue is really that once postgres reads a block from disk
> and uncompresses it, this block will be much larger than 8K. Somehow
> you have to arrange storage for this.
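
For what it's worth, the scheme Albe describes amounts to a per-block
dictionary: a small table of frequent values at the start of the block,
with row data storing a one-byte slot number instead of repeating the
value. A minimal sketch of that layout (all names and sizes here are
made up for illustration, not what Oracle actually stores on disk):

#include <stdint.h>

#define DICT_SLOTS 16   /* frequent values tracked per block (a guess)   */
#define VALUE_LEN  32   /* fixed-width values, to keep the sketch simple */

typedef struct BlockDict
{
    uint8_t nvalues;                        /* dictionary slots in use */
    char    values[DICT_SLOTS][VALUE_LEN];  /* the frequent values     */
} BlockDict;

/* A compressed column stores a one-byte slot number; DICT_NONE means
 * the value was not frequent enough and is stored inline instead. */
#define DICT_NONE 0xFF

static const char *
dict_lookup(const BlockDict *dict, uint8_t slot)
{
    return (slot < dict->nvalues) ? dict->values[slot] : NULL;
}
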

It's entirely possible, though, that the best performance would come
from not decompressing blocks at all when putting them into
shared_buffers. That would mean you'd "only" have to deal with
decompression when pulling individual tuples out of a block. Simple,
right? :)
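
To make that a bit more concrete, here's a rough sketch of what
"decompress one tuple on access" could look like, assuming blocks carry
an offset/length array for their compressed tuples and a
decompress_range() helper that doesn't exist today; none of this is
real PostgreSQL code:

#include <stddef.h>
#include <stdint.h>

#define MAX_TUPLES 256               /* arbitrary for the sketch */

typedef struct CompressedBlock
{
    uint16_t ntuples;
    uint16_t tuple_off[MAX_TUPLES];  /* offsets into payload[]      */
    uint16_t tuple_len[MAX_TUPLES];  /* compressed length per tuple */
    char     payload[];              /* compressed tuple data       */
} CompressedBlock;

/* Assumed helper: inflate srclen bytes at src into dst, returning the
 * uncompressed length or -1 on error.  Purely hypothetical. */
extern int decompress_range(const char *src, size_t srclen,
                            char *dst, size_t dstlen);

/* Decompress just the n-th tuple into caller-supplied memory, leaving
 * the rest of the block compressed in shared_buffers. */
static int
fetch_tuple(const CompressedBlock *blk, int n, char *out, size_t outlen)
{
    if (n < 0 || n >= blk->ntuples)
        return -1;
    return decompress_range(blk->payload + blk->tuple_off[n],
                            blk->tuple_len[n], out, outlen);
}
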
-- 
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
