On Tue, Jul 16, 2019 at 9:14 PM Binguo Bao <djydew...@gmail.com> wrote:
> In the compressed beginning case, the test result is different from yours 
> since the patch is ~1.75x faster
> rather than no improvement. The interesting thing is that the patch is 4%
> faster than master in the uncompressed end case.
> I can't figure out the reason now.

Probably some differences in our test environments. I wouldn't worry
about it too much, since we can show improvement in more realistic
tests.

>> I'm not an expert on TOAST, but maybe one way to solve both problems
>> is to work at the level of whole TOAST chunks. In that case, the
>> current patch would look like this:
>> 1. The caller requests more of the attribute value from the de-TOAST 
>> iterator.
>> 2. The iterator gets the next chunk and either copies or decompresses
>> the whole chunk into the buffer. (If inline, just decompress the whole
>> thing)
>
>
> Thanks for your suggestion. It is indeed possible to implement 
> PG_DETOAST_DATUM_SLICE using the de-TOAST iterator.
> IMO the iterator is more suitable for situations where the caller doesn't 
> know the slice size. If the caller knows the slice size,
> it is reasonable to fetch enough chunks at once and then decompress them
> all at once.

That sounds reasonable, since it should mean less overhead.
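
Just to make sure we're picturing the same thing, here is a rough
standalone sketch of that strategy. It is not PostgreSQL code:
CHUNK_SIZE, bound_compressed_size() and decompress() are made-up
stand-ins, and the "codec" is a plain copy so that it compiles and
runs by itself. The idea is to work out how many chunks are needed up
front, fetch them all into one contiguous buffer, and then decompress
in a single call:

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK_SIZE 1996         /* stand-in for TOAST_MAX_CHUNK_SIZE */
#define VALUE_LEN  10000        /* toy "compressed" external value */

static char stored_value[VALUE_LEN];

/* Stand-in: worst-case compressed bytes needed for rawsize output bytes. */
static int32_t
bound_compressed_size(int32_t rawsize)
{
    return rawsize;             /* trivial for the copy-only "codec" */
}

/* Stand-in for one decompression pass over a contiguous buffer. */
static int32_t
decompress(const char *src, int32_t srclen, char *dst, int32_t rawsize)
{
    int32_t n = (srclen < rawsize) ? srclen : rawsize;

    memcpy(dst, src, n);
    return n;
}

/* Produce the first rawsize bytes of the value. */
static int32_t
slice_prefix(char *dst, int32_t rawsize)
{
    int32_t comp_need = bound_compressed_size(rawsize);
    int32_t nchunks = (comp_need + CHUNK_SIZE - 1) / CHUNK_SIZE;
    char   *compbuf = malloc((size_t) nchunks * CHUNK_SIZE);
    int32_t got = 0;
    int32_t result;

    /* fetch every needed chunk up front (one scan in the real thing) */
    for (int32_t chunkno = 0; chunkno < nchunks; chunkno++)
    {
        int32_t start = chunkno * CHUNK_SIZE;
        int32_t len = VALUE_LEN - start;

        if (len <= 0)
            break;
        if (len > CHUNK_SIZE)
            len = CHUNK_SIZE;
        memcpy(compbuf + got, stored_value + start, len);
        got += len;
    }

    /* ... then decompress the whole buffer in a single call */
    result = decompress(compbuf, got, dst, rawsize);
    free(compbuf);
    return result;
}

int
main(void)
{
    char        dst[3000];

    for (int i = 0; i < VALUE_LEN; i++)
        stored_value[i] = (char) (i % 251);

    assert(slice_prefix(dst, 3000) == 3000);
    assert(memcmp(dst, stored_value, 3000) == 0);
    return 0;
}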

In the case where we don't know the slice size, how about the other
aspect of my question above: might it be simpler, and have less
overhead, to decompress entire chunks at a time? If so, I think it
would be enlightening to compare performance.
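
To illustrate what I mean by that, here is a similarly rough sketch of
a chunk-at-a-time iterator. Again, this is not the actual patch; the
names and the copy-only expand_chunk() are stand-ins so the sketch
runs on its own. Each call asks for more of the value, and the
iterator expands whole chunks into its buffer until the request is
covered:

#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CHUNK_SIZE 1996         /* stand-in for TOAST_MAX_CHUNK_SIZE */
#define VALUE_LEN  10000        /* toy external value */

typedef struct DetoastIterator
{
    const char *src;            /* stored (chunked) value */
    int32_t     src_len;
    int32_t     next_chunk;     /* next chunk number to expand */
    char        buf[VALUE_LEN]; /* bytes made available so far */
    int32_t     buf_len;
} DetoastIterator;

/* Stand-in for expanding one whole chunk; really a decompression call. */
static int32_t
expand_chunk(const char *chunk, int32_t chunklen, char *dst)
{
    memcpy(dst, chunk, chunklen);
    return chunklen;
}

/* Make at least need bytes of the value available in iter->buf. */
static void
detoast_iterate(DetoastIterator *iter, int32_t need)
{
    while (iter->buf_len < need &&
           iter->next_chunk * CHUNK_SIZE < iter->src_len)
    {
        int32_t start = iter->next_chunk * CHUNK_SIZE;
        int32_t chunklen = iter->src_len - start;

        if (chunklen > CHUNK_SIZE)
            chunklen = CHUNK_SIZE;
        iter->buf_len += expand_chunk(iter->src + start, chunklen,
                                      iter->buf + iter->buf_len);
        iter->next_chunk++;
    }
}

int
main(void)
{
    static char value[VALUE_LEN];
    static DetoastIterator iter;

    for (int i = 0; i < VALUE_LEN; i++)
        value[i] = (char) (i % 251);

    iter.src = value;
    iter.src_len = VALUE_LEN;

    detoast_iterate(&iter, 100);    /* e.g. an early position() hit */
    /* whole-chunk granularity: more than asked for, in chunk multiples */
    assert(iter.buf_len >= 100 && iter.buf_len % CHUNK_SIZE == 0);

    detoast_iterate(&iter, 5000);   /* the caller later needs more */
    assert(iter.buf_len >= 5000);
    assert(memcmp(iter.buf, value, 5000) == 0);
    return 0;
}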

-- 
John Naylor                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

