On Tue, Oct 15, 2024 at 11:15 PM Masahiko Sawada <sawada.m...@gmail.com> wrote:
>
> On Sun, Oct 13, 2024 at 11:00 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
> >
> > On Fri, Oct 11, 2024 at 3:40 AM Masahiko Sawada <sawada.m...@gmail.com> 
> > wrote:
> > >
> > > Please find the attached patches.
> > >
>
> Thank you for reviewing the patch!
>
> >
> > @@ -343,9 +343,9 @@ ReorderBufferAllocate(void)
> >   */
> >   buffer->tup_context = GenerationContextCreate(new_ctx,
> >     "Tuples",
> > -   SLAB_LARGE_BLOCK_SIZE,
> > -   SLAB_LARGE_BLOCK_SIZE,
> > -   SLAB_LARGE_BLOCK_SIZE);
> > +   SLAB_DEFAULT_BLOCK_SIZE,
> > +   SLAB_DEFAULT_BLOCK_SIZE,
> > +   SLAB_DEFAULT_BLOCK_SIZE);
> >
> > Shouldn't we change the comment atop this change [1] which states that
> > we should benchmark the existing values?
>
> Agreed.
>

Can we slightly tweak the comment as follows so that it doesn't sound
like a fix for a bug?

/*
 * To minimize memory fragmentation caused by long-running transactions
 * with changes spanning multiple memory blocks, we use a single
 * fixed-size memory block for decoded tuple storage. The tests showed
 * that the default memory block size maintains logical decoding
 * performance without causing fragmentation due to concurrent
 * transactions. One might think that we could use the maximum size,
 * SLAB_LARGE_BLOCK_SIZE, but the tests also showed it doesn't help
 * resolve the memory fragmentation.
 */

Other than that, the changes in the patch look good to me. Note that I
haven't tested the patch myself, but the test results shown by you and
others in the thread seem sufficient to proceed with this change.

-- 
With Regards,
Amit Kapila.

