On Wed, Oct 2, 2024 at 9:42 PM Hayato Kuroda (Fujitsu)
<kuroda.hay...@fujitsu.com> wrote:
>
> Dear Sawada-san, Amit,
>
> > > So, decoding a large transaction with many smaller allocations can
> > > have ~2.2% overhead with a smaller block size (say 8kB vs 8MB). In
> > > real workloads, we will have fewer such large transactions or a mix of
> > > small and large transactions. That will make the overhead much less
> > > visible. Does this mean that we should invent some strategy to defrag
> > > the memory at some point during decoding or use any other technique? I
> > > don't find this overhead above the threshold to invent something
> > > fancy. What do others think?
> >
> > I agree that the overhead will be much less visible in real workloads.
> > +1 to use a smaller block (i.e. 8kB). It's easy to backpatch to old
> > branches (if we agree) and to revert the change in case something
> > happens.
>
> I'm also okay with that. Just to confirm: you will not push the
> rb_mem_block_size patch and will just replace SLAB_LARGE_BLOCK_SIZE with
> SLAB_DEFAULT_BLOCK_SIZE, right?

Right.
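
To be concrete, the change is just to pass SLAB_DEFAULT_BLOCK_SIZE (8kB)
instead of SLAB_LARGE_BLOCK_SIZE (8MB) where reorderbuffer.c sets up the
memory context that stores decoded tuples. Roughly like the following
(written from memory, so the actual call site in ReorderBufferAllocate()
may look slightly different):

    /* existing definitions in src/include/utils/memutils.h */
    #define SLAB_DEFAULT_BLOCK_SIZE   (8 * 1024)          /* 8kB */
    #define SLAB_LARGE_BLOCK_SIZE     (8 * 1024 * 1024)   /* 8MB */

    /*
     * In ReorderBufferAllocate(): have the generation context for decoded
     * tuples allocate 8kB blocks instead of 8MB blocks by switching to the
     * default block size.
     */
    buffer->tup_context = GenerationContextCreate(new_ctx,
                                                  "Tuples",
                                                  SLAB_DEFAULT_BLOCK_SIZE,
                                                  SLAB_DEFAULT_BLOCK_SIZE,
                                                  SLAB_DEFAULT_BLOCK_SIZE);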

> It seems that only reorderbuffer.c uses the LARGE macro, so it could be removed.

I'm going to keep the LARGE macro since extensions might be using it.

Regards,

-- 
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

