On Saturday, March 7, 2020, Dilip Kumar <dilipbal...@gmail.com> wrote:

> On Sat, Mar 7, 2020 at 9:59 AM Dilip Kumar <dilipbal...@gmail.com> wrote:
> >
> > On Sat, Mar 7, 2020 at 12:30 AM Andres Freund <and...@anarazel.de> wrote:
> > >
> > > Hi,
> > >
> > > On 2020-01-08 18:06:52 +0530, Dilip Kumar wrote:
> > > > On Wed, 8 Jan 2020 at 5:28 PM, Heikki Linnakangas <hlinn...@iki.fi> wrote:
> > > >
> > > > > On 25/11/2019 05:52, Dilip Kumar wrote:
> > > > > > In logical decoding, while sending the changes to the output plugin we
> > > > > > need to arrange them in LSN order.  But if there is only one
> > > > > > transaction, which is a very common case, then we can avoid building
> > > > > > the binary heap.  A small patch is attached for the same.
> > > > >
> > > > > Does this make any measurable performance difference? Building a
> > > > > one-element binary heap seems pretty cheap.
> > > >
> > > >
> > > > I haven’t really measured the performance for this.  I will try to do
> > > > that next week.  Thanks for looking into this.
> > >
> > > Did you do that?
> >
> > I tried once in my local machine but could not produce consistent
> > results.  I will try this once again in the performance machine and
> > report back.
>
> I have tried decoding the changes for 100,000 small transactions and
> measured the time with head vs. patch.  I did not observe any
> significant gain.
>
> Head
> -------
> 519ms
> 500ms
> 487ms
> 501ms
>
> patch
> ------
> 501ms
> 492ms
> 486ms
> 489ms
>
> IMHO, if we conclude that we don't want to add one extra path in the code
> because there is no performance gain, then we might want to remove that
> TODO from the code so that we don't spend time optimizing this in the
> future.
>
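
For anyone reading along without the patch handy, the idea under discussion, as
I understand it, is a fast path in the reorder buffer's merge of change
streams: with more than one transaction the changes have to be merged by LSN
(the real code uses a binary heap for that), but with a single transaction the
stream is already ordered, so the merge setup can be skipped.  The sketch below
is only a self-contained illustration of that shape, with made-up types and
function names; it is not the attached patch or the actual reorderbuffer.c
code.

/*
 * Illustrative sketch only -- not the attached patch and not the actual
 * reorderbuffer.c code.  It shows the general idea: when changes from
 * several transactions have to be emitted in LSN order they are merged
 * (the real code uses a binary heap for that), but with a single
 * transaction the change stream is already ordered and the merge
 * machinery can be skipped.  All names below are made up.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t DemoLSN;

typedef struct DemoChange
{
	DemoLSN		lsn;
	const char *desc;
} DemoChange;

typedef struct DemoTxnStream
{
	const DemoChange *changes;	/* already sorted by LSN */
	int			nchanges;
	int			pos;			/* next change to emit */
} DemoTxnStream;

/* Stand-in for handing a change to the output plugin. */
static void
demo_emit(const DemoChange *change)
{
	printf("LSN %llu: %s\n", (unsigned long long) change->lsn, change->desc);
}

/* Emit the changes of all streams in global LSN order. */
static void
demo_emit_in_lsn_order(DemoTxnStream *streams, int nstreams)
{
	if (nstreams == 1)
	{
		/*
		 * Fast path being discussed upthread: a single transaction's
		 * changes are already in LSN order, so no merge structure is
		 * needed at all.
		 */
		DemoTxnStream *s = &streams[0];

		for (; s->pos < s->nchanges; s->pos++)
			demo_emit(&s->changes[s->pos]);
		return;
	}

	/*
	 * General path: repeatedly pick the stream whose next change has the
	 * smallest LSN.  reorderbuffer.c keeps the streams in a binary heap;
	 * a linear scan is enough for this illustration.
	 */
	for (;;)
	{
		DemoTxnStream *best = NULL;

		for (int i = 0; i < nstreams; i++)
		{
			DemoTxnStream *s = &streams[i];

			if (s->pos >= s->nchanges)
				continue;
			if (best == NULL ||
				s->changes[s->pos].lsn < best->changes[best->pos].lsn)
				best = s;
		}

		if (best == NULL)
			break;				/* all streams exhausted */

		demo_emit(&best->changes[best->pos]);
		best->pos++;
	}
}

int
main(void)
{
	const DemoChange txn_a[] = {{100, "A: insert"}, {130, "A: update"}};
	const DemoChange txn_b[] = {{110, "B: insert"}, {120, "B: delete"}};
	DemoTxnStream streams[] = {{txn_a, 2, 0}, {txn_b, 2, 0}};

	demo_emit_in_lsn_order(streams, 2);
	return 0;
}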

Would you be able to share your test setup? It seems like it’d be helpful to
get a larger sample size to better determine whether the difference is
measurable. Visually, those four runs look to me like it’s possible, but
objectively I’m not sure we can yet conclude one way or the other.

James
