On Tue, Apr 18, 2023 at 05:50:56PM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roret...@linux.microsoft.com]
> > Sent: Tuesday, 18 April 2023 17.45
> > 
> > On Tue, Apr 18, 2023 at 05:30:27PM +0200, Morten Brørup wrote:
> > > > From: Tyler Retzlaff [mailto:roret...@linux.microsoft.com]
> > > > Sent: Tuesday, 18 April 2023 17.15
> > > >
> > > > On Tue, Apr 18, 2023 at 12:06:42PM +0100, Bruce Richardson wrote:
> > > > > On Tue, Apr 11, 2023 at 08:48:45AM +0200, Morten Brørup wrote:
> > > > > > When getting objects from the mempool, the number of objects to
> > > > > > get is often constant at build time.
> > > > > >
> > > > > > This patch adds another code path for this case, so the compiler
> > > > > > can optimize more, e.g. unroll the copy loop when the entire
> > > > > > request is satisfied from the cache.
> > > > > >
> > > > > > On an Intel(R) Xeon(R) E5-2620 v4 CPU, and compiled with gcc 9.4.0,
> > > > > > mempool_perf_test with constant n shows an increase in rate_persec
> > > > > > by an average of 17 %, minimum 9.5 %, maximum 24 %.
> > > > > >
> > > > > > The code path where the number of objects to get is unknown at
> > > > > > build time remains essentially unchanged.
> > > > > >
> > > > > > Signed-off-by: Morten Brørup <m...@smartsharesystems.com>
> > > > >
> > > > > Change looks a good idea. Some suggestions inline below, which you
> > > > > may want to take on board for any future version. I'd strongly
> > > > > suggest adding some extra clarifying code comments, as I suggest
> > > > > below. With those extra code comments:
> > > > >
> > > > > Acked-by: Bruce Richardson <bruce.richard...@intel.com>
> > > > >
> > > > > > ---
> > > > > >  lib/mempool/rte_mempool.h | 24 +++++++++++++++++++++---
> > > > > >  1 file changed, 21 insertions(+), 3 deletions(-)
> > > > > >
> > > > > > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > > > > > index 9f530db24b..ade0100ec7 100644
> > > > > > --- a/lib/mempool/rte_mempool.h
> > > > > > +++ b/lib/mempool/rte_mempool.h
> > > > > > @@ -1500,15 +1500,33 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
> > > > > >     if (unlikely(cache == NULL))
> > > > > >             goto driver_dequeue;
> > > > > >
> > > > > > -   /* Use the cache as much as we have to return hot objects first */
> > > > > > -   len = RTE_MIN(remaining, cache->len);
> > > > > >     cache_objs = &cache->objs[cache->len];
> > > > > > +
> > > > > > +   if (__extension__(__builtin_constant_p(n)) && n <= cache->len) {
> > > >
> > > > don't take a direct dependency on compiler builtins. define a macro
> > > > so we don't have to play shotgun surgery later.
> > > >
> > > > also, what is the purpose of using __extension__ here? are you
> > > > annotating the use of __builtin_constant_p(), or is there more?
> > > > if that's the only reason, i see no need for __extension__ when
> > > > already using a compiler-specific builtin like this; that it is not
> > > > standard is implied, and enforced by a compile break.
> > >
> > > ARM 32-bit memcpy() [1] does it this way, so I did the same.
> > >
> > > [1]:
> > > https://elixir.bootlin.com/dpdk/v23.03/source/lib/eal/arm/include/rte_memcpy_32.h#L122
> > 
> > i see thanks.
> > 
> > >
> > > While I agree that a macro for __builtin_constant_p() would be good,
> > > it belongs in a patch to fix portability, not in this patch.
> > 
> > i agree it isn't a part of this change.
> > 
> > would you mind introducing it as a separate patch and depending on it,
> > or do you feel that would delay this patch too much? i wouldn't mind
> > doing it myself, but there is a long merge time on my patches, which
> > means i end up having to carry the adaptations locally for weeks at a
> > time.
> 
> I would rather not.
> 
> Introducing global macros in rte_common.h usually triggers a lot of 
> discussion and pushback, and I don't want it to hold back this patch.

yeah, no kidding. i wish the process were a bit friendlier on the
receiving end. it's unfortunate, because it discourages improvements.

i'll bring a patch for it then.
