PING mempool maintainers - ack/review or further comments on this series?
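
To make review easier for maintainers picking this up, below is a rough
sketch of how the zero-copy cache functions added by this series
(rte_mempool_cache_zc_put_bulk, rte_mempool_cache_zc_put_rewind,
rte_mempool_cache_zc_get_bulk) are meant to be used in a PMD free path.
The helper name and the fallback handling are mine and purely
illustrative; patch 1/2 carries the authoritative signatures and
documentation.

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Illustrative helper (not part of the series): return 'n' mbufs, all
 * from the same mempool 'mp', to the lcore's cache without an
 * intermediate copy.
 */
static inline void
tx_free_bulk_zc(struct rte_mempool *mp, struct rte_mempool_cache *cache,
		struct rte_mbuf **mbufs, unsigned int n)
{
	void **cache_objs;
	unsigned int i;

	/* Reserve room for 'n' object pointers directly inside the cache. */
	cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, n);
	if (cache_objs != NULL) {
		/* Write the object pointers straight into the cache. */
		for (i = 0; i < n; i++)
			cache_objs[i] = mbufs[i];
	} else {
		/* Request too large for the cache; use the classic API. */
		rte_mempool_put_bulk(mp, (void **)mbufs, n);
	}
}

rte_mempool_cache_zc_get_bulk() is the mirror operation for the alloc/RX
path, and rte_mempool_cache_zc_put_rewind() lets a driver hand back unused
slots if it reserved more than it ended up filling.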

> > From: Kamalakshitha Aligeri [mailto:kamalakshitha.alig...@arm.com]
> > Sent: Friday, 24 February 2023 19.11
> >
> > From: = Morten Brørup <m...@smartsharesystems.com>
> 
> This should be:
> 
> From: Morten Brørup <m...@smartsharesystems.com>
> 
> It could be fixed while merging. This is the only complaint in patchwork.
> 
> >
> > Zero-copy access to mempool caches is beneficial for PMD performance,
> > and must be provided by the mempool library to fix [Bug 1052] without
> > a performance regression.
> >
> > [Bug 1052]: https://bugs.dpdk.org/show_bug.cgi?id=1052
> >
> > Bugzilla ID: 1052
> >
> > Signed-off-by: Morten Brørup <m...@smartsharesystems.com>
> > Signed-off-by: Kamalakshitha Aligeri <kamalakshitha.alig...@arm.com>
> 
> Can we get some reviews/acks on this please? I would like to see mempool
> zero-copy go into DPDK before 23.11.
> 
> Please also note that the warnings and errors in patchwork regarding patch 2/2
> are bogus or unrelated.
> 
> > ---
> > v10:
> > * Added mempool test cases with zero-copy APIs
> 
> For the parts not provided by myself, i.e. the test cases:
> 
> Acked-by: Morten Brørup <m...@smartsharesystems.com>
> 
> [...]
> 
> > diff --git a/lib/mempool/version.map b/lib/mempool/version.map
> > index dff2d1cb55..06cb83ad9d 100644
> > --- a/lib/mempool/version.map
> > +++ b/lib/mempool/version.map
> > @@ -49,6 +49,15 @@ EXPERIMENTAL {
> >     __rte_mempool_trace_get_contig_blocks;
> >     __rte_mempool_trace_default_cache;
> >     __rte_mempool_trace_cache_flush;
> > +   __rte_mempool_trace_ops_populate;
> > +   __rte_mempool_trace_ops_alloc;
> > +   __rte_mempool_trace_ops_free;
> > +   __rte_mempool_trace_set_ops_byname;
> > +
> > +   # added in 23.03
> 
> Time is passing, so now this should be updated to 23.07
> 
> It could be fixed while merging.
> 
> > +   __rte_mempool_trace_cache_zc_put_bulk;
> > +   __rte_mempool_trace_cache_zc_put_rewind;
> > +   __rte_mempool_trace_cache_zc_get_bulk;
> >  };
> >
> >  INTERNAL {
