> From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> Sent: Friday, 14 January 2022 10.11
> 
> On Fri, Jan 14, 2022 at 09:56:50AM +0100, Morten Brørup wrote:
> > Dear ARM/POWER/x86 maintainers,
> >
> > The architecture-specific rte_memcpy() provides optimized variants to
> > copy aligned data. However, the alignment requirements depend on the
> > hardware architecture, and there is no common definition for the
> > alignment.
> >
> > DPDK provides __rte_cache_aligned for cache optimization purposes,
> > with architecture-specific values. Would you consider providing an
> > __rte_memcpy_aligned for rte_memcpy() optimization purposes?
> >
> > Or should I just use __rte_cache_aligned, although it is overkill?
> >
> >
> > Specifically, I am working on a mempool optimization where the objs
> > field in the rte_mempool_cache structure may benefit from being
> > aligned for optimized rte_memcpy().
> >
> For me the difficulty with such a memcpy proposal - apart from probably
> adding to the amount of memcpy code we have to maintain - is the specific
> meaning of "aligned" in the memcpy case. Unlike for a struct definition,
> the possible meanings of "aligned" in memcpy could be:
> * the source address is aligned
> * the destination address is aligned
> * both source and destination are aligned
> * both source and destination are aligned and the copy length is a
>   multiple of the alignment length
> * the data is aligned to a cacheline boundary
> * the data is aligned to the largest load-store size for the system
> * the data is aligned to the boundary suitable for the copy size, e.g.
>   memcpy of 8 bytes is 8-byte aligned etc.
> 
> Can you clarify a bit more on your own thinking here? Personally, I am a
> little dubious of the benefit of general memcpy optimization, but I do
> believe that for specific use cases there is value in having their own
> copy operations which include constraints for that specific use case.
> For example, in the AVX-512 ice/i40e PMD code, we fold the memcpy from
> the mempool cache into the descriptor rearm function because we know we
> can always do 64-byte loads and stores, and also because we know that
> for each load in the copy, we can reuse the data just after storing it
> (giving a good perf boost). Perhaps something similar could work for you
> in your mempool optimization.
> 
> /Bruce
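
Just to make sure I follow, my understanding of the folding you describe is
roughly the sketch below. The function and variable names are made up for
illustration and are not the actual ice/i40e code:

/* Illustrative only: load 8 object pointers (64 bytes) from the mempool
 * cache and reuse the loaded register immediately to build the descriptor
 * writes, instead of first memcpy'ing them to a staging array. */
#include <immintrin.h>
#include <stdint.h>

static inline void
rearm_descs_avx512(uint64_t *desc_ring, void * const *cache_objs,
		unsigned int n, uint64_t hdr_room)
{
	const __m512i hdr = _mm512_set1_epi64((long long)hdr_room);
	for (unsigned int i = 0; i < n; i += 8) {
		/* 64-byte load of 8 pointers from the cache */
		__m512i ptrs =
			_mm512_loadu_si512((const void *)&cache_objs[i]);
		/* add the data offset and store straight into descriptors */
		_mm512_storeu_si512((void *)&desc_ring[i],
				_mm512_add_epi64(ptrs, hdr));
	}
}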

I'm going to copy an array of pointers, specifically the 'objs' array in the
rte_mempool_cache structure.

The 'objs' array starts at byte 24, which is only 8-byte aligned. So it always 
fails the ALIGNMENT_MASK test in the x86-specific rte_memcpy(), and thus can 
never use the optimized rte_memcpy_aligned() function to copy the array, but 
falls back to the rte_memcpy_generic() function.
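
For reference, the dispatch at the top of the x86 rte_memcpy() looks roughly
like this (paraphrased from lib/eal/x86/include/rte_memcpy.h; see the header
for the exact code):

/* Both pointers must pass the ALIGNMENT_MASK test to take the fast path. */
static __rte_always_inline void *
rte_memcpy(void *dst, const void *src, size_t n)
{
	if (!(((uintptr_t)dst | (uintptr_t)src) & ALIGNMENT_MASK))
		return rte_memcpy_aligned(dst, src, n);
	else
		return rte_memcpy_generic(dst, src, n);
}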

If the 'objs' array were optimally aligned, and the other array being copied 
to/from were also optimally aligned, rte_memcpy() would use the optimized 
rte_memcpy_aligned() function.

Please also note that the value of ALIGNMENT_MASK depends on which vector 
instruction set DPDK is being compiled with.
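
From memory, the mask tracks the vector register width; the preprocessor
guards below are illustrative only, not the real conditions in the header:

/* Illustrative guards; see lib/eal/x86/include/rte_memcpy.h for the
 * real conditions. */
#if defined SOME_AVX512_BUILD_FLAG
#define ALIGNMENT_MASK 0x3F   /* 64-byte alignment for the AVX-512 path */
#elif defined SOME_AVX2_BUILD_FLAG
#define ALIGNMENT_MASK 0x1F   /* 32-byte alignment for the AVX2 path */
#else
#define ALIGNMENT_MASK 0x0F   /* 16-byte alignment for the SSE path */
#endif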

The other CPU architectures have similar logic in their rte_memcpy() 
implementations, and their alignment requirements also differ.

Please also note that rte_memcpy() becomes even more optimized when the size of 
the copy operation is known at compile time.
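
For example (the sizes here are just illustrative), copying a compile-time
constant number of bytes lets the compiler pick a fixed, fully inlined path:

#include <rte_memcpy.h>

void
copy_eight_pointers(void *dst[8], void * const src[8])
{
	/* n = 64 is a compile-time constant here, so the compiler can
	 * select and inline a fixed-size copy path. */
	rte_memcpy(dst, src, 8 * sizeof(void *));
}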

So I am asking for a public #define __rte_memcpy_aligned that I can use to meet 
the alignment requirements for an optimal rte_memcpy().
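
Something along these lines is what I have in mind, modelled on
__rte_cache_aligned. This is only a sketch; RTE_MEMCPY_ALIGN and its
per-architecture values are hypothetical and do not exist in DPDK today:

#include <stdint.h>
#include <rte_common.h>

/* Hypothetical per-architecture alignment for rte_memcpy()'s fast path;
 * e.g. 64 on x86 when built with AVX-512. Not a real DPDK definition. */
#define RTE_MEMCPY_ALIGN 64
#define __rte_memcpy_aligned __rte_aligned(RTE_MEMCPY_ALIGN)

/* Illustrative use on an array of object pointers: */
struct example_cache {
	uint32_t len;
	void *objs[512] __rte_memcpy_aligned; /* aligned for the fast path */
};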
