> -----Original Message-----
> From: Morten Brørup <m...@smartsharesystems.com>
> Sent: Thursday, August 10, 2023 3:03 PM
> To: Amit Prakash Shukla <amitpraka...@marvell.com>; Chengwen Feng
> <fengcheng...@huawei.com>; Kevin Laatz <kevin.la...@intel.com>; Bruce
> Richardson <bruce.richard...@intel.com>
> Cc: dev@dpdk.org; Jerin Jacob Kollanukkaran <jer...@marvell.com>;
> conor.wa...@intel.com; Vamsi Krishna Attunuru <vattun...@marvell.com>;
> g.si...@nxp.com; sachin.sax...@oss.nxp.com; hemant.agra...@nxp.com;
> cheng1.ji...@intel.com; Nithin Kumar Dabilpuram
> <ndabilpu...@marvell.com>; Anoob Joseph <ano...@marvell.com>
> Subject: [EXT] RE: [RFC PATCH] dmadev: offload to free source buffer
> 
> > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > Sent: Wednesday, 9 August 2023 20.12
> >
> > > From: Morten Brørup <m...@smartsharesystems.com>
> > > Sent: Wednesday, August 9, 2023 8:19 PM
> > >
> > > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > > Sent: Wednesday, 9 August 2023 16.27
> > > >
> > > > > From: Morten Brørup <m...@smartsharesystems.com>
> > > > > Sent: Wednesday, August 9, 2023 2:37 PM
> > > > >
> > > > > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > > > > Sent: Wednesday, 9 August 2023 08.09
> > > > > >
> > > > > > This changeset adds support in the DMA library to free the
> > > > > > source DMA buffer by hardware. On supported hardware, the
> > > > > > application can pass the mempool information as part of the
> > > > > > vchan config when the DMA transfer direction is configured
> > > > > > as RTE_DMA_DIR_MEM_TO_DEV.
> > > > >
> > > > > Isn't the DMA source buffer a memory area, and what needs to
> > > > > be freed is the mbuf holding the memory area, i.e. two
> > > > > different pointers?
> > > > No, it is the same pointer. Assume the mbuf is created from a
> > > > mempool; that mempool is given via the vchan config, and the
> > > > iova passed to rte_dma_copy/rte_dma_copy_sg can be any address
> > > > within the mbuf area of the given mempool element.
> > > > For example, if the mempool element size is S and the buffer
> > > > dequeued from the mempool is at X, any address in (X, X+S) can
> > > > be given as the iova to rte_dma_copy.
> > >
> > > So the DMA library determines the pointer to the mbuf (in the
> > > given mempool) by looking at the iova passed to
> > > rte_dma_copy/rte_dma_copy_sg, and then calls rte_mempool_put with
> > > that pointer?
> >
> > No. The DMA hardware determines the pointer to the mbuf using the
> > iova address and the mempool. The hardware frees the buffer on
> > completion of the data transfer.
> 
> OK. If there are any requirements on the mempool, they need to be
> documented in the source code comments. E.g. does it work with
> mempools where the mempool driver is an MP_RTS/MC_RTS ring, or a
> stack?

I think a comment about the supported mempool types is not needed in the
dma library code, as that is driver-implementation dependent. The
driver's dev->dev_ops->vchan_setup callback shall check the mempool and
return an error for unsupported types, along the lines of the sketch
below.
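
A minimal sketch of such a check, assuming this RFC adds a mempool
pointer to struct rte_dma_vchan_conf (called "src_buf_pool" below; a
placeholder name, not upstream API) and that the hardware can only free
buffers into the default MP/MC ring mempool driver (an illustrative
policy only, not our hardware's actual constraint):

#include <errno.h>
#include <string.h>

#include <rte_common.h>
#include <rte_mempool.h>
#include <rte_dmadev.h>
#include <rte_dmadev_pmd.h>

static int
my_dma_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
                   const struct rte_dma_vchan_conf *conf, uint32_t conf_sz)
{
        RTE_SET_USED(dev);
        RTE_SET_USED(vchan);
        RTE_SET_USED(conf_sz);

        if (conf->direction == RTE_DMA_DIR_MEM_TO_DEV &&
            conf->src_buf_pool != NULL) {
                const struct rte_mempool_ops *ops =
                        rte_mempool_get_ops(conf->src_buf_pool->ops_index);

                /* Reject mempool drivers the hardware cannot free
                 * buffers into; only the default MP/MC ring is
                 * accepted in this sketch. */
                if (strcmp(ops->name, "ring_mp_mc") != 0)
                        return -ENOTSUP;
        }

        /* ... program the hardware queue, record the pool, etc. ... */
        return 0;
}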

> 
> >
> > >
> > > >
> > > > >
> > > > > I like the concept. Something similar might also be useful
> > > > > for RTE_DMA_DIR_MEM_TO_MEM, e.g. packet capture. Although
> > > > > such a use case might require decrementing the mbuf refcount
> > > > > instead of freeing the mbuf directly to the mempool.
> > > > This operation is not supported by our hardware. It can be
> > > > implemented in the future if any hardware supports it.
> > >
> > > OK, I didn't expect that - just floating the idea. :-)
> > >
> > > >
> > > > >
> > > > > PS: It has been a while since I looked at the DMA library, so
> > > > > ignore my comments if I got this wrong.
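
For reference, a minimal application-side sketch of the semantics
discussed above. The "src_buf_pool" field and the
RTE_DMA_OP_FLAG_FREE_SBUF flag are placeholders for whatever names this
RFC finally settles on (neither exists in the current dmadev API), and
device/dst_port configuration is omitted for brevity:

#include <errno.h>

#include <rte_dmadev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Placeholder for the per-op "free source buffer" flag from this RFC. */
#define RTE_DMA_OP_FLAG_FREE_SBUF (1ULL << 3)

/* Assumes dev_id was already configured via rte_dma_configure(). */
static int
copy_and_auto_free(int16_t dev_id, struct rte_mempool *mb_pool,
                   rte_iova_t dst_iova, uint32_t len)
{
        struct rte_dma_vchan_conf conf = {
                .direction = RTE_DMA_DIR_MEM_TO_DEV,
                .nb_desc = 128,
                /* Pool backing every source buffer on this vchan
                 * (placeholder field from the RFC). */
                .src_buf_pool = mb_pool,
        };
        struct rte_mbuf *m;
        int ret;

        ret = rte_dma_vchan_setup(dev_id, 0, &conf);
        if (ret != 0)
                return ret;
        ret = rte_dma_start(dev_id);
        if (ret != 0)
                return ret;

        m = rte_pktmbuf_alloc(mb_pool);
        if (m == NULL)
                return -ENOMEM;

        /*
         * Any iova inside the dequeued mempool element (X .. X+S) is
         * valid: the hardware derives the element pointer from it and
         * returns the buffer to mb_pool when the transfer completes,
         * so the application must not touch the mbuf after this call.
         */
        return rte_dma_copy(dev_id, 0, rte_pktmbuf_iova(m), dst_iova, len,
                            RTE_DMA_OP_FLAG_FREE_SBUF |
                            RTE_DMA_OP_FLAG_SUBMIT);
}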
