> From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> Sent: Thursday, 10 August 2023 12.28
> 
> > From: Morten Brørup <m...@smartsharesystems.com>
> > Sent: Thursday, August 10, 2023 3:03 PM
> >
> > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > Sent: Wednesday, 9 August 2023 20.12
> > >
> > > > From: Morten Brørup <m...@smartsharesystems.com>
> > > > Sent: Wednesday, August 9, 2023 8:19 PM
> > > >
> > > > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > > > Sent: Wednesday, 9 August 2023 16.27
> > > > >
> > > > > > From: Morten Brørup <m...@smartsharesystems.com>
> > > > > > Sent: Wednesday, August 9, 2023 2:37 PM
> > > > > >
> > > > > > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > > > > > Sent: Wednesday, 9 August 2023 08.09
> > > > > > >
> > > > > > > This changeset adds support in the DMA library to free the
> > > > > > > source DMA buffer by hardware. On supported hardware, the
> > > > > > > application can pass the mempool information as part of the
> > > > > > > vchan config when the DMA transfer direction is configured
> > > > > > > as RTE_DMA_DIR_MEM_TO_DEV.
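
If I read the patch correctly, usage would look something like this
(untested sketch; mem_to_dev_src_buf_pool is the field name proposed in
the patch, and dev_id, vchan and mbuf_pool are assumed to be set up
elsewhere):

    struct rte_dma_vchan_conf conf = {
        .direction = RTE_DMA_DIR_MEM_TO_DEV,
        .nb_desc = 128,
        /* .dst_port describing the destination device omitted here */
        /* proposed field: hardware frees completed source buffers
         * back to this mempool */
        .mem_to_dev_src_buf_pool = mbuf_pool,
    };
    if (rte_dma_vchan_setup(dev_id, vchan, &conf) != 0)
        /* handle setup failure, e.g. unsupported mempool */;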
> > > > > >
> > > > > > Isn't the DMA source buffer a memory area, and what needs to
> > > > > > be freed is the mbuf holding the memory area, i.e. two
> > > > > > different pointers?
> > > > > No, it is the same pointer. Assume an mbuf is created via a
> > > > > mempool; the mempool needs to be given via the vchan config, and
> > > > > the iova passed to rte_dma_copy/rte_dma_copy_sg can be any
> > > > > address in the mbuf area of the given mempool element.
> > > > > For example, if the mempool element size is S and the buffer
> > > > > dequeued from the mempool is at X, any address in (X, X+S) can
> > > > > be given as the iova to rte_dma_copy.
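
So in code, something like this (illustrative only; dst_iova and len
are placeholders for the device address and transfer length):

    struct rte_mbuf *m = rte_pktmbuf_alloc(mbuf_pool);
    /* any iova within the element resolves back to it, e.g. the
     * start of the packet data */
    rte_iova_t src_iova = rte_mbuf_data_iova(m);
    rte_dma_copy(dev_id, vchan, src_iova, dst_iova, len,
            RTE_DMA_OP_FLAG_SUBMIT);
    /* no explicit rte_pktmbuf_free(m) - the hardware returns the
     * element to mbuf_pool when the transfer completes */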
> > > >
> > > > So the DMA library determines the pointer to the mbuf (in the given
> > > > mempool) by looking at the iova passed to
> > > > rte_dma_copy/rte_dma_copy_sg, and then calls rte_mempool_put with
> > > > that pointer?
> > >
> > > No. The DMA hardware would determine the pointer to the mbuf using
> > > the iova address and the mempool. The hardware will free the buffer
> > > on completion of the data transfer.
> >
> > OK. If there are any requirements on the mempool, they need to be
> > documented in the source code comments. E.g. does it work with mempools
> > where the mempool driver is an MP_RTS/MC_RTS ring, or a stack?
> 
> I think adding a comment about the supported mempool types in the DMA
> library code might not be needed, as it is driver implementation
> dependent. The call to dev->dev_ops->vchan_setup for the driver shall
> check and return an error for an unsupported type of mempool.

Makes sense. But I still think that it needs to be mentioned that 
RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE has some limitations, and doesn't 
mean that any type of mempool can be used.
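
For reference, such a driver-side check could look something like this
(illustrative only; the restriction to the default ring driver is just
an example, not a statement about your hardware):

    /* in the driver's vchan_setup callback */
    if (conf->direction == RTE_DMA_DIR_MEM_TO_DEV &&
            conf->mem_to_dev_src_buf_pool != NULL) {
        struct rte_mempool *mp = conf->mem_to_dev_src_buf_pool;
        if (strcmp(rte_mempool_get_ops(mp->ops_index)->name,
                "ring_mp_mc") != 0)
            return -ENOTSUP;
    }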

I suggest you add a note to the description of the new "struct rte_mempool 
*mem_to_dev_src_buf_pool" field in the rte_dma_vchan_conf structure, such as:

Note: If the mempool is not supported by the DMA driver, rte_dma_vchan_setup() 
will fail.
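
In rte_dmadev.h, that could look like this (sketch; exact wording up to
you):

    /** Mempool from which the source buffers are allocated, and to
     * which the hardware frees them on transfer completion. Only valid
     * for RTE_DMA_DIR_MEM_TO_DEV.
     * @note If the mempool is not supported by the DMA driver,
     * rte_dma_vchan_setup() will fail.
     */
    struct rte_mempool *mem_to_dev_src_buf_pool;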

You should also mention it with the description of 
RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE flag, such as:

Note: Even though the DMA driver has this capability, it may not support all 
mempool drivers. If the mempool is not supported by the DMA driver, 
rte_dma_vchan_setup() will fail.
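
I.e. something like (sketch; the flag value is whatever the patch
assigns):

    /** Support for hardware freeing the source buffer to its mempool
     * on completion of an RTE_DMA_DIR_MEM_TO_DEV transfer.
     * @note Even though the DMA driver has this capability, it may not
     * support all mempool drivers. If the mempool is not supported by
     * the DMA driver, rte_dma_vchan_setup() will fail.
     */
    #define RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE ...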


> 
> >
> > >
> > > >
> > > > >
> > > > > >
> > > > > > I like the concept. Something similar might also be useful
> > > > > > for RTE_DMA_DIR_MEM_TO_MEM, e.g. packet capture. Although such
> > > > > > a use case might require decrementing the mbuf refcount
> > > > > > instead of freeing the mbuf directly to the mempool.
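
In mbuf terms, the difference would roughly be (illustrative only):

    rte_mempool_put(m->pool, m);  /* unconditional return to the pool */
    rte_pktmbuf_free_seg(m);      /* decrements refcnt, frees at zero */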
> > > > > This operation is not supported in our hardware. It can be
> > > > > implemented in the future if any hardware supports it.
> > > >
> > > > OK, I didn't expect that - just floating the idea. :-)
> > > >
> > > > >
> > > > > >
> > > > > > PS: It has been a while since I looked at the DMA library, so
> > > > > > ignore my comments if I got this wrong.
