> -----Original Message-----
> From: Leon Romanovsky <l...@kernel.org>
> Sent: Tuesday, December 08, 2020 10:31 PM
> To: Xiong, Jianxin <jianxin.xi...@intel.com>
> Cc: linux-r...@vger.kernel.org; dri-devel@lists.freedesktop.org; Doug Ledford 
> <dledf...@redhat.com>; Jason Gunthorpe <j...@ziepe.ca>;
> Sumit Semwal <sumit.sem...@linaro.org>; Christian Koenig 
> <christian.koe...@amd.com>; Vetter, Daniel <daniel.vet...@intel.com>
> Subject: Re: [PATCH v14 1/4] RDMA/umem: Support importing dma-buf as user 
> memory region
> 
> On Tue, Dec 08, 2020 at 02:39:12PM -0800, Jianxin Xiong wrote:
> > Dma-buf is a standard cross-driver buffer sharing mechanism that can
> > be used to support peer-to-peer access from RDMA devices.
> >
> > Device memory exported via dma-buf is associated with a file descriptor.
> > This is passed to the user space as a property associated with the
> > buffer allocation. When the buffer is registered as a memory region,
> > the file descriptor is passed to the RDMA driver along with other
> > parameters.
> >
> > Implement the common code for importing dma-buf object and mapping
> > dma-buf pages.
> >
> > Signed-off-by: Jianxin Xiong <jianxin.xi...@intel.com>
> > Reviewed-by: Sean Hefty <sean.he...@intel.com>
> > Acked-by: Michael J. Ruhl <michael.j.r...@intel.com>
> > Acked-by: Christian Koenig <christian.koe...@amd.com>
> > Acked-by: Daniel Vetter <daniel.vet...@ffwll.ch>
> > ---
> >  drivers/infiniband/core/Makefile      |   2 +-
> >  drivers/infiniband/core/umem.c        |   3 +
> >  drivers/infiniband/core/umem_dmabuf.c | 174 ++++++++++++++++++++++++++++++++++
> >  include/rdma/ib_umem.h                |  47 ++++++++-
> >  4 files changed, 222 insertions(+), 4 deletions(-)
> >  create mode 100644 drivers/infiniband/core/umem_dmabuf.c
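
For orientation, the user-space half of this flow might look like the sketch
below. It assumes the ibv_reg_dmabuf_mr() verb proposed in the companion
rdma-core series (not part of this kernel patch); the dma-buf fd itself comes
from the exporting device's own allocator, e.g. a GPU driver.

    /*
     * Sketch only: register a slice of an already-exported dma-buf as an
     * RDMA memory region.  offset/length select the bytes inside the
     * dma-buf; the fd comes from the exporter, not from libibverbs.
     */
    #include <stdint.h>
    #include <infiniband/verbs.h>

    static struct ibv_mr *register_dmabuf_region(struct ibv_pd *pd, int dmabuf_fd,
                                                 uint64_t offset, size_t length)
    {
            return ibv_reg_dmabuf_mr(pd, offset, length, offset /* iova */,
                                     dmabuf_fd,
                                     IBV_ACCESS_LOCAL_WRITE |
                                     IBV_ACCESS_REMOTE_READ |
                                     IBV_ACCESS_REMOTE_WRITE);
    }
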
> 
> <...>
> 
> > +int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
> > +{
> > +   struct sg_table *sgt;
> > +   struct scatterlist *sg;
> > +   struct dma_fence *fence;
> > +   unsigned long start, end, cur = 0;
> > +   unsigned int nmap = 0;
> > +   int i;
> > +
> > +   dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv);
> > +
> > +   if (umem_dmabuf->sgt)
> > +           goto wait_fence;
> > +
> > +   sgt = dma_buf_map_attachment(umem_dmabuf->attach, DMA_BIDIRECTIONAL);
> > +   if (IS_ERR(sgt))
> > +           return PTR_ERR(sgt);
> > +
> > +   /* modify the sg list in-place to match umem address and length */
> > +
> > +   start = ALIGN_DOWN(umem_dmabuf->umem.address, PAGE_SIZE);
> > +   end = ALIGN(umem_dmabuf->umem.address + umem_dmabuf->umem.length,
> > +               PAGE_SIZE);
> > +   for_each_sgtable_dma_sg(sgt, sg, i) {
> > +           if (start < cur + sg_dma_len(sg) && cur < end)
> > +                   nmap++;
> > +           if (cur <= start && start < cur + sg_dma_len(sg)) {
> > +                   unsigned long offset = start - cur;
> > +
> > +                   umem_dmabuf->first_sg = sg;
> > +                   umem_dmabuf->first_sg_offset = offset;
> > +                   sg_dma_address(sg) += offset;
> > +                   sg_dma_len(sg) -= offset;
> > +                   cur += offset;
> > +           }
> > +           if (cur < end && end <= cur + sg_dma_len(sg)) {
> > +                   unsigned long trim = cur + sg_dma_len(sg) - end;
> > +
> > +                   umem_dmabuf->last_sg = sg;
> > +                   umem_dmabuf->last_sg_trim = trim;
> > +                   sg_dma_len(sg) -= trim;
> > +                   break;
> > +           }
> > +           cur += sg_dma_len(sg);
> > +   }
> > +
> > +   umem_dmabuf->umem.sg_head.sgl = umem_dmabuf->first_sg;
> > +   umem_dmabuf->umem.sg_head.nents = nmap;
> > +   umem_dmabuf->umem.nmap = nmap;
> > +   umem_dmabuf->sgt = sgt;
> > +
> > +wait_fence:
> > +   /*
> > +    * Although the sg list is valid now, the content of the pages
> > +    * may not be up-to-date. Wait for the exporter to finish
> > +    * the migration.
> > +    */
> > +   fence = dma_resv_get_excl(umem_dmabuf->attach->dmabuf->resv);
> > +   if (fence)
> > +           return dma_fence_wait(fence, false);
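
To make the in-place clipping above concrete, here is a standalone toy
illustration (plain user-space C, not kernel code) that applies the same
arithmetic to a two-entry list, with address/length standing in for the
umem's address and length:

    #include <stdio.h>

    #define PAGE_SIZE        4096UL
    #define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))
    #define ALIGN_UP(x, a)   (((x) + (a) - 1) & ~((a) - 1))

    /* toy stand-in for one DMA scatterlist entry */
    struct toy_sg { unsigned long dma_addr, dma_len; };

    int main(void)
    {
            struct toy_sg sg[2] = { { 0x100000, 0x8000 }, { 0x200000, 0x8000 } };
            unsigned long address = 0x1234, length = 0xa000;
            unsigned long start = ALIGN_DOWN(address, PAGE_SIZE);      /* 0x1000 */
            unsigned long end = ALIGN_UP(address + length, PAGE_SIZE); /* 0xc000 */
            unsigned long cur = 0;
            unsigned int i, nmap = 0;

            for (i = 0; i < 2; i++) {
                    if (start < cur + sg[i].dma_len && cur < end)
                            nmap++;
                    if (cur <= start && start < cur + sg[i].dma_len) {
                            unsigned long offset = start - cur;

                            sg[i].dma_addr += offset; /* clip front of first entry */
                            sg[i].dma_len -= offset;
                            cur += offset;
                    }
                    if (cur < end && end <= cur + sg[i].dma_len) {
                            sg[i].dma_len -= cur + sg[i].dma_len - end; /* trim tail */
                            break;
                    }
                    cur += sg[i].dma_len;
            }

            /* prints 0x101000/0x7000 and 0x200000/0x4000: exactly end - start bytes */
            for (i = 0; i < nmap; i++)
                    printf("sg[%u]: addr=0x%lx len=0x%lx\n",
                           i, sg[i].dma_addr, sg[i].dma_len);
            return 0;
    }
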
> 
> You called dma_buf_map_attachment() earlier in this function, so if you
> return an error here, the dma_buf won't be unmapped in
> pagefault_dmabuf_mr()

Yes, that's by design. The next time ib_umem_dmabuf_map_pages() is called,
dma_buf_map_attachment() will be skipped and dma_fence_wait() will be retried.
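
A rough sketch of the intended caller pattern (the real handler is
pagefault_dmabuf_mr() in the mlx5 patch of this series; every name below
other than the umem helpers is illustrative):

    #include <linux/dma-buf.h>
    #include <linux/dma-resv.h>
    #include <rdma/ib_umem.h>

    /* device-specific step: push the updated sg list into the HW page tables */
    static int hypothetical_program_device_mtt(struct ib_umem_dmabuf *umem_dmabuf);

    /*
     * Illustrative only -- not the actual pagefault_dmabuf_mr().  If
     * dma_fence_wait() fails, umem_dmabuf->sgt stays cached and the
     * attachment stays mapped, so the next fault retries only the fence
     * wait; the mapping is torn down once, by ib_umem_dmabuf_unmap_pages(),
     * at invalidation/release time.
     */
    static int hypothetical_fault_handler(struct ib_umem_dmabuf *umem_dmabuf)
    {
            int err;

            err = dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
            if (err)
                    return err;

            err = ib_umem_dmabuf_map_pages(umem_dmabuf);
            if (!err)
                    err = hypothetical_program_device_mtt(umem_dmabuf);

            dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);
            return err; /* on failure the fault is simply retried later */
    }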

> 
> Thanks