From: Yonatan Maman <yma...@nvidia.com>

Add P2P DMA support for MLX5 NIC devices, with automatic fallback to
standard DMA when P2P mapping fails.

The change requests P2P DMA by default using the HMM_PFN_ALLOW_P2P
flag. If the P2P mapping fails with -EFAULT, the operation is retried
without the flag, ensuring a fallback to the standard DMA flow (using
host memory).

Signed-off-by: Yonatan Maman <yma...@nvidia.com>
Signed-off-by: Gal Shalom <galsha...@nvidia.com>
---
 drivers/infiniband/hw/mlx5/odp.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index f6abd64f07f7..6a0171117f48 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -715,6 +715,10 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	if (odp->umem.writable && !downgrade)
 		access_mask |= HMM_PFN_WRITE;
 
+	/*
+	 * Try faulting with the HMM_PFN_ALLOW_P2P flag first.
+	 */
+	access_mask |= HMM_PFN_ALLOW_P2P;
 	np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, fault);
 	if (np < 0)
 		return np;
@@ -724,6 +728,18 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	 * ib_umem_odp_map_dma_and_lock already checks this.
 	 */
 	ret = mlx5r_umr_update_xlt(mr, start_idx, np, page_shift, xlt_flags);
+	if (ret == -EFAULT) {
+		/*
+		 * -EFAULT indicates a P2P mapping error; retry without HMM_PFN_ALLOW_P2P.
+		 */
+		mutex_unlock(&odp->umem_mutex);
+		access_mask &= ~HMM_PFN_ALLOW_P2P;
+		np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, fault);
+		if (np < 0)
+			return np;
+		ret = mlx5r_umr_update_xlt(mr, start_idx, np, page_shift, xlt_flags);
+	}
+
 	mutex_unlock(&odp->umem_mutex);
 	if (ret < 0) {
-- 
2.34.1