On Thu, Dec 29, 2022 1:56 AM Bob Pearson wrote:
>
> On 12/23/22 00:51, Daisuke Matsuda wrote:
> > In order to implement On-Demand Paging on the rxe driver, triple tasklets
> > (requester, responder, and completer) must be allowed to sleep so that they
can trigger page faults and ensure that the target page is not
invalidated before data access completes.
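That guarantee is typically achieved with the mmu_interval_notifier sequence-count pattern used by the in-kernel ODP helpers. A pseudocode-level sketch of the general mechanism (rxe_odp_do_pagefault() is a hypothetical helper name; this is an illustration, not code from the patch):

```c
retry:
	seq = mmu_interval_read_begin(&umem_odp->notifier);

	/* fault in and map the user pages; this can sleep,
	 * which is why process context (a workqueue) is required */
	err = rxe_odp_do_pagefault(mr, iova, length, flags);
	if (err)
		return err;

	mutex_lock(&umem_odp->umem_mutex);
	if (mmu_interval_read_retry(&umem_odp->notifier, seq)) {
		/* an invalidation raced with the fault; try again */
		mutex_unlock(&umem_odp->umem_mutex);
		goto retry;
	}
	/* pages are stable while umem_mutex is held: safe to access */
	err = rxe_mr_copy(mr, iova, addr, length, dir);
	mutex_unlock(&umem_odp->umem_mutex);
```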
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe.c | 1 +
drivers/infiniband/sw/rxe/rxe_loc.h | 11 +++
drivers/infiniband/sw/rxe/rxe_odp.c | 46
drivers/infiniband/sw
mapped pages are not
invalidated before data copy completes.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe.c | 10 ++
drivers/infiniband/sw/rxe/rxe_loc.h | 5 +
drivers/infiniband/sw/rxe/rxe_mr.c | 2 +-
drivers/infiniband/sw/rxe/rxe_odp.c
permissions, and
possibly to prefetch pages in the future.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe.c | 7 +++
drivers/infiniband/sw/rxe/rxe_loc.h | 16 ++
drivers/infiniband/sw/rxe/rxe_mr.c | 7 ++-
drivers/infiniband/sw/rxe/rxe_odp.c | 83
On page invalidation, an MMU notifier callback is invoked to unmap DMA
addresses and update the driver page table (umem_odp->dma_list). The
callback is registered when an ODP-enabled MR is created.
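For reference, such an invalidate callback usually follows the shape of the mlx5 one; a hedged sketch against the in-tree notifier API (the function and ops names are assumptions, not necessarily those used in the patch):

```c
static bool rxe_ib_invalidate_range(struct mmu_interval_notifier *mni,
				    const struct mmu_notifier_range *range,
				    unsigned long cur_seq)
{
	struct ib_umem_odp *umem_odp =
		container_of(mni, struct ib_umem_odp, notifier);

	if (!mmu_notifier_range_blockable(range))
		return false;

	mutex_lock(&umem_odp->umem_mutex);
	mmu_interval_set_seq(mni, cur_seq);

	/* unmap DMA addresses and clear entries in umem_odp->dma_list */
	ib_umem_odp_unmap_dma_pages(umem_odp,
				    max(range->start, ib_umem_start(umem_odp)),
				    min(range->end, ib_umem_end(umem_odp)));
	mutex_unlock(&umem_odp->umem_mutex);
	return true;
}

static const struct mmu_interval_notifier_ops rxe_mn_ops = {
	.invalidate = rxe_ib_invalidate_range,
};

/* registered at ODP-enabled MR creation, e.g. via:
 *   umem_odp = ib_umem_odp_get(ibdev, start, length, access, &rxe_mn_ops);
 */
```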
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/Makefile | 2 ++
drivers/infinib
Both the responder and the completer can sleep to handle page faults when
used with ODP, and the page-fault handler can be invoked when they are
about to access user MRs, so work items must be scheduled in such cases.
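Concretely, the inline fast path has to be skipped whenever an ODP MR may be touched. A pseudocode-level sketch (the predicate name is a hypothetical placeholder, and rxe_run_task()/rxe_sched_task() stand in for whichever scheduling interface the driver exposes at this point in the series):

```c
/* responder/completer may fault on user MRs and sleep,
 * so defer to the workqueue instead of running inline */
if (rxe_qp_may_access_odp_mr(qp))	/* hypothetical predicate */
	rxe_sched_task(&qp->resp.task);	/* run later in process context */
else
	rxe_run_task(&qp->resp.task);	/* run inline */
```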
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe_comp.c | 20
Currently, rxe_responder() directly calls the function to execute Atomic
operations. This needs to be modified to insert some conditional branches
for the ODP feature. Additionally, rxe_resp.h is newly added to be used by
rxe_odp.c in the near future.
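The kind of branch this cleanup makes room for might look like the following sketch (the is_odp flag and rxe_odp_atomic_op() are assumptions for illustration, not confirmed names from the series):

```c
/* dispatch Atomic handling depending on the MR type */
if (mr->umem->is_odp)
	/* may fault in pages and sleep before executing the op */
	err = rxe_odp_atomic_op(mr, iova, opcode, cmp, swap_add, orig_val);
else
	err = rxe_atomic_op(mr, iova, opcode, cmp, swap_add, orig_val);
```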
Signed-off-by: Daisuke Matsuda
---
drivers
.
v1->v2:
1) Fixed a crash issue reported by Haris Iqbal.
2) Tried to make lock patterns clearer as pointed out by Romanovsky.
3) Minor cleanups and fixes.
Daisuke Matsuda (7):
RDMA/rxe: Convert triple tasklets to use workqueue
RDMA/rxe: Always schedule works before accessing user MRs
R
while qp reset is in progress.
The way to initialize/destroy the workqueue is picked up from the
implementation of Ian Ziemba and Bob Pearson at HPE.
Link: https://lore.kernel.org/all/20221018043345.4033-1-rpearson...@gmail.com/
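A minimal sketch of that lifecycle against the standard workqueue API (the queue name, flags, and wrapper names here are assumptions, not necessarily what the referenced implementation uses):

```c
static struct workqueue_struct *rxe_wq;

int rxe_alloc_wq(void)
{
	rxe_wq = alloc_workqueue("rxe_wq", WQ_UNBOUND, WQ_MAX_ACTIVE);
	if (!rxe_wq)
		return -ENOMEM;
	return 0;
}

void rxe_destroy_wq(void)
{
	/* drains queued work items before freeing the queue */
	destroy_workqueue(rxe_wq);
}
```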
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe.c
On Thu, Nov 17, 2022 10:21 PM Li, Zhijian wrote:
> On 11/11/2022 17:22, Daisuke Matsuda wrote:
> > ib_umem_odp_map_dma_single_page(), which has been used only by the mlx5
> > driver, holds umem_mutex on success and releases on failure. This
> > behavior is not convenient for
and that the target page is not
invalidated before data access completes.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe.c | 1 +
drivers/infiniband/sw/rxe/rxe_loc.h | 2 ++
drivers/infiniband/sw/rxe/rxe_odp.c | 45
drivers/infiniband/sw/rxe/r
permissions, and
possibly to prefetch pages in the future.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe.c | 7 +++
drivers/infiniband/sw/rxe/rxe_loc.h | 5 ++
drivers/infiniband/sw/rxe/rxe_mr.c | 7 ++-
drivers/infiniband/sw/rxe/rxe_odp.c | 81
Currently, rxe_responder() directly calls the function to execute Atomic
operations. This needs to be modified to insert some conditional branches
for the new RDMA Write operation and the ODP feature.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe_resp.c | 102
progress.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/Makefile | 2 +-
drivers/infiniband/sw/rxe/rxe_comp.c | 42 ---
drivers/infiniband/sw/rxe/rxe_loc.h | 4 +-
drivers/infiniband/sw/rxe/rxe_net.c | 4 +-
drivers/infiniband/sw/rxe/rxe_param.h | 2 +-
drivers
mapped pages are not
invalidated before data copy completes.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe.c | 10 ++
drivers/infiniband/sw/rxe/rxe_loc.h | 2 +
drivers/infiniband/sw/rxe/rxe_mr.c | 2 +-
drivers/infiniband/sw/rxe/rxe_odp.c
On page invalidation, an MMU notifier callback is invoked to unmap DMA
addresses and update umem_odp->dma_list. The callback is registered when an
ODP-enabled MR is created.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/Makefile | 3 ++-
drivers/infiniband/sw/rxe/rxe_odp.c |
reported by Haris Iqbal.
2) Tried to make lock patterns clearer as pointed out by Romanovsky.
3) Minor cleanups and fixes.
Daisuke Matsuda (7):
IB/mlx5: Change ib_umem_odp_map_dma_single_page() to retain umem_mutex
RDMA/rxe: Convert the triple tasklets to workqueues
RDMA/rxe: Cleanup
ib_umem_odp_map_dma_single_page(), which has been used only by the mlx5
driver, holds umem_mutex on success and releases it on failure. This
behavior is not convenient for other drivers to use, so change it to
always retain the mutex on return.
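The change to the locking contract can be sketched from the caller's side like this (arguments elided; do_data_copy() is a placeholder; illustration only, not the literal patch):

```c
mutex_lock(&umem_odp->umem_mutex);

ret = ib_umem_odp_map_dma_single_page(umem_odp, /* ... */);
/*
 * Before the change: umem_mutex was released here on failure,
 * forcing callers to track which path they are on.
 * After the change: the mutex is retained whether ret is 0 or not.
 */
if (!ret)
	do_data_copy();	/* page cannot be invalidated meanwhile */

mutex_unlock(&umem_odp->umem_mutex);	/* single unlock on all paths */
```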
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband
and that the target page is not
invalidated before data access completes.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe.c | 1 +
drivers/infiniband/sw/rxe/rxe_loc.h | 2 ++
drivers/infiniband/sw/rxe/rxe_odp.c | 42 ++
drivers/infinib
mapped pages are not
invalidated before data copy completes.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe.c | 10 ++
drivers/infiniband/sw/rxe/rxe_loc.h | 2 +
drivers/infiniband/sw/rxe/rxe_mr.c | 2 +-
drivers/infiniband/sw/rxe/rxe_odp.c
permissions and possibly to prefetch pages in the future.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe.c | 7 +++
drivers/infiniband/sw/rxe/rxe_loc.h | 5 ++
drivers/infiniband/sw/rxe/rxe_mr.c | 7 ++-
drivers/infiniband/sw/rxe/rxe_odp.c | 80
Currently, rxe_responder() directly calls the function to execute Atomic
operations. This needs to be modified to insert some conditional branches
for the new RDMA Write operation and the ODP feature.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/rxe_resp.c | 102
ical" are shown above. The results show that the conversion
improves bandwidth while causing higher latency.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/Makefile | 2 +-
drivers/infiniband/sw/rxe/rxe_comp.c | 42 ---
drivers/infiniband/sw/rxe/rxe_loc.h | 2 +-
drive
ib_umem_odp_map_dma_single_page(), which has been used only by the mlx5
driver, holds umem_mutex on success and releases it on failure. This
behavior is not convenient for other drivers to use, so change it to
always retain the mutex on return.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband
linux-rdma/perftest: Infiniband Verbs Performance Tests
https://github.com/linux-rdma/perftest
Daisuke Matsuda (7):
IB/mlx5: Change ib_umem_odp_map_dma_single_page() to retain umem_mutex
RDMA/rxe: Convert the triple tasklets to workqueues
RDMA/rxe: Cleanup code for responder Atomic operations
On page invalidation, an MMU notifier callback is invoked to unmap DMA
addresses and update umem_odp->dma_list. The callback is registered when an
ODP-enabled MR is created.
Signed-off-by: Daisuke Matsuda
---
drivers/infiniband/sw/rxe/Makefile | 3 ++-
drivers/infiniband/sw/rxe/rxe_odp.c |