Mapping as little as 64GB can take more than 10 seconds,
triggering issues on kernels with CONFIG_PREEMPT_NONE=y.

ib_umem_get() already splits the work into 2MB units on x86_64;
adding a cond_resched() in the long-running loop is enough
to solve the issue.
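
For illustration, this is roughly how the loop reads with the fix
applied; the tail of the pin_user_pages_fast() call and the error
path are reconstructed from the surrounding upstream code rather
than quoted from this diff:

	/*
	 * Sketch of the pinning loop in ib_umem_get() after this patch.
	 * On x86_64, PAGE_SIZE / sizeof(struct page *) = 4096 / 8 = 512
	 * pages per batch, i.e. 2MB of user memory pinned per iteration;
	 * a 64GB mapping therefore runs 32768 iterations, previously
	 * without ever yielding the CPU on CONFIG_PREEMPT_NONE=y.
	 */
	while (npages) {
		cond_resched();	/* let other tasks run between 2MB batches */
		ret = pin_user_pages_fast(cur_base,
					  min_t(unsigned long, npages,
						PAGE_SIZE /
						sizeof(struct page *)),
					  gup_flags | FOLL_LONGTERM, page_list);
		if (ret < 0)
			goto umem_release;
		/* ... fill the sg list from page_list, advance cur_base ... */
		npages -= ret;
	}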

Note that sg_alloc_table() can still take more than 100 ms,
which is also problematic. This might be addressed later
in ib_umem_add_sg_table(), by adding new blocks to the sgl
on demand.
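
As a hedged sketch of that idea (the helper name umem_sg_extend and
the chunk size UMEM_SG_CHUNK are hypothetical, not from this patch),
chunks of scatterlist entries could be allocated and chained as
pinning progresses, the way lib/scatterlist.c chains table pages:

	#include <linux/scatterlist.h>
	#include <linux/slab.h>

	/* Hypothetical chunk size, for illustration only. */
	#define UMEM_SG_CHUNK 128

	/*
	 * Allocate one more chunk of scatterlist entries and chain it
	 * to the previous chunk, instead of sizing the whole table up
	 * front in sg_alloc_table(). sg_chain() consumes the entry at
	 * prv_last as the link to the new chunk.
	 */
	static struct scatterlist *umem_sg_extend(struct scatterlist *prv_last)
	{
		struct scatterlist *sgl;

		sgl = kmalloc_array(UMEM_SG_CHUNK, sizeof(*sgl), GFP_KERNEL);
		if (!sgl)
			return NULL;
		sg_init_table(sgl, UMEM_SG_CHUNK);
		if (prv_last)
			sg_chain(prv_last, 1, sgl);
		return sgl;
	}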

Signed-off-by: Eric Dumazet <eduma...@google.com>
Cc: Doug Ledford <dledf...@redhat.com>
Cc: Jason Gunthorpe <j...@ziepe.ca>
Cc: linux-r...@vger.kernel.org
---
 drivers/infiniband/core/umem.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 82455a1392f1d19c96ae956f0bd4e93e3a52d29c..831bff8d52e547834e9e04064127fbb280595126 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -261,6 +261,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
        sg = umem->sg_head.sgl;
 
        while (npages) {
+               cond_resched();
                ret = pin_user_pages_fast(cur_base,
                                          min_t(unsigned long, npages,
                                                PAGE_SIZE /
-- 
2.28.0.rc0.142.g3c755180ce-goog
