Re: [PATCH rdma-next 00/10] Enable relaxed ordering for ULPs

2021-04-11 Thread Max Gurtovoy
On 4/6/2021 2:53 PM, Jason Gunthorpe wrote: On Tue, Apr 06, 2021 at 08:09:43AM +0300, Leon Romanovsky wrote: On Tue, Apr 06, 2021 at 10:37:38AM +0800, Honggang LI wrote: On Mon, Apr 05, 2021 at 08:23:54AM +0300, Leon Romanovsky wrote: From: Leon Romanovsky From Avihai, Relaxed Ordering i
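
Relaxed ordering is requested per MR through an access flag; a minimal sketch of how a ULP could opt in on a fast-registration work request, assuming IB_ACCESS_RELAXED_ORDERING is available in the kernel (the series itself may wire this up differently, e.g. inside the core or the provider driver):

#include <rdma/ib_verbs.h>

/* Sketch: a ULP opting in to relaxed ordering on a fast-registration WR.
 * The flag is a hint; providers without support simply ignore it.
 */
static void ulp_set_reg_wr_access(struct ib_reg_wr *reg_wr)
{
	reg_wr->access = IB_ACCESS_LOCAL_WRITE |
			 IB_ACCESS_REMOTE_READ |
			 IB_ACCESS_REMOTE_WRITE |
			 IB_ACCESS_RELAXED_ORDERING;
}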

Re: [PATCH rdma v2] RDMA: Add rdma_connect_locked()

2020-10-27 Thread Max Gurtovoy
, 46 insertions(+), 23 deletions(-) v2: - Remove extra code from nvme (Chao) - Fix long lines (CH) I've applied this version to rdma-rc - expecting to get these ULPs unbroken for rc2 release Thanks, Jason iser and nvme/rdma look good to me, Reviewed-by: Max Gurtovoy
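
For reference, rdma_connect_locked() exists for ULPs that issue the connect from inside the CM event handler, where the id's handler_mutex is already held and plain rdma_connect() would deadlock. A minimal sketch of the call pattern (callback name and conn_param values are illustrative):

#include <rdma/rdma_cm.h>

/* Sketch: connecting from the ROUTE_RESOLVED event handler context,
 * where the _locked variant must be used instead of rdma_connect().
 */
static int ulp_route_resolved(struct rdma_cm_id *id)
{
	struct rdma_conn_param param = {
		.retry_count	 = 7,
		.rnr_retry_count = 7,
	};

	return rdma_connect_locked(id, &param);
}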

[PATCH 7/9] RDMA/srp: remove support for FMR memory registration

2020-05-27 Thread Max Gurtovoy
FMR is not supported on the most recent RDMA devices (which use the fast memory registration mechanism). Also, FMR was recently removed from the NFS/RDMA ULP. Signed-off-by: Max Gurtovoy Reviewed-by: Israel Rukshin Reviewed-by: Bart Van Assche --- drivers/infiniband/ulp/srp/ib_srp.c | 222
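
For context, the FRWR path that replaces FMR maps a scatterlist into an MR and posts a registration work request. A trimmed sketch of that generic flow (not the SRP-specific code in this patch; error handling and completion wiring are omitted):

#include <linux/scatterlist.h>
#include <rdma/ib_verbs.h>

/* Sketch: FRWR registration, the mechanism that replaces the removed FMR path. */
static int frwr_map_sketch(struct ib_qp *qp, struct ib_mr *mr,
			   struct scatterlist *sg, int sg_nents)
{
	struct ib_reg_wr reg_wr = {};
	int n;

	/* build the MR's page list from the scatterlist */
	n = ib_map_mr_sg(mr, sg, sg_nents, NULL, PAGE_SIZE);
	if (n < sg_nents)
		return -EINVAL;

	/* bump the key so stale remote access to the old rkey fails */
	ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));

	reg_wr.wr.opcode = IB_WR_REG_MR;
	reg_wr.mr	 = mr;
	reg_wr.key	 = mr->rkey;
	reg_wr.access	 = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ |
			   IB_ACCESS_REMOTE_WRITE;

	return ib_post_send(qp, &reg_wr.wr, NULL);
}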

[PATCH 2/9] RDMA/mlx4: remove FMR support for memory registration

2020-05-27 Thread Max Gurtovoy
HCAs that are driven by the mlx4 driver support the FRWR method for registering memory. Remove the ancient and unsafe FMR method. Signed-off-by: Max Gurtovoy --- drivers/infiniband/hw/mlx4/main.c | 10 -- drivers/infiniband/hw/mlx4/mlx4_ib.h| 16 --- drivers/infiniband/hw/mlx4

[PATCH 8/9] RDMA/core: remove FMR pool API

2020-05-27 Thread Max Gurtovoy
This ancient and unsafe method for memory registration is no longer used by any RDMA-based ULP. Remove the FMR pool API from the core driver. Signed-off-by: Max Gurtovoy --- Documentation/driver-api/infiniband.rst | 3 - drivers/infiniband/core/Makefile| 2 +- drivers/infiniband

[PATCH 4/9] RDMA/mthca: remove FMR support for memory registration

2020-05-27 Thread Max Gurtovoy
Remove the ancient and unsafe FMR method. Signed-off-by: Max Gurtovoy --- drivers/infiniband/hw/mthca/mthca_dev.h | 10 - drivers/infiniband/hw/mthca/mthca_mr.c | 262 +-- drivers/infiniband/hw/mthca/mthca_provider.c | 86 - 3 files changed, 1

[PATCH 0/9 v2] Remove FMR support from RDMA drivers

2020-05-27 Thread Max Gurtovoy
"Linux 5.7-rc7" - added "Reviewed-by" Bart signature for SRP Gal Pressman (1): RDMA/mlx5: Remove FMR leftovers Israel Rukshin (1): RDMA/iser: Remove support for FMR memory registration Max Gurtovoy (7): RDMA/mlx4: remove FMR support for memory registration RDMA

[PATCH 3/9] RDMA/rds: remove FMR support for memory registration

2020-05-27 Thread Max Gurtovoy
Use the FRWR method for memory registration by default and remove the ancient and unsafe FMR method. Signed-off-by: Max Gurtovoy --- net/rds/Makefile | 2 +- net/rds/ib.c | 14 +-- net/rds/ib.h | 1 - net/rds/ib_cm.c | 4 +- net/rds/ib_fmr.c | 269

[PATCH 9/9] RDMA/core: remove FMR device ops

2020-05-27 Thread Max Gurtovoy
After removing FMR support from all the RDMA ULPs and providers, there is no need to keep the FMR operations for IB devices. Signed-off-by: Max Gurtovoy --- Documentation/infiniband/core_locking.rst | 2 -- drivers/infiniband/core/device.c | 4 --- drivers/infiniband/core/verbs.c

[PATCH 1/9] RDMA/mlx5: Remove FMR leftovers

2020-05-27 Thread Max Gurtovoy
From: Gal Pressman Remove a few leftovers of the FMR functionality that are no longer used. Signed-off-by: Gal Pressman Signed-off-by: Max Gurtovoy --- drivers/infiniband/hw/mlx5/mlx5_ib.h | 8 1 file changed, 8 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b

[PATCH 5/9] RDMA/rdmavt: remove FMR memory registration

2020-05-27 Thread Max Gurtovoy
Use the FRWR method to register memory by default and remove the ancient and unsafe FMR method. Signed-off-by: Max Gurtovoy --- drivers/infiniband/sw/rdmavt/mr.c | 154 -- drivers/infiniband/sw/rdmavt/mr.h | 15 drivers/infiniband/sw/rdmavt/vt.c | 4 - 3

[PATCH 6/9] RDMA/iser: Remove support for FMR memory registration

2020-05-27 Thread Max Gurtovoy
From: Israel Rukshin FMR is not supported on the most recent RDMA devices (which use the fast memory registration mechanism). Also, FMR was recently removed from the NFS/RDMA ULP. Signed-off-by: Israel Rukshin Signed-off-by: Max Gurtovoy Reviewed-by: Sagi Grimberg --- drivers/infiniband/ulp/iser

Re: [PATCH rdma-next 3/3] RDMA/rw: Support threshold for registration vs scattering to local pages

2019-10-07 Thread Max Gurtovoy
On 10/7/2019 10:54 AM, Leon Romanovsky wrote: On Sun, Oct 06, 2019 at 11:58:25PM -0700, Christoph Hellwig wrote:

/*
- * Check if the device might use memory registration. This is currently only
- * true for iWarp devices. In the future we can hopefully fine tune this based
- * on HCA driver
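
The question in this thread is when rdma_rw should register an MR rather than scatter directly to local pages. A rough illustration of the kind of decision being discussed (the function name and threshold parameter are hypothetical, not the interface that was merged):

#include <rdma/ib_verbs.h>

/* Hypothetical sketch of the "register vs. scatter" decision: iWarp needs an
 * MR for RDMA READ, and beyond some device-tuned SGE count registration may
 * win anyway.
 */
static bool rw_io_prefers_mr(struct ib_device *dev, u8 port_num,
			     u32 sge_count, u32 sge_threshold)
{
	if (rdma_protocol_iwarp(dev, port_num))
		return true;

	return sge_count > sge_threshold;
}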

Re: [pull request][for-next 0/9] Generic DIM lib for netdev and RDMA

2019-06-06 Thread Max Gurtovoy
On 6/6/2019 10:14 AM, Leon Romanovsky wrote: On Wed, Jun 05, 2019 at 11:24:31PM +, Saeed Mahameed wrote: Hi Dave, Doug & Jason This series improves DIM (Dynamically-tuned Interrupt Moderation) to be generic for netdev and RDMA use cases. From Tal and Yamin: The first 7 patches provide
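
Not the library code itself, but a toy model of what dynamically-tuned interrupt moderation decides on every sample window: compare the new traffic sample to the previous one and step the moderation profile up or down. Everything below (names, fields, policy) is made up for illustration only:

#include <linux/types.h>

/* Toy DIM step decision -- purely illustrative, not the generic dim lib. */
struct moderation_sample_sketch {
	u64 packets;
	u64 bytes;
};

static int dim_step_sketch(const struct moderation_sample_sketch *prev,
			   const struct moderation_sample_sketch *curr,
			   int profile_ix, int profile_max)
{
	/* traffic rising: moderate harder, i.e. fewer interrupts per packet */
	if (curr->packets > prev->packets && profile_ix < profile_max)
		return profile_ix + 1;

	/* traffic falling: back off towards lower latency */
	if (curr->packets < prev->packets && profile_ix > 0)
		return profile_ix - 1;

	return profile_ix;
}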

Re: [PATCH v4 11/13] nvmet-tcp: add NVMe over TCP target driver

2018-11-28 Thread Max Gurtovoy
Hi Sagi,

+static inline void nvmet_tcp_put_cmd(struct nvmet_tcp_cmd *cmd)
+{
+	if (unlikely(cmd == &cmd->queue->connect))
+		return;

If you don't return the connect cmd to the list, please don't add it to it in the first place (during alloc_cmd). And if you use it once, we migh
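
To make the suggestion concrete, a sketch of the alternative: never link the pre-allocated connect command into the free list, so the put path needs no special case. The struct layout below is a minimal stand-in, not the driver's real definitions:

#include <linux/list.h>

/* Minimal stand-ins for the driver structs, for illustration only. */
struct tcp_cmd_sketch {
	struct list_head entry;
};

struct tcp_queue_sketch {
	struct list_head	free_list;
	struct tcp_cmd_sketch	*cmds;
	int			nr_cmds;
	struct tcp_cmd_sketch	connect;
};

/* Keep the connect cmd off the free list from the start... */
static void queue_init_cmds_sketch(struct tcp_queue_sketch *queue)
{
	int i;

	INIT_LIST_HEAD(&queue->free_list);
	for (i = 0; i < queue->nr_cmds; i++)
		list_add_tail(&queue->cmds[i].entry, &queue->free_list);
	/* queue->connect is intentionally never added here */
}

/* ...so put() can re-add any cmd unconditionally, with no special case. */
static void queue_put_cmd_sketch(struct tcp_queue_sketch *queue,
				 struct tcp_cmd_sketch *cmd)
{
	list_add_tail(&cmd->entry, &queue->free_list);
}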

Re: [PATCH v3 13/13] nvme-tcp: add NVMe over TCP host driver

2018-11-27 Thread Max Gurtovoy
On 11/27/2018 9:48 AM, Sagi Grimberg wrote: This looks odd. It's not really the timeout handler's job to call nvme_end_request here. Well... if we are not yet LIVE, we will not trigger error recovery, which means nothing will complete this command, so something needs to do it... I think that
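
The point being argued, sketched in bare blk-mq terms (an illustration of the behaviour under discussion, not the code that was eventually merged; the controller-state check is reduced to a boolean):

#include <linux/blk-mq.h>

/* If the controller is not LIVE, error recovery will not run and nothing
 * else will ever complete the request, so the timeout handler finishes it
 * itself; otherwise it lets recovery tear the queues down.
 */
static enum blk_eh_timer_return timeout_sketch(struct request *rq,
					       bool ctrl_is_live)
{
	if (!ctrl_is_live) {
		/* a real driver would set an NVMe status on the request first */
		blk_mq_complete_request(rq);
		return BLK_EH_DONE;
	}

	return BLK_EH_RESET_TIMER;
}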

Re: [PATCH v3 13/13] nvme-tcp: add NVMe over TCP host driver

2018-11-26 Thread Max Gurtovoy
+static enum blk_eh_timer_return
+nvme_tcp_timeout(struct request *rq, bool reserved)
+{
+	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
+	struct nvme_tcp_ctrl *ctrl = req->queue->ctrl;
+	struct nvme_tcp_cmd_pdu *pdu = req->pdu;
+
+	dev_dbg(ctrl->ctrl.device,
+		"queue %d

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-08-01 Thread Max Gurtovoy
sk weight > 1 it will try to be better than the naive mapping I suggested in the previous email.

From 007d773af7b65a1f1ca543f031ca58b3afa5b7d9 Mon Sep 17 00:00:00 2001
From: Max Gurtovoy
Date: Thu, 19 Jul 2018 12:42:00 +
Subject: [PATCH 1/1] blk-mq: fix RDMA queue/cpu mappings assignments

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-31 Thread Max Gurtovoy
On 7/30/2018 6:47 PM, Steve Wise wrote: On 7/23/2018 11:53 AM, Max Gurtovoy wrote: On 7/23/2018 7:49 PM, Jason Gunthorpe wrote: On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote: [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18 queue 9 is not mapped (overlap

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-23 Thread Max Gurtovoy
On 7/23/2018 7:49 PM, Jason Gunthorpe wrote: On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote: [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18 queue 9 is not mapped (overlap). Please try the below: This seems to work. Here are three mapping cases: each

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-19 Thread Max Gurtovoy
Regards, Max,

From 6f7b98f1c43252f459772390c178fc3ad043fc82 Mon Sep 17 00:00:00 2001
From: Max Gurtovoy
Date: Thu, 19 Jul 2018 12:42:00 +
Subject: [PATCH 1/1] blk-mq: fix RDMA queue/cpu mappings assignments for mq

In order to fulfil the block layer cpu <-> queue mapping, all the allocate
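
Very roughly, the "best effort" idea in this thread: honour the device's reported IRQ affinity first, then give any CPU that was left unmapped some queue, so the user-requested queue count never has to shrink because of overlapping masks. A sketch against the current blk_mq_queue_map interface (helper name and fallback policy are illustrative; this is not the patch above):

#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <rdma/ib_verbs.h>

static void map_queues_best_effort(struct blk_mq_queue_map *map,
				   struct ib_device *dev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu, next = 0;

	for_each_possible_cpu(cpu)
		map->mq_map[cpu] = UINT_MAX;		/* mark as unmapped */

	/* first pass: follow the device's IRQ affinity where it is reported */
	for (queue = 0; queue < map->nr_queues; queue++) {
		mask = ib_get_vector_affinity(dev, first_vec + queue);
		if (!mask)
			continue;
		for_each_cpu(cpu, mask)
			map->mq_map[cpu] = map->queue_offset + queue;
	}

	/* second pass: round-robin any CPU still left without a queue */
	for_each_possible_cpu(cpu) {
		if (map->mq_map[cpu] != UINT_MAX)
			continue;
		map->mq_map[cpu] = map->queue_offset + next;
		next = (next + 1) % map->nr_queues;
	}
}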

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-19 Thread Max Gurtovoy
On 7/18/2018 10:29 PM, Steve Wise wrote: On 7/18/2018 2:38 PM, Sagi Grimberg wrote: IMO we must fulfil the user's wish to connect to N queues and not reduce it because of affinity overlaps. So in order to push Leon's patch we must also fix blk_mq_rdma_map_queues to do a best effort ma

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-18 Thread Max Gurtovoy
On 7/18/2018 2:38 PM, Sagi Grimberg wrote: IMO we must fulfil the user's wish to connect to N queues and not reduce it because of affinity overlaps. So in order to push Leon's patch we must also fix blk_mq_rdma_map_queues to do a best-effort mapping according to the affinity and map the rest

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-17 Thread Max Gurtovoy
On 7/17/2018 11:58 AM, Leon Romanovsky wrote: On Tue, Jul 17, 2018 at 11:46:40AM +0300, Max Gurtovoy wrote: On 7/16/2018 8:08 PM, Steve Wise wrote: Hey Max: Hey, On 7/16/2018 11:46 AM, Max Gurtovoy wrote: On 7/16/2018 5:59 PM, Sagi Grimberg wrote: Hi, I've tested this

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-17 Thread Max Gurtovoy
On 7/16/2018 8:08 PM, Steve Wise wrote: Hey Max: Hey, On 7/16/2018 11:46 AM, Max Gurtovoy wrote: On 7/16/2018 5:59 PM, Sagi Grimberg wrote: Hi, I've tested this patch and it seems problematic at this moment. Problematic how? What are you seeing? Connection failures and same

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-16 Thread Max Gurtovoy
On 7/16/2018 5:59 PM, Sagi Grimberg wrote: Hi, I've tested this patch and it seems problematic at this moment. Problematic how? What are you seeing? Connection failures and the same error Steve saw: [Mon Jul 16 16:19:11 2018] nvme nvme0: Connect command failed, error wo/DNR bit: -16402 [Mon

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-16 Thread Max Gurtovoy
Hi, I've tested this patch and it seems problematic at this moment. Maybe this is because of the bug that Steve mentioned on the NVMe mailing list. Sagi mentioned that we should fix it in the NVMe/RDMA initiator, and I'll run his suggestion as well. BTW, when I run blk_mq_map_queues it works fo

Re: Backport Mellanox mlx5 patches to stable 4.9.y

2018-02-01 Thread Max Gurtovoy
On 2/1/2018 10:21 AM, Greg KH wrote: On Tue, Jan 30, 2018 at 10:12:51AM +0100, Marta Rybczynska wrote: Hello Mellanox maintainers, I'd like to ask you to OK backporting two patches in the mlx5 driver to the 4.9 stable tree (they have been in master for some time already). We have multiple deployments in 4.9

Re: [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma

2017-04-04 Thread Max Gurtovoy
Any feedback is welcome. Hi Sagi, the patchset looks good, and of course we can add support for more drivers in the future. Have you run some performance testing with the nvmf initiator? Sagi Grimberg (6): mlx5: convert to generic pci_alloc_irq_vectors mlx5: move affinity hints assig
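
For readers outside the thread: the core of the series is letting the PCI layer spread completion vectors across CPUs instead of each driver hand-rolling affinity hints. A minimal sketch of the driver-side change (values are illustrative):

#include <linux/pci.h>

/* Sketch: ask the PCI core for MSI-X vectors and let it handle affinity
 * spreading (PCI_IRQ_AFFINITY), rather than setting hints manually.
 */
static int alloc_comp_vectors_sketch(struct pci_dev *pdev, int max_vecs)
{
	return pci_alloc_irq_vectors(pdev, 1, max_vecs,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
}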

Re: [PATCH rfc 5/6] block: Add rdma affinity based queue mapping helper

2017-04-04 Thread Max Gurtovoy
_GPL(blk_mq_rdma_map_queues); Otherwise, looks good. Reviewed-by: Max Gurtovoy
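
For completeness, the consumer side is tiny: a block driver's .map_queues callback simply delegates to the new helper. A sketch roughly following the signature proposed in this RFC (the ctrl type is a hypothetical stand-in):

#include <linux/blk-mq-rdma.h>

/* Hypothetical ULP controller, only here to keep the sketch self-contained. */
struct ulp_ctrl_sketch {
	struct ib_device *ibdev;
};

static int ulp_map_queues_sketch(struct blk_mq_tag_set *set)
{
	struct ulp_ctrl_sketch *ctrl = set->driver_data;

	return blk_mq_rdma_map_queues(set, ctrl->ibdev, 0);
}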