On 4/6/2021 2:53 PM, Jason Gunthorpe wrote:
On Tue, Apr 06, 2021 at 08:09:43AM +0300, Leon Romanovsky wrote:
On Tue, Apr 06, 2021 at 10:37:38AM +0800, Honggang LI wrote:
On Mon, Apr 05, 2021 at 08:23:54AM +0300, Leon Romanovsky wrote:
From: Leon Romanovsky
From Avihai,
Relaxed Ordering i
, 46 insertions(+), 23 deletions(-)
v2:
- Remove extra code from nvme (Chao)
- Fix long lines (CH)
I've applied this version to rdma-rc, expecting to get these ULPs
unbroken for the rc2 release.
Thanks,
Jason
iser and nvme/rdma look good to me,
Reviewed-by: Max Gurtovoy
FMR is not supported on most recent RDMA devices (which use the fast
memory registration mechanism instead). Also, FMR was recently removed
from the NFS/RDMA ULP.
Signed-off-by: Max Gurtovoy
Reviewed-by: Israel Rukshin
Reviewed-by: Bart Van Assche
---
drivers/infiniband/ulp/srp/ib_srp.c | 222
HCAs driven by the mlx4 driver support the FRWR method to register
memory. Remove the ancient and unsafe FMR method.
Signed-off-by: Max Gurtovoy
---
drivers/infiniband/hw/mlx4/main.c    | 10 --
drivers/infiniband/hw/mlx4/mlx4_ib.h | 16 ---
drivers/infiniband/hw/mlx4
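For readers who have not used either method: a minimal sketch of the
FRWR flow that replaces FMR, written against the kernel verbs API (the
pd/qp/sgl parameters and the helper itself are placeholders, not code
from this series):

#include <rdma/ib_verbs.h>

static int frwr_register_example(struct ib_pd *pd, struct ib_qp *qp,
                                 struct scatterlist *sgl, int sg_nents,
                                 int max_sge)
{
        struct ib_reg_wr reg_wr = {};
        struct ib_mr *mr;
        int n, ret;

        /* allocate a fast-registration MR instead of an FMR pool entry */
        mr = ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, max_sge);
        if (IS_ERR(mr))
                return PTR_ERR(mr);

        /* bind the scatterlist to the MR's page list */
        n = ib_map_mr_sg(mr, sgl, sg_nents, NULL, PAGE_SIZE);
        if (n < sg_nents) {
                ret = n < 0 ? n : -EINVAL;
                goto out_dereg;
        }

        /* unlike FMR, registration is posted as a work request on the
         * send queue, so it is ordered against the I/O that uses it */
        reg_wr.wr.opcode = IB_WR_REG_MR;
        reg_wr.mr = mr;
        reg_wr.key = mr->rkey;
        reg_wr.access = IB_ACCESS_LOCAL_WRITE |
                        IB_ACCESS_REMOTE_READ |
                        IB_ACCESS_REMOTE_WRITE;

        ret = ib_post_send(qp, &reg_wr.wr, NULL);
        if (ret)
                goto out_dereg;
        return 0;

out_dereg:
        ib_dereg_mr(mr);
        return ret;
}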
This ancient and unsafe method for memory registration is no longer used
by any RDMA-based ULP. Remove the FMR pool API from the core driver.
Signed-off-by: Max Gurtovoy
---
Documentation/driver-api/infiniband.rst | 3 -
drivers/infiniband/core/Makefile        | 2 +-
drivers/infiniband
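For context, the interface being deleted looked roughly like this (a
sketch of the pre-5.8 <rdma/ib_fmr_pool.h> API; the parameter values
are illustrative only):

#include <rdma/ib_fmr_pool.h>

static int fmr_pool_example(struct ib_pd *pd, u64 *page_list, int npages,
                            u64 iova)
{
        struct ib_fmr_pool_param params = {
                .max_pages_per_fmr = 64,
                .page_shift        = PAGE_SHIFT,
                .access            = IB_ACCESS_LOCAL_WRITE |
                                     IB_ACCESS_REMOTE_READ |
                                     IB_ACCESS_REMOTE_WRITE,
                .pool_size         = 32,
                .dirty_watermark   = 8,
        };
        struct ib_fmr_pool *pool;
        struct ib_pool_fmr *fmr;

        pool = ib_create_fmr_pool(pd, &params);
        if (IS_ERR(pool))
                return PTR_ERR(pool);

        /* the "unsafe" part: mappings are cached and stay remotely
         * accessible until a lazy flush, long after unmap returns */
        fmr = ib_fmr_pool_map_phys(pool, page_list, npages, iova);
        if (IS_ERR(fmr)) {
                ib_destroy_fmr_pool(pool);
                return PTR_ERR(fmr);
        }

        ib_fmr_pool_unmap(fmr);
        ib_destroy_fmr_pool(pool);
        return 0;
}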
Remove the ancient and unsafe FMR method.
Signed-off-by: Max Gurtovoy
---
drivers/infiniband/hw/mthca/mthca_dev.h      |  10 -
drivers/infiniband/hw/mthca/mthca_mr.c       | 262 +--
drivers/infiniband/hw/mthca/mthca_provider.c |  86 -
3 files changed, 1
"Linux 5.7-rc7"
- added Bart's "Reviewed-by" signature for SRP
Gal Pressman (1):
RDMA/mlx5: Remove FMR leftovers
Israel Rukshin (1):
RDMA/iser: Remove support for FMR memory registration
Max Gurtovoy (7):
RDMA/mlx4: remove FMR support for memory registration
RDMA
Use the FRWR method for memory registration by default and remove the
ancient and unsafe FMR method.
Signed-off-by: Max Gurtovoy
---
net/rds/Makefile | 2 +-
net/rds/ib.c     | 14 +--
net/rds/ib.h     | 1 -
net/rds/ib_cm.c  | 4 +-
net/rds/ib_fmr.c | 269
After removing FMR support from all the RDMA ULPs and providers, there
is no need to keep the FMR operations for IB devices.
Signed-off-by: Max Gurtovoy
---
Documentation/infiniband/core_locking.rst | 2 --
drivers/infiniband/core/device.c          | 4 ---
drivers/infiniband/core/verbs.c
From: Gal Pressman
Remove a few leftovers from FMR functionality which are no longer used.
Signed-off-by: Gal Pressman
Signed-off-by: Max Gurtovoy
---
drivers/infiniband/hw/mlx5/mlx5_ib.h | 8
1 file changed, 8 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
Use the FRWR method to register memory by default and remove the ancient
and unsafe FMR method.
Signed-off-by: Max Gurtovoy
---
drivers/infiniband/sw/rdmavt/mr.c | 154 --
drivers/infiniband/sw/rdmavt/mr.h | 15
drivers/infiniband/sw/rdmavt/vt.c | 4 -
3
From: Israel Rukshin
FMR is not supported on most recent RDMA devices (which use the fast
memory registration mechanism instead). Also, FMR was recently removed
from the NFS/RDMA ULP.
Signed-off-by: Israel Rukshin
Signed-off-by: Max Gurtovoy
Reviewed-by: Sagi Grimberg
---
drivers/infiniband/ulp/iser
On 10/7/2019 10:54 AM, Leon Romanovsky wrote:
On Sun, Oct 06, 2019 at 11:58:25PM -0700, Christoph Hellwig wrote:
/*
- * Check if the device might use memory registration. This is currently only
- * true for iWarp devices. In the future we can hopefully fine tune this based
- * on HCA driver input.
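The check this comment sits above looks roughly as follows (a simplified
sketch, assuming the drivers/infiniband/core/rw.c code of that era):

#include <rdma/ib_verbs.h>

static bool rdma_rw_io_needs_mr(struct ib_device *dev, u8 port_num,
                                enum dma_data_direction dir)
{
        /* an iWarp RDMA READ target needs REMOTE_WRITE rights, which
         * only a registered MR can grant, so registration is mandatory
         * there regardless of the HCA */
        if (rdma_protocol_iwarp(dev, port_num) && dir == DMA_FROM_DEVICE)
                return true;

        return false;
}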
On 6/6/2019 10:14 AM, Leon Romanovsky wrote:
On Wed, Jun 05, 2019 at 11:24:31PM +0000, Saeed Mahameed wrote:
Hi Dave, Doug & Jason
This series improves DIM - Dynamically-tuned Interrupt
Moderation- to be generic for netdev and RDMA use-cases.
From Tal and Yamin:
The first 7 patches provide
hi Sagi,
+static inline void nvmet_tcp_put_cmd(struct nvmet_tcp_cmd *cmd)
+{
+ if (unlikely(cmd == &cmd->queue->connect))
+ return;
if you don't return connect cmd to the list please don't add it to it in
the first place (during alloc_cmd). and if you use it once, we migh
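A sketch of the suggested alternative (hypothetical: the entry and
free_list field names are assumptions, not taken from the patch):

        /* alloc side: regular cmds go on the free list, the dedicated
         * connect cmd is simply never added */
        if (cmd != &queue->connect)
                list_add_tail(&cmd->entry, &queue->free_list);

static inline void nvmet_tcp_put_cmd(struct nvmet_tcp_cmd *cmd)
{
        /* put side then needs no special case */
        list_add_tail(&cmd->entry, &cmd->queue->free_list);
}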
On 11/27/2018 9:48 AM, Sagi Grimberg wrote:
This looks odd. It's not really the timeout handler's job to
call nvme_end_request here.
Well... if we are not yet LIVE, we will not trigger error
recovery, which means nothing will complete this command, so
something needs to do it...
I think that
+static enum blk_eh_timer_return
+nvme_tcp_timeout(struct request *rq, bool reserved)
+{
+        struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
+        struct nvme_tcp_ctrl *ctrl = req->queue->ctrl;
+        struct nvme_tcp_cmd_pdu *pdu = req->pdu;
+
+        dev_dbg(ctrl->ctrl.device,
+                "queue %d: timeout request %#x type %d\n",
+                nvme_tcp_queue_id(req->queue), rq->tag, pdu->hdr.type);
weight > 1 it will try to be better than the naive mapping I suggested
in the previous email.
From 007d773af7b65a1f1ca543f031ca58b3afa5b7d9 Mon Sep 17 00:00:00 2001
From: Max Gurtovoy
Date: Thu, 19 Jul 2018 12:42:00 +0000
Subject: [PATCH 1/1] blk-mq: fix RDMA queue/cpu mappings assignments
On 7/30/2018 6:47 PM, Steve Wise wrote:
On 7/23/2018 11:53 AM, Max Gurtovoy wrote:
On 7/23/2018 7:49 PM, Jason Gunthorpe wrote:
On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote:
[ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
queue 9 is not mapped (overlap).
On 7/23/2018 7:49 PM, Jason Gunthorpe wrote:
On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote:
[ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
queue 9 is not mapped (overlap).
please try the below:
This seems to work. Here are three mapping cases: each
Regards,
Max,
From 6f7b98f1c43252f459772390c178fc3ad043fc82 Mon Sep 17 00:00:00 2001
From: Max Gurtovoy
Date: Thu, 19 Jul 2018 12:42:00 +0000
Subject: [PATCH 1/1] blk-mq: fix RDMA queue/cpu mappings assignments for mq
In order to fulfil the block layer cpu <-> queue mapping, all the
allocate
On 7/18/2018 10:29 PM, Steve Wise wrote:
On 7/18/2018 2:38 PM, Sagi Grimberg wrote:
IMO we must fulfil the user's wish to connect to N queues and not reduce
it because of affinity overlaps. So in order to push Leon's patch we
must also fix blk_mq_rdma_map_queues to do a best-effort mapping
according to the affinity and map the rest
On 7/18/2018 2:38 PM, Sagi Grimberg wrote:
IMO we must fulfil the user's wish to connect to N queues and not reduce
it because of affinity overlaps. So in order to push Leon's patch we
must also fix blk_mq_rdma_map_queues to do a best-effort mapping
according to the affinity and map the rest
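For reference, blk_mq_rdma_map_queues of that era looked roughly like
this (a simplified sketch, assuming the 4.18-era block/blk-mq-rdma.c);
the complaint above is that the fallback is all-or-nothing instead of a
best-effort per-queue mapping:

int blk_mq_rdma_map_queues(struct blk_mq_tag_set *set,
                           struct ib_device *dev, int first_vec)
{
        const struct cpumask *mask;
        unsigned int queue, cpu;

        for (queue = 0; queue < set->nr_hw_queues; queue++) {
                mask = ib_get_vector_affinity(dev, first_vec + queue);
                if (!mask)
                        goto fallback;

                for_each_cpu(cpu, mask)
                        set->mq_map[cpu] = queue;
        }

        return 0;

fallback:
        /* one queue without affinity information sends the whole set
         * back to the naive mapping */
        return blk_mq_map_queues(set);
}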
On 7/17/2018 11:58 AM, Leon Romanovsky wrote:
On Tue, Jul 17, 2018 at 11:46:40AM +0300, Max Gurtovoy wrote:
On 7/16/2018 8:08 PM, Steve Wise wrote:
Hey Max:
Hey,
On 7/16/2018 11:46 AM, Max Gurtovoy wrote:
On 7/16/2018 5:59 PM, Sagi Grimberg wrote:
Hi,
I've tested this patch and it seems problematic at the moment.
On 7/16/2018 8:08 PM, Steve Wise wrote:
Hey Max:
Hey,
On 7/16/2018 11:46 AM, Max Gurtovoy wrote:
On 7/16/2018 5:59 PM, Sagi Grimberg wrote:
Hi,
I've tested this patch and it seems problematic at the moment.
Problematic how? What are you seeing?
Connection failures and the same error Steve saw:
On 7/16/2018 5:59 PM, Sagi Grimberg wrote:
Hi,
I've tested this patch and it seems problematic at the moment.
Problematic how? What are you seeing?
Connection failures and the same error Steve saw:
[Mon Jul 16 16:19:11 2018] nvme nvme0: Connect command failed, error wo/DNR bit: -16402
[Mon
Hi,
I've tested this patch and it seems problematic at the moment.
Maybe this is because of the bug that Steve mentioned on the NVMe
mailing list. Sagi mentioned that we should fix it in the NVMe/RDMA
initiator, and I'll run his suggestion as well.
BTW, when I run blk_mq_map_queues it works fo
On 2/1/2018 10:21 AM, Greg KH wrote:
On Tue, Jan 30, 2018 at 10:12:51AM +0100, Marta Rybczynska wrote:
Hello Mellanox maintainers,
I'd like to ask you to OK backporting two patches in the mlx5 driver to
the 4.9 stable tree (they've been in master for some time already).
We have multiple deployments in 4.9
Any feedback is welcome.
Hi Sagi,
the patchset looks good, and of course we can add support for more
drivers in the future.
Have you run some performance testing with the nvmf initiator?
Sagi Grimberg (6):
mlx5: convert to generic pci_alloc_irq_vectors
mlx5: move affinity hints assig
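A minimal sketch of the generic API the series converts mlx5 to
(illustrative values; not code from the series itself):

#include <linux/pci.h>

static int example_alloc_vectors(struct pci_dev *pdev, unsigned int max_vecs)
{
        const struct cpumask *mask;
        int nvec;

        /* PCI_IRQ_AFFINITY asks the PCI core to spread the vectors
         * across CPUs instead of the driver doing it by hand */
        nvec = pci_alloc_irq_vectors(pdev, 1, max_vecs,
                                     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
        if (nvec < 0)
                return nvec;

        /* drivers then query the spread rather than setting hints */
        mask = pci_irq_get_affinity(pdev, 0);
        if (mask)
                dev_info(&pdev->dev, "vector 0 affinity: %*pbl\n",
                         cpumask_pr_args(mask));

        return nvec;
}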
EXPORT_SYMBOL_GPL(blk_mq_rdma_map_queues);
Otherwise, looks good.
Reviewed-by: Max Gurtovoy