On Mon, Apr 10, 2017 at 01:05:50PM -0500, Steve Wise wrote:
> I'll test cxgb4 if you convert it. :)
That will take a lot of work. The problem with cxgb4 is that it
allocates all the interrupts at device enable time, but then only
hands them out to ULDs as they attach, while this scheme assumes a
fixed queue to vector assignment that is set up when the vectors are
allocated.
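For reference, the generic allocation this series builds on looks roughly
like the following in a driver that can commit to per-queue vectors at
device enable time (a minimal sketch with made-up names, not cxgb4 or
mlx5 code):

#include <linux/pci.h>
#include <linux/interrupt.h>

/*
 * Hypothetical probe-time helper: let the PCI core allocate one MSI-X
 * vector per queue and spread the affinity masks over the online CPUs.
 * The queue <-> vector assignment is therefore fixed at enable time,
 * which is the assumption cxgb4's attach-time handout does not meet.
 */
static int example_alloc_queue_vectors(struct pci_dev *pdev,
				       unsigned int nr_queues)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* one vector for non-queue (admin/event) work */
	};
	int nvecs;

	nvecs = pci_alloc_irq_vectors_affinity(pdev, 2, nr_queues + 1,
					       PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					       &affd);
	if (nvecs < 0)
		return nvecs;

	/* queue i later calls request_irq(pci_irq_vector(pdev, i + 1), ...) */
	return nvecs - 1;
}
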
On 4/2/2017 8:41 AM, Sagi Grimberg wrote:
This patch set is aiming to automatically find the optimal
queue <-> irq multi-queue assignments in storage ULPs (demonstrated
on nvme-rdma) based on the underlying rdma device irq affinity
settings.
First two patches modify mlx5 core driver to use the generic API to
allocate an array of irq vectors with automatic affinity settings.
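Roughly what such an affinity-based queue mapping boils down to, as a
sketch patterned on the existing blk_mq_pci_map_queues() helper
(illustrative names, not the series' actual code; an rdma version would
presumably take the per-completion-vector mask from the ib_device rather
than the pci_dev):

#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/pci.h>

/*
 * For every hardware queue, look up the affinity mask of the irq vector
 * backing it and map all CPUs in that mask to the queue, so submissions
 * run on the same cores that service the completions.  'first_vec'
 * skips any non-queue vectors at the start of the table.
 */
static int example_map_queues(struct blk_mq_tag_set *set,
			      struct pci_dev *pdev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < set->nr_hw_queues; queue++) {
		mask = pci_irq_get_affinity(pdev, first_vec + queue);
		if (!mask) {
			/* no affinity info, fall back to queue 0 for all CPUs */
			for_each_possible_cpu(cpu)
				set->mq_map[cpu] = 0;
			return 0;
		}

		for_each_cpu(cpu, mask)
			set->mq_map[cpu] = queue;
	}
	return 0;
}
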
Hi Sagi,

The patchset looks good and of course we can add support for more
drivers in the future.
Have you run some performance testing with the nvmf initiator?

Hey Max,

I'm limited by the target machine in terms of IOPs, but the host shows
~10% cpu usage decrease, and latency improves slightly.

Any feedback is welcome.
Sagi Grimberg (6):
  mlx5: convert to generic pci_alloc_irq_vectors
  mlx5: move affinity hints assignments to generic code
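As a rough illustration of what the first two patches in that list do
(simplified, with made-up function names, not the actual mlx5 diff): the
driver no longer open-codes per-vector affinity hints; it asks the PCI
core to allocate the vectors and manage their affinity in one call.

#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/pci.h>

/* Before: MSI-X vectors enabled by hand, then a hint set per vector. */
static void legacy_affinity_hints(struct pci_dev *pdev, unsigned int nvec)
{
	unsigned int i, cpu;

	for (i = 0; i < nvec; i++) {
		cpu = cpumask_local_spread(i, dev_to_node(&pdev->dev));
		irq_set_affinity_hint(pci_irq_vector(pdev, i), cpumask_of(cpu));
	}
}

/* After: one call; the PCI core assigns and manages the affinity itself. */
static int generic_alloc(struct pci_dev *pdev, unsigned int nvec)
{
	return pci_alloc_irq_vectors(pdev, 1, nvec,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
}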