Hi,

while searching for a bug around zPCI + NVMe IRQ handling on a distro kernel, I got confused by how the NVMe driver handles the maximum number of I/O queues. I think I grokked it in the end, but I would like to propose the following improvements; that said, I'm quite new to this code.

I tested both patches on s390x (with a debug config) and on x86_64, i.e. with both data center and consumer NVMe drives. For the second patch, since I don't own a device with the quirk, I tried always returning 1 from nvme_max_io_queues() and confirmed that on my Evo 970 Pro this resulted in about half the performance in a fio test but did not otherwise break things. I also couldn't find a reason in the code why allocating only the I/O queues we actually use would be problematic, but of course I might have missed something.
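For reference, the test hack for the quirk case was nothing more than short-circuiting nvme_max_io_queues() in drivers/nvme/host/pci.c, roughly like below. This is only a local test change and not part of the series; the usual calculation (num_possible_cpus() plus the write/poll queue counts, if I read it correctly) is simply bypassed:

static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
{
	/*
	 * Local test hack: claim a single I/O queue to emulate the
	 * quirk case without owning an affected device.
	 */
	return 1;
}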
Best regards,
Niklas Schnelle

Niklas Schnelle (2):
  nvme-pci: drop min() from nr_io_queues assignment
  nvme-pci: don't allocate unused I/O queues

 drivers/nvme/host/pci.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

-- 
2.17.1