On Thu, Dec 17, 2015 at 12:14:48PM -0600, Bjorn Helgaas wrote:
> On Mon, Dec 07, 2015 at 02:32:27PM -0700, Keith Busch wrote:
> > +/*
> > + * VMD h/w converts posted config writes to non-posted. The read-back in this
> > + * function forces the completion so it re
true;
> > break;
> > }
> > -
> > if (!found)
> > pci_add_resource(resources, &info->busn);
> >
> > And I only refined the commit message based on the test patch
> > I sent to Arthur as an attachment at Nov
On Wed, Dec 02, 2015 at 02:07:34PM -0700, Jens Axboe wrote:
> Christoph, for-4.5/nvme also fails if integrity isn't enabled:
I forgot about this since I've merged this in my repo to fix:
https://lkml.org/lkml/2015/10/26/546
That ok, or should we handle this differently?
On Tue, Nov 24, 2015 at 11:19:34PM +0100, Rafael J. Wysocki wrote:
> Quite frankly, I'm more likely to revert the offending commit at this
> point as that's not the only regression reported against it and the
> fix only helps in one case (out of three known to me).
Using 4.4-rc1 and can confirm th
ch in the series adds a new print format since EUI-64
didn't have a specifier. It's essentially an extended MAC identifier,
so this appends a specifier for the longer format to that.
I don't know who owns lib/vsprintf, so copying Greg & LKML.
Keith Busch (2):
Print: Add print forma
ormat: 00-00-00-00-00-00-00-00
Signed-off-by: Keith Busch
---
drivers/nvme/host/core.c | 60 ++--
drivers/nvme/host/nvme.h | 3 +++
2 files changed, 61 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
MAC addresses may be formed using rules based on EUI-64, which is 2 bytes
longer than a typical 6-byte MAC. This patch adds a long specifier to
the %pM format to support the extended unique identifier.
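For illustration, usage would look roughly like this (a sketch; the
buffer names are hypothetical, and the modifier spelling assumes the
'l' extension this series proposes):

    /* Sketch only: print an 8-byte EUI-64 with the proposed 'l'
     * extension next to a plain 6-byte MAC with %pM. */
    u8 eui64[8] = { 0x00, 0x02, 0xc9, 0xff, 0xfe, 0x01, 0x02, 0x03 };
    u8 mac[6]   = { 0x00, 0x02, 0xc9, 0x01, 0x02, 0x03 };

    pr_info("eui: %pMl\n", eui64); /* 00:02:c9:ff:fe:01:02:03 */
    pr_info("mac: %pM\n", mac);    /* 00:02:c9:01:02:03 */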
Signed-off-by: Keith Busch
---
Documentation/printk-formats.txt | 13 ++---
lib
alizes the separator during its declaration, and removes the switch
fall through case.
Signed-off-by: Keith Busch
---
Changed from previous version:
Fixed checks for the 'l' specifier. This can be in fmt[1] or fmt[2], as
pointed out by Joe Perches in the original review (thanks!).
Documentatio
in a loop, and the default ':' separator is
initialized at declaration time. A side effect of this allows 'F' and
'R' to both be specified, so these are appended to the documentation.
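The shape of the parsing, sketched (not the actual hunk):

    /* Sketch: scan trailing %pM modifiers in a loop so 'F' and 'R'
     * may be combined in either order. */
    char separator = ':';   /* default set at declaration */
    bool reversed = false;

    while (isalpha(*++fmt)) {
            switch (*fmt) {
            case 'F':       /* FDDI canonical form, '-' separated */
                    separator = '-';
                    break;
            case 'R':       /* Bluetooth reversed byte order */
                    reversed = true;
                    break;
            }
    }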
Signed-off-by: Keith Busch
---
From previous version:
Use 'while (isalpha
And use the max bus resource from the parent rather than assume 255.
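Sketched, using the PCI core's bus resource (an illustration, not the
actual hunk):

    /* Sketch: bound child bus numbers by what the parent's bus
     * resource can reach instead of assuming 255. */
    static int sketch_max_busnr(struct pci_bus *parent)
    {
            return min_t(int, 0xff, parent->busn_res.end);
    }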
Signed-off-by: Keith Busch
---
drivers/pci/probe.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 8361d27..1cb3be7 100644
--- a/drivers/pci
t()"
Subscribe to the "new" domain specific operations rather than defining
this as a PCI FIXUP.
Fixed memory leak if irq_domain creation failed.
Keith Busch (4):
pci: skip child bus with conflicting resources
x86/pci: allow pci domain specific dma ops
x86/pci:
New x86 PCI h/w will require DMA operations specific to that domain. This
patch allows those domains to register their operations, and sets devices
as they are discovered in that domain to use them.
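A sketch of the registration interface described (the names here are
assumptions for illustration, not necessarily the patch's):

    /* Sketch: a domain registers dma_map_ops keyed by PCI domain
     * number; device discovery then points devices in that domain
     * at the registered ops. */
    struct dma_domain {
            struct list_head node;
            struct dma_map_ops *dma_ops;
            int domain_nr;
    };

    void add_dma_domain(struct dma_domain *domain);
    void del_dma_domain(struct dma_domain *domain);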
Signed-off-by: Keith Busch
---
arch/x86/include/asm/device.h | 10 ++
arch/x86/pci
or IO ports. Devices or drivers
requiring these features should either not be placed below VMD-owned
root ports, or VMD should be disabled by BIOS for such endpoints.
Signed-off-by: Keith Busch
---
arch/x86/Kconfig | 17 ++
arch/x86/include/asm/vmd.h | 10 +
arch/x86/kerne
PCI-e segments will continue to use the lower 16 bits as required by
ACPI. Special domains may use the full 32 bits.
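Sketched under the assumption that the change simply widens the
filter's domain field (the diffstat shows one line per file):

    /* Sketch: a 32-bit domain in the filter; -1 still means "any",
     * and ACPI-provided segments stay within the lower 16 bits. */
    struct pci_filter {
            int domain;          /* -1 == any; full 32 bits usable */
            int bus, slot, func; /* -1 == any */
    };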
Signed-off-by: Keith Busch
---
lib/filter.c | 2 +-
lib/pci.h    | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/filter.c b/lib/filter.c
index
On Thu, Jan 21, 2016 at 02:34:28PM -0700, Jens Axboe wrote:
> On 01/21/2016 07:57 AM, Stefan Haberland wrote:
> >Hi,
> >
> >unfortunately commit e36f62042880 "block: split bios to max possible length"
> >breaks the DASD driver on s390. We expect the block requests to be
> >multiple
> >of 4k in size
On Thu, Jan 21, 2016 at 05:12:13PM -0800, Linus Torvalds wrote:
> On Thu, Jan 21, 2016 at 2:51 PM, Keith Busch wrote:
> >
> > My apologies for the trouble. I trust it really is broken, but I don't
> > quite see how. The patch supposedly splits the transfer to the max
On Thu, Jan 21, 2016 at 08:15:37PM -0800, Linus Torvalds wrote:
> For the case of nvme, for example, I think the max sector number is so
> high that you'll never hit that anyway, and you'll only ever hit the
> chunk limit. No?
The device's max transfer and chunk size are not very large, both fixed
On Fri, Jan 22, 2016 at 10:43:59AM -0800, Linus Torvalds wrote:
> On Fri, Jan 22, 2016 at 10:29 AM, Ming Lei wrote:
> >
> > This patch fixes the issue by making the max io size aligned
> > to logical block size.
>
> Looks better, thanks.
>
> I'd suggest also moving the "max_sectors" variable int
On Fri, Jan 22, 2016 at 12:22:18PM -0800, Linus Torvalds wrote:
>
> On Jan 22, 2016 12:11 PM, "Keith Busch" wrote:
> >
> > The value of max_sectors doesn't change in this function
>
> Why do you say that? It depends on the offset, so it clearly *does*
Hi, thanks for the feedback. I've a few follow up questions.
On Sun, Jan 03, 2016 at 03:11:24PM +0100, Martin Mares wrote:
> This is definitely not enough. Try grepping the source for "domain" :-)
>
> At least the following places need updating, too:
>
> o struct pci_filter and operations on i
On Sat, Feb 06, 2016 at 02:32:24PM +0000, Wenbo Wang wrote:
> Keith,
>
> Is the following solution OK?
> synchronize_rcu guarantee that no queue_rq is running concurrently with
> device disable code. Together with your other patch (adding
> blk_sync_queue), both sync/async paths shall be handle
On Mon, Feb 08, 2016 at 11:13:50AM +0100, Christoph Hellwig wrote:
> On Mon, Feb 08, 2016 at 12:01:16PM +0200, Sagi Grimberg wrote:
> >
> >> Do we have defined sysfs attributes for NVMe devices nowadays?
> >
> > /sys/block/nvme0n1/uuid
>
> That's only supported for NVMe 1.1 and higher devices, and
On Mon, Feb 08, 2016 at 04:19:13PM +0100, Hannes Reinecke wrote:
> Ok, so what about having a 'wwid' attribute which provides combined
> information (like scsi has)?
That looks like the sensible thing to do. Thanks for the pointer.
Going forward, I will solicit more feedback from scsi developers
so
On Tue, Feb 09, 2016 at 11:22:04AM +, Wenbo Wang wrote:
> In most cases, rcu read lock is just a preempt_disable, which is what get_cpu
> does. I don't see any risk.
Yes, many rcu_read_lock cases expand similarly to get_cpu. What about
the other cases?
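To make the distinction concrete, a sketch:

    /* Sketch: with CONFIG_PREEMPT_RCU, rcu_read_lock() does not
     * disable preemption, so it is not a drop-in for get_cpu(). */
    int cpu = get_cpu();  /* preemption off; cpu stays valid */
    /* ... per-cpu work ... */
    put_cpu();

    rcu_read_lock();      /* may be preempted under PREEMPT_RCU */
    /* ... pointers stay valid, but the running cpu may change ... */
    rcu_read_unlock();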
FWIW, I don't think we'll hit the probl
;s 'ok'?
Thanks,
Keith
On Thu, Feb 25, 2016 at 08:42:19AM -0600, Bjorn Helgaas wrote:
> On Tue, Feb 23, 2016 at 06:24:00PM +0000, Keith Busch wrote:
> > On Mon, Feb 22, 2016 at 04:10:24PM -0600, Bjorn Helgaas wrote:
> > > I'm not sure how to deal with the question o
On Mon, Feb 22, 2016 at 04:10:24PM -0600, Bjorn Helgaas wrote:
> I'm not sure how to deal with the question of a hot-added VMD. Maybe
> all we can do now is add a comment to the effect that we assume BIOS
> has assigned the non-prefetchable BAR below 4GB, and if Linux assigns
> that BAR for hot-ad
On Tue, Feb 23, 2016 at 02:50:13PM -0700, Jon Derrick wrote:
> This patch attaches the new VMD domain's resources to the VMD device's
> resources. This allows /proc/iomem to display a more complete picture.
>
> Before:
> c000-c1ff : 0000:5d:05.5
> c200-c3ff : 0000:5d:05.5
>
On Thu, Feb 25, 2016 at 08:42:19AM -0600, Bjorn Helgaas wrote:
> On Tue, Feb 23, 2016 at 06:24:00PM +0000, Keith Busch wrote:
> > On Mon, Feb 22, 2016 at 04:10:24PM -0600, Bjorn Helgaas wrote:
> > > + /*
> > > + * If the window is below 4GB, clear IORESOURCE_MEM_64 so w
On Wed, 8 Oct 2014, Matias Bjørling wrote:
NVMe devices are identified by the vendor specific bits:
Bit 3 in OACS (device-wide). Currently made per device, as the nvme
namespace is missing in the completion path.
NVM-Express 1.2 actually defined this bit for Namespace Management,
so I don'
On Thu, 21 May 2015, Parav Pandit wrote:
Avoid disabling interrupts and holding q_lock for the queue
which is just getting initialized.
With this change, online_queues is also incremented without
lock during queue setup stage.
If power management nvme_suspend() kicks in during queue setup time,
pe
On Thu, 21 May 2015, Parav Pandit wrote:
On Fri, May 22, 2015 at 1:04 AM, Keith Busch wrote:
The q_lock is held to protect polling from reading inconsistent data.
Ah, yes. I can see the nvme_kthread can poll the CQ while it's getting
created through the nvme_resume().
I think this opens up
On Fri, 22 May 2015, Parav Pandit wrote:
On Fri, May 22, 2015 at 8:18 PM, Keith Busch wrote:
The rcu protection on nvme queues was removed with the blk-mq conversion
as we rely on that layer for h/w access.
O.k. But the above is at a level where data I/Os are not even active. It's
between
On Fri, 22 May 2015, Parav Pandit wrote:
During normal positive path probe,
(a) device is added to dev_list in nvme_dev_start()
(b) nvme_kthread got created, which will eventually refer to
dev->queues[qid] to check for NULL.
(c) dev_start() worker thread has started probing device and creating
t
On Fri, 22 May 2015, Parav Pandit wrote:
On Fri, May 22, 2015 at 9:53 PM, Keith Busch wrote:
A memory barrier before incrementing the dev->queue_count (and assigning
the pointer in the array before that) should address this concern.
Sure. mb() will solve the publisher side problem. RCU
On Fri, 22 May 2015, Parav Pandit wrote:
I agree that nvmeq won't be NULL after mb(); that alone is not sufficient.
What I have proposed in previous email is,
Converting,
struct nvme_queue *nvmeq = dev->queues[i];
if (!nvmeq)
continue;
spin_lock_irq(nvmeq->q_lock);
to replace with,
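(The proposed replacement is cut off above; for context, the publish
side being discussed looks roughly like this sketch:)

    /* Sketch: make the queue visible before bumping the count that
     * readers use as a bound. */
    dev->queues[qid] = nvmeq; /* publish the pointer first */
    smp_wmb();                /* order the store before the count */
    dev->queue_count++;       /* a reader scanning i < queue_count
                               * now sees a non-NULL dev->queues[i] */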
On Tue, 12 May 2015, Nicholas Krause wrote:
This changes the function nvme_alloc_queue() to return the kernel
error code -ENOMEM when failing to allocate the memory required for
the nvme_queue structure pointer, nvme, so the caller receives the
correct reason for this function's faili
On Wed, 13 May 2015, Matthew Wilcox wrote:
On Wed, May 13, 2015 at 12:21:18PM -0400, Nicholas Krause wrote:
This removes the include statement for the header file, linux/mm.h,
in the file nvme-core.c, because this driver file never
calls any functions from the header file, linux/mm.h and
On Sun, 22 Mar 2015, Steven Noonan wrote:
This happens on boot, and then eventually results in an RCU stall.
[8.047533] nvme 0000:05:00.0: Device not ready; aborting initialisation
Note that the above is expected with this hardware (long story).
Although 3.19.x prints the above and then con
On Fri, 6 Mar 2015, Alexey Khoroshilov wrote:
class_create() returns ERR_PTR on failure,
so IS_ERR() should be used instead of check for NULL.
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Alexey Khoroshilov
Thanks for the fix.
Acked-by: Keith Busch
--
To
On Mon, 9 Feb 2015, Mike Snitzer wrote:
On Mon, Feb 09 2015 at 11:38am -0500,
Dongsu Park wrote:
So commit 6d6285c45f5a should either be reverted or moved to
the linux-dm tree, shouldn't it?
Cheers,
Dongsu
[1] https://www.redhat.com/archives/dm-devel/2015-January/msg00171.html
[2]
https://gi
On Wed, 21 Jan 2015, Yan Liu wrote:
When a passthrough IO command is issued with a specific block device file
descriptor, it should be applied to
the namespace which is associated with that block device file descriptor. This
patch makes such passthrough
command ignore nsid in nvme_passthru_cmd
On Wed, 21 Jan 2015, Yan Liu wrote:
For IO passthrough command, it uses an IO queue associated with the device.
Actually, this patch does not modify that part.
This patch is not really focused on io queues; instead, it is more about
namespace protection from other namespace's user ios. The patc
On Thu, 22 Jan 2015, Christoph Hellwig wrote:
On Thu, Jan 22, 2015 at 12:47:24AM +0000, Keith Busch wrote:
The IOCTL's purpose was to let someone submit completely arbitrary
commands on IO queues. This technically shouldn't even need a namespace
handle, but we don't have
On Mon, 23 Feb 2015, Arnd Bergmann wrote:
A patch that was added to 4.0-rc1 in the last minute caused a
build break in the NVMe driver unless integrity support is
also enabled:
drivers/block/nvme-core.c: In function 'nvme_dif_remap':
drivers/block/nvme-core.c:523:24: error: dereferencing pointer
On Tue, Mar 15, 2016 at 04:17:56PM +0100, Vitaly Kuznetsov wrote:
> The reason of the slowdown is the fact that bios don't get merged and we
> end up sending many short requests to the host. My investigation led me to
> the following code (__bvec_gap_to_prev()):
>
> return offset ||
>
On Thu, Mar 17, 2016 at 12:20:28PM +0100, Vitaly Kuznetsov wrote:
> Keith Busch writes:
> > been combined. In any case, I think you can get what you're after just
> > by moving the gap check after BIOVEC_PHYS_MERGABLE. Does the following
> > look ok to you?
> >
>
On Thu, Nov 05, 2015 at 02:35:31PM +0800, Jiang Liu wrote:
> Hi Keith,
> Could you please try the attached patch?
> Thanks!
> Gerry
Thanks! I anticipated this and tested the same thing yesterday, and it
is successful.
I'll apply to the series and send a new revision hopefully today. Not
req
ut it's
> very useful for testing. For now, it's a per-device opt-in feature.
> To enable it, you echo 1 to /sys/block/<dev>/queue/io_poll.
This really looks good. I verified similar performance improvements on
various NVMe controller types as well.
Acked-by: Keith Busch
PCI-e segments will continue to use the lower 16 bits as required by
ACPI. Special domains may use the full 32 bits.
Signed-off-by: Keith Busch
---
lib/filter.c | 2 +-
lib/pci.h    | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/filter.c b/lib/filter.c
index
New x86 pci h/w will require DMA operations specific to that domain. This
patch allows those domains to register their operations, and sets devices
as they are discovered in that domain to use them.
Signed-off-by: Keith Busch
---
arch/x86/include/asm/device.h | 10 ++
arch/x86/pci
Signed-off-by: Keith Busch
---
drivers/pci/msi.c | 2 ++
kernel/irq/irqdomain.c | 1 +
2 files changed, 3 insertions(+)
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 4a7da3c..5fb932b 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1119,6 +1119,7 @@ struct pci_dev
to share.
Lots of style fixes/updates and additional code comments.
The one review comment I have not fixed is the affinity hint. We are
still developing a way to better handle this, so have left it as an
error-returning stub. It's less than optimal, but isn't more harmful than that.
K
And use the max bus resource from the parent rather than assume 255.
Signed-off-by: Keith Busch
---
drivers/pci/probe.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 8361d27..1cb3be7 100644
--- a/drivers/pci
From: Liu Jiang
Previously msi_domain_alloc() assumed MSI irqdomains always have parent
irqdomains, but that's not true for the new Intel VMD devices. So relax
msi_domain_alloc() to support parentless MSI irqdomains.
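The relaxation amounts to something like this sketch (not the actual
hunk):

    static int sketch_msi_domain_alloc(struct irq_domain *domain,
                                       unsigned int virq,
                                       unsigned int nr_irqs, void *arg)
    {
            int ret;

            /* Sketch: only delegate to the parent irqdomain when one
             * exists, so parentless MSI domains can allocate directly. */
            if (domain->parent) {
                    ret = irq_domain_alloc_irqs_parent(domain, virq,
                                                       nr_irqs, arg);
                    if (ret < 0)
                            return ret;
            }
            return 0;
    }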
Signed-off-by: Jiang Liu
Signed-off-by: Liu Jiang
---
kernel/irq/msi.c | 8 +
orts. Devices or drivers
requiring these features should either not be placed below VMD-owned
root ports, or VMD should be disabled by BIOS for such endpoints.
Signed-off-by: Keith Busch
---
arch/x86/Kconfig | 13 +
arch/x86/include/asm/hw_irq.h |
On Fri, Nov 06, 2015 at 03:46:07PM -0800, Elliott, Robert (Persistent Memory)
wrote:
> > -Original Message-
> > From: linux-kernel-ow...@vger.kernel.org [mailto:linux-kernel-
> > ow...@vger.kernel.org] On Behalf Of Jens Axboe
> > Sent: Friday, November 6, 2015 11:20 AM
> ...
> > Subject: [
On Fri, Oct 30, 2015 at 02:35:11PM -0700, Nishanth Aravamudan wrote:
> Given that it's 4K just about everywhere by default (and sort of
> implicitly expected to be, I guess), I think I'd prefer we default to
> 4K. That should mitigate the performance impact (I'll ask our IO team to
> do some runs,
On Fri, 2 Oct 2015, Johannes Thumshirn wrote:
Lee Duncan writes:
Simplify ida index allocation and removal by
using the ida_simple_* helper functions.
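For reference, the helpers replace open-coded ida usage along these
lines (a sketch with hypothetical wrapper names, not the patch):

    static DEFINE_IDA(nvme_instance_ida);

    static int nvme_get_instance(void)
    {
            /* lowest free id >= 0; 0 for 'end' means no upper bound */
            return ida_simple_get(&nvme_instance_ida, 0, 0, GFP_KERNEL);
    }

    static void nvme_put_instance(int instance)
    {
            ida_simple_remove(&nvme_instance_ida, instance);
    }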
Looks good to me. Just one comment:
static void nvme_release_instance(struct nvme_dev *dev)
{
spin_lock(&dev_list_lock);
- i
On Mon, 28 Sep 2015, Ming Lei wrote:
This patchset introduces .map_changed callback into 'struct blk_mq_ops',
and use this callback to get NVMe notified about the mapping changed event,
then NVMe can update the irq affinity hint for its queues.
I think this is going the wrong direction. Shouldn
On Tue, 29 Sep 2015, Ming Lei wrote:
Yes, I thought of that before, but it has the following cons:
- some drivers/devices may need a different IRQ affinity policy, such as virtio
devices, which have their own set-affinity handler (see virtqueue_set_affinity()),
That's not a very good example to suppo
On Mon, Nov 02, 2015 at 07:35:15PM +0100, Thomas Gleixner wrote:
> On Tue, 27 Oct 2015, Keith Busch wrote:
Thomas,
Thanks a bunch for the feedback! I'll reply what I can right now, and
will take more time to consider or fix the rest for the next revision.
> I'm just looking
On Tue, Nov 03, 2015 at 05:18:24AM -0800, Christoph Hellwig wrote:
> On Fri, Oct 30, 2015 at 02:35:11PM -0700, Nishanth Aravamudan wrote:
> > diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
> > index ccc0c1f93daa..a9a5285bdb39 100644
> > --- a/drivers/block/nvme-core.c
> > +++ b/
On Tue, Nov 03, 2015 at 12:42:02PM +0100, Thomas Gleixner wrote:
> On Tue, 3 Nov 2015, Keith Busch wrote:
> > > > + msi_irqdomain = pci_msi_create_irq_domain(NULL,
> > > > &pci_chained_msi_domain_info,
> > > > +
On Sun, Feb 10, 2019 at 09:19:58AM -0800, Jonathan Cameron wrote:
> On Sat, 9 Feb 2019 09:20:53 +0100
> Brice Goglin wrote:
>
> > Hello Keith
> >
> > Could we ever have a single side cache in front of two NUMA nodes ? I
> > don't see a way to find that out in the current implementation. Would we
Hi Greg,
Just wanted to check with you on how we may proceed with this series.
The main feature is exporting new sysfs attributes through driver core,
so I think it makes most sense to go through you unless you'd prefer
this go through a different route.
The proposed interface has been pretty sta
On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote:
> -> Share the NVMe device between host and guest.
> Even in fully virtualized configurations,
> some partitions of nvme device could be used by guests as block devices
> while others passed through with nvme-mdev to
On Mon, Mar 18, 2019 at 08:12:04PM -0500, Alexandru Gagniuc wrote:
> I was able to test this on edge-triggered interrupts. None of my
> machines have PCIe ports that use level-triggered interrupts. This
> might not be too straightforward to test without a hardware yanker,
> but if there's a way to
On Fri, Sep 06, 2019 at 09:48:21AM +0800, Ming Lei wrote:
> When one IRQ flood happens on one CPU:
>
> 1) softirq handling on this CPU can't make progress
>
> 2) kernel thread bound to this CPU can't make progress
>
> For example, network may require softirq to xmit packets, or another irq
> thr
On Fri, Sep 06, 2019 at 11:30:57AM -0700, Sagi Grimberg wrote:
>
> >
> > Ok, so the real problem is per-cpu bounded tasks.
> >
> > I share Thomas' opinion about a NAPI-like approach.
>
> We already have that, its irq_poll, but it seems that for this
> use-case, we get lower performance for some
On Sat, Sep 07, 2019 at 06:19:21AM +0800, Ming Lei wrote:
> On Fri, Sep 06, 2019 at 05:50:49PM +0000, Long Li wrote:
> > >Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
> > >
> > >Why are all 8 nvmes sharing the same CPU for interrupt handling?
> > >Shouldn't matrix_find_
On Fri, Aug 30, 2019 at 06:01:39PM -0600, Logan Gunthorpe wrote:
> To fix this, assign the subsystem's instance based on the instance
> number of the controller's instance that first created it. There should
> always be fewer subsystems than controllers so the should not be a need
> to create extra
On Wed, Aug 21, 2019 at 7:34 PM Ming Lei wrote:
> On Wed, Aug 21, 2019 at 04:27:00PM +0000, Long Li wrote:
> > Here is the command to benchmark it:
> >
> > fio --bs=4k --ioengine=libaio --iodepth=128
> > --filename=/dev/nvme0n1:/dev/nvme1n1:/dev/nvme2n1:/dev/nvme3n1:/dev/nvme4n1:/dev/nvme5n1:/dev
ign at least 1 vector for remaining nodes if 'numvecs' vectors
> have been handled already.
>
> Also, if the specified cpumask for one numa node is empty, simply don't
> spread vectors on this node.
>
> Cc: Christoph Hellwig
> Cc: Keith Busch
> Cc: linux-n...@lists.infr
> irq 36, cpu list 8-9
> irq 37, cpu list 11,13
> irq 38, cpu list 14-15
>
> Without this patch, a kernel warning is triggered in the above situation, and
> the allocation result was supposed to be 4 vectors for each node.
>
> Cc: Christoph Hellwig
> Cc: Ke
On Mon, Aug 19, 2019 at 04:33:45PM -0700, Ashton Holmes wrote:
> When playing certain games on my PC dmesg will start spitting out NVME
> timeout messages, this eventually results in BTRFS throwing errors and
> remounting itself as read only. The drive passes SMART's health check and
> works fine w
On Tue, Sep 03, 2019 at 10:08:01AM -0600, Logan Gunthorpe wrote:
> On 2019-08-31 9:29 a.m., Keith Busch wrote:
> > On Fri, Aug 30, 2019 at 06:01:39PM -0600, Logan Gunthorpe wrote:
> >> To fix this, assign the subsystem's instance based on the instance
> >> number o
On Wed, Aug 14, 2019 at 09:05:49AM -0700, Mario Limonciello wrote:
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 8f3fbe5..47c7754 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2251,6 +2251,29 @@ static const struct nvme_core_quirk_entry
host managed power state for suspend")
> Link: http://lists.infradead.org/pipermail/linux-nvme/2019-July/thread.html
> Signed-off-by: Mario Limonciello
> Signed-off-by: Charles Hyde
Looks fine to me.
Reviewed-by: Keith Busch
On Wed, Aug 14, 2019 at 01:14:50PM -0700, Sagi Grimberg wrote:
> Mario,
>
> Can you please respin a patch that applies cleanly on nvme-5.4?
This fixes a regression we introduced in 5.3, so it should go in
5.3-rc. For this to apply cleanly, though, we'll need to resync to Linus'
tree to get Rafael
> irq 36, cpu list 8-9
> irq 37, cpu list 11,13
> irq 38, cpu list 14-15
>
> Without this patch, a kernel warning is triggered in the above situation, and
> the allocation result was supposed to be 4 vectors for each node.
>
> Cc: Christoph Hellwig
> Cc: Keith Bus
On Fri, Aug 16, 2019 at 12:43:02PM -0700, mario.limoncie...@dell.com wrote:
> > We need to coordinate with Jens, don't think its a good idea if I'll
> > just randomly get stuff from linus' tree and send an rc pull request.
>
> The dependent commit is in Linus' tree now.
> 4eaefe8c621c6195c91044396
On Tue, Aug 27, 2019 at 05:09:27PM +0800, Ming Lei wrote:
> On Tue, Aug 27, 2019 at 11:06:20AM +0200, Johannes Thumshirn wrote:
> > On 27/08/2019 10:53, Ming Lei wrote:
> > [...]
> > > + char *devname;
> > > + const struct cpumask *mask;
> > > + unsigned long irqflags = IRQF
nterrupt.
>
> Cc: Long Li
> Cc: Ingo Molnar ,
> Cc: Peter Zijlstra
> Cc: Keith Busch
> Cc: Jens Axboe
> Cc: Christoph Hellwig
> Cc: Sagi Grimberg
> Cc: John Garry
> Cc: Thomas Gleixner
> Cc: Hannes Reinecke
> Cc: linux-n...@lists.infradead.org
>
On Tue, Aug 27, 2019 at 08:34:21AM -0600, Keith Busch wrote:
> I think you should probably just have pci_irq_get_affinity() take a flags
> argument, or make a new function like __pci_irq_get_affinity() that
> pci_irq_get_affinity() can call with a default flag.
Sorry, copied the wrong
RROR_RESPONSE definition
> PCI / PM: Decode D3cold power state correctly
> PCI / PM: Return error when changing power state from D3cold
Series looks good to me.
Reviewed-by: Keith Busch
On Wed, Sep 04, 2019 at 08:05:58AM +0200, Christoph Hellwig wrote:
> On Tue, Sep 03, 2019 at 10:46:20AM -0600, Keith Busch wrote:
> > Could we possibly make /dev/nvmeX be a subsystem handle without causing
> > trouble for anyone? This would essentially be the same thing as today
>
On Wed, Sep 04, 2019 at 05:42:15PM +0200, Christoph Hellwig wrote:
> On Wed, Sep 04, 2019 at 08:44:27AM -0600, Keith Busch wrote:
> > Let me step through an example:
> >
> > Ctrl A gets instance 0.
> >
> > Its subsystem gets the same instance, and takes ref
On Wed, Sep 04, 2019 at 10:07:12AM -0600, Logan Gunthorpe wrote:
> Yes, I agree, we can't solve the mismatch problem in the general case:
> with sequences of hot plug events there will always be a case that
> mismatches. I just think we can do better in the simple common default case.
This may be
On Wed, Sep 04, 2019 at 11:01:22AM -0600, Logan Gunthorpe wrote:
> Oh, yes that's simpler than the struct/kref method and looks like it
> will accomplish the same thing. I did some brief testing with it and it
> seems to work for me (though I don't have any subsystems with multiple
> controllers).
the first controller discovered in
that subsystem.
Reviewed-by: Logan Gunthorpe
Signed-off-by: Keith Busch
---
drivers/nvme/host/core.c | 21 ++---
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 14c0bfb55615..8a
covered in that
subsystem.
Reviewed-by: Logan Gunthorpe
Signed-off-by: Keith Busch
---
v1 -> v2:
Changelog: reduce sensationalism, fix spelling
drivers/nvme/host/core.c | 21 ++---
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/drivers/nvme/host/core.c b/driv
On Mon, Sep 16, 2019 at 12:13:24PM +0000, Baldyga, Robert wrote:
> Ok, fair enough. We want to keep things hidden behind certain layers,
> and that's definitely a good thing. But there is a problem with these
> layers - they do not expose all the features. For example AFAIK there
> is no clear way
On Wed, Sep 11, 2019 at 06:42:33PM -0500, Mario Limonciello wrote:
> The action of saving the PCI state will cause numerous PCI configuration
> space reads which depending upon the vendor implementation may cause
> the drive to exit the deepest NVMe state.
>
> In these cases ASPM will typically re
On Mon, Oct 07, 2019 at 01:50:11PM -0400, Tyler Ramer wrote:
> Shutdown the controller when nvme_remove_dead_controller is
> reached.
>
> If nvme_remove_dead_controller() is called, the controller won't
> be coming back online, so we should shut it down rather than just
> disabling.
>
> Remove n
On Fri, Oct 04, 2019 at 11:36:42AM -0400, Tyler Ramer wrote:
> Here's a failure we had which represents the issue the patch is
> intended to solve:
>
> Aug 26 15:00:56 testhost kernel: nvme nvme4: async event result 00010300
> Aug 26 15:01:27 testhost kernel: nvme nvme4: controller is down; will
>
On Mon, Oct 07, 2019 at 11:13:12AM -0400, Tyler Ramer wrote:
> > Setting the shutdown to true is
> > usually just to get the queues flushed, but the nvme_kill_queues() that
> > we call accomplishes the same thing.
>
> The intention of this patch was to clean up another location where
> nvme_dev_di
On Wed, Sep 18, 2019 at 03:26:11PM +0200, Christoph Hellwig wrote:
> Even if we had a use case for that the bounce buffer is just too ugly
> to live. And I'm really sick and tired of Intel wasting our time for
> their out of tree monster given that they haven't even tried helping
> to improve the
On Wed, Sep 11, 2019 at 06:42:33PM -0500, Mario Limonciello wrote:
> ---
> drivers/nvme/host/pci.c | 13 +++--
> 1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 732d5b6..9b3fed4 100644
> --- a/drivers/nvme/host/pci
On Thu, Sep 19, 2019 at 01:47:50PM +, Bharat Kumar Gogada wrote:
> Hi All,
>
> We are testing NVMe cards on ARM64 platform, the card uses MSI-X interrupts.
> We are hitting following case in drivers/nvme/host/pci.c
> /*
> * Did we miss an interrupt?
> */
> if (__nvme_
On Tue, Sep 24, 2019 at 11:05:36AM -0700, Sagi Grimberg wrote:
> Looks fine to me,
>
> Reviewed-by: Sagi Grimberg
>
> Keith, Christoph?
Looks good to me, too.
Reviewed-by: Keith Busch