tures has been
> called. This has been proven to resolve the issue across a 5000 sample
> test on previously failing disk/system combinations.
>
> Signed-off-by: Mario Limonciello
This looks good. It clashes with something I posted yesterday, but
I'll rebase after this one.
Reviewed-by: Keith Busch
> Signed-off-by: Dan Carpenter
Thanks, patch looks good.
Reviewed-by: Keith Busch
On Mon, Aug 19, 2019 at 12:06:23AM -0700, Marta Rybczynska wrote:
> - On 16 Aug, 2019, at 15:16, Christoph Hellwig h...@lst.de wrote:
> > Sorry for not replying to the earlier version, and thanks for doing
> > this work.
> >
> > I wonder if instead of using our own structure we'd just use
> >
On Mon, Aug 19, 2019 at 11:56:28AM -0700, Sagi Grimberg wrote:
>
> >> - On 16 Aug, 2019, at 15:16, Christoph Hellwig h...@lst.de wrote:
> >>> Sorry for not replying to the earlier version, and thanks for doing
> >>> this work.
> >>>
> >>> I wonder if instead of using our own structure we'd jus
On Mon, Aug 19, 2019 at 02:17:44PM -0700, Sagi Grimberg wrote:
>
> - On 16 Aug, 2019, at 15:16, Christoph Hellwig h...@lst.de wrote:
> > Sorry for not replying to the earlier version, and thanks for doing
> > this work.
> >
> > I wonder if instead of using our own structur
On Tue, Aug 20, 2019 at 01:59:32AM -0700, John Garry wrote:
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index e8f7f179bf77..cb483a055512 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -966,9 +966,13 @@ irq_thread_check_affinity(struct irq_desc *desc,
> struct ir
On Wed, Apr 29, 2020 at 05:20:09AM +, Williams, Dan J wrote:
> On Tue, 2020-04-28 at 08:27 -0700, David E. Box wrote:
> > On Tue, 2020-04-28 at 16:22 +0200, Christoph Hellwig wrote:
> > > On Tue, Apr 28, 2020 at 07:09:59AM -0700, David E. Box wrote:
> > > > > I'm not sure who came up with the i
>
> You can find EDR spec in the following link.
>
> https://members.pcisig.com/wg/PCI-SIG/document/12614
Thank you for sticking with this. I've reviewed the series and I think
this looks good for the next merge window.
Acked-by: Keith Busch
HMAT requires valid address ranges have an equivalent SRAT entry,
verify each memory target satisfies this requirement.
Signed-off-by: Keith Busch
---
drivers/acpi/hmat/Kconfig | 1 +
drivers/acpi/hmat/hmat.c | 396 +-
2 files changed, 396 insertions(
: Keith Busch
---
Documentation/ABI/stable/sysfs-devices-node | 35 +++
drivers/base/node.c | 151
include/linux/node.h| 34 +++
3 files changed, 220 insertions(+)
diff --git a/Documentation/ABI/stable/sysfs-devices
query this information.
Reviewed-by: Mike Rapoport
Signed-off-by: Keith Busch
---
Documentation/admin-guide/mm/numaperf.rst | 164 ++
1 file changed, 164 insertions(+)
create mode 100644 Documentation/admin-guide/mm/numaperf.rst
diff --git a/Documentation/admin-guide
ess0/
relative: /sys/devices/system/node/nodeY/access0/initiators/nodeX ->
../../nodeX
The new attributes are added to the sysfs stable documentation.
Reviewed-by: Rafael J. Wysocki
Signed-off-by: Keith Busch
---
Documentation/ABI/stable/sysfs-devices-node | 25 -
nd cold
data. This works with mbind() or the numactl library.
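A minimal user-space sketch of the mbind() placement mentioned above. It assumes
libnuma's <numaif.h> for the mbind() declaration (link with -lnuma) and that node 1
is the slower "cold" memory node on the example system; both are illustrative
assumptions, not part of this series:

	#include <numaif.h>	/* mbind(), MPOL_BIND */
	#include <sys/mman.h>
	#include <stdio.h>

	int main(void)
	{
		size_t len = 16 * 4096;
		unsigned long nodemask = 1UL << 1;	/* bit 1 == node 1 (assumed cold memory) */
		void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return 1;
		/* bind the range to node 1 before first touch so pages fault there */
		if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0))
			perror("mbind");
		return 0;
	}

Running the same workload under "numactl --membind=1" gives equivalent placement
without code changes.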
Keith Busch (10):
acpi: Create subtable parsing infrastructure
acpi: Add HMAT to generic parsing tables
acpi/hmat: Parse and report heterogeneous memory
node: Link memory nodes to their compute nodes
node: Add heterogenous mem
parsing
the entries array may be reused for all ACPI system tables and
the common code doesn't need to be duplicated.
Reviewed-by: Rafael J. Wysocki
Cc: Dan Williams
Signed-off-by: Keith Busch
---
arch/arm64/kernel/acpi_numa.c | 2 +-
arch/arm64/kernel/
Save the best performance access attributes and register these with the
memory's node if HMAT provides the locality table. While HMAT does make
it possible to know performance for all possible initiator-target
pairings, we export only the local pairings at this time.
Signed-off-by: Keith
The Heterogeneous Memory Attribute Table (HMAT) header has different
field lengths than the existing parsing uses. Add the HMAT type to the
parsing rules so it may be generically parsed.
Cc: Dan Williams
Reviewed-by: Rafael J. Wysocki
Signed-off-by: Keith Busch
---
drivers/acpi/tables.c | 9
Register memory side cache attributes with the memory's node if HMAT
provides the side cache information table.
Signed-off-by: Keith Busch
---
drivers/acpi/hmat/hmat.c | 32
1 file changed, 32 insertions(+)
diff --git a/drivers/acpi/hmat/hmat.c b/drivers
mbers, or
omitted from any access class' initiators.
Descriptions for memory access initiator performance access attributes
are added to sysfs stable documentation.
Signed-off-by: Keith Busch
---
Documentation/ABI/stable/sysfs-devices-node | 31 ++-
drivers/base/Kconfig
: Keith Busch
---
drivers/acpi/Kconfig | 1 +
drivers/acpi/Makefile | 1 +
drivers/acpi/hmat/Kconfig | 8 ++
drivers/acpi/hmat/Makefile | 1 +
drivers/acpi/hmat/hmat.c | 236 +
5 files changed, 247 insertions(+)
create mode 100644
On Thu, Feb 14, 2019 at 12:44:48PM -0800, Elliott, Robert (Persistent Memory)
wrote:
>
> The PCIe and NVMe specifications don't standardize a way to tell the device
> when to use RO, which leads to system workarounds like this.
>
> The Enable Relaxed Ordering bit defined by PCIe tells the devic
On Mon, Feb 18, 2019 at 04:42:27PM -0800, 陈华才 wrote:
> I've tested, this patch can fix the nvme problem, but it can't be applied
> to 4.19 because of different context. And, I still think my original solution
> (genirq/affinity: Assign default affinity to pre/post vectors) is correct.
> There may b
On Mon, Feb 18, 2019 at 03:25:31PM +0100, Brice Goglin wrote:
> On 14/02/2019 at 18:10, Keith Busch wrote:
> > Determining the cpu and memory node local relationships is quite
> > different this time (PATCH 7/10). The local relationship to a memory
> > target will be e
On Thu, Feb 14, 2019 at 10:10:07AM -0700, Keith Busch wrote:
> Platforms may provide multiple types of cpu attached system memory. The
> memory ranges for each type may have different characteristics that
> applications may wish to know about when considering what node they want
>
On Wed, Feb 20, 2019 at 11:21:45PM +0100, Rafael J. Wysocki wrote:
> On Wed, Feb 20, 2019 at 11:11 PM Dave Hansen wrote:
> > On 2/20/19 2:02 PM, Rafael J. Wysocki wrote:
> > >> diff --git a/drivers/acpi/hmat/Kconfig b/drivers/acpi/hmat/Kconfig
> > >> index c9637e2e7514..08e972ead159 100644
> > >>
On Thu, Feb 07, 2019 at 01:53:36AM -0800, Jonathan Cameron wrote:
> As a general heads up, ACPI 6.3 is out and makes some changes.
> Discussions I've had in the past suggested there were few systems
> shipping with 6.2 HMAT and that many firmwares would start at 6.3.
> Of course, that might not be
On Sun, Jan 20, 2019 at 05:16:05PM +0100, Rafael J. Wysocki wrote:
> On Sat, Jan 19, 2019 at 10:01 AM Greg Kroah-Hartman
> wrote:
> >
> > If you do a subdirectory "correctly" (i.e. a name for an attribute
> > group), that's fine.
>
> Yes, that's what I was thinking about: along the lines of the "
On Tue, Jan 29, 2019 at 03:25:48AM -0800, John Garry wrote:
> Hi,
>
> I have a question on $subject which I hope you can shed some light on.
>
> According to commit c5cb83bb337c25 ("genirq/cpuhotplug: Handle managed
> IRQs on CPU hotplug"), if we offline the last CPU in a managed IRQ
> affinity
On Tue, Jan 29, 2019 at 05:12:40PM +, John Garry wrote:
> On 29/01/2019 15:44, Keith Busch wrote:
> >
> > Hm, we used to freeze the queues with CPUHP_BLK_MQ_PREPARE callback,
> > which would reap all outstanding commands before the CPU and IRQ are
> > taken off
HMAT requires valid address ranges have an equivalent SRAT entry,
verify each memory target satisfies this requirement.
Signed-off-by: Keith Busch
---
drivers/acpi/hmat/hmat.c | 143 ---
1 file changed, 136 insertions(+), 7 deletions(-)
diff --git a/dr
Systems may provide different memory types and export this information
in the ACPI Heterogeneous Memory Attribute Table (HMAT). Parse these
tables provided by the platform and report the memory access and caching
attributes.
Signed-off-by: Keith Busch
---
drivers/acpi/Kconfig | 1
Save the best performance access attributes and register these with the
memory's node if HMAT provides the locality table. While HMAT does make
it possible to know performance for all possible initiator-target
pairings, we export only the best pairings at this time.
Signed-off-by: Keith
Add the attributes for the system memory side caches.
Signed-off-by: Keith Busch
---
Documentation/ABI/stable/sysfs-devices-node | 34 +
1 file changed, 34 insertions(+)
diff --git a/Documentation/ABI/stable/sysfs-devices-node
b/Documentation/ABI/stable/sysfs
the cache size, the line size, associativity,
and write back policy.
Signed-off-by: Keith Busch
---
drivers/base/node.c | 142 +++
include/linux/node.h | 39 ++
2 files changed, 181 insertions(+)
diff --git a/drivers/base/node.c b
parsing
the entries array may be reused for all ACPI system tables and
the common code doesn't need to be duplicated.
Reviewed-by: Rafael J. Wysocki
Cc: Dan Williams
Signed-off-by: Keith Busch
---
arch/arm64/kernel/acpi_numa.c | 2 +-
arch/arm64/kernel/
query this information.
Reviewed-by: Mike Rapoport
Signed-off-by: Keith Busch
---
Documentation/admin-guide/mm/numaperf.rst | 184 ++
1 file changed, 184 insertions(+)
create mode 100644 Documentation/admin-guide/mm/numaperf.rst
diff --git a/Documentation/admin-guide
The Heterogeneous Memory Attribute Table (HMAT) header has different
field lengths than the existing parsing uses. Add the HMAT type to the
parsing rules so it may be generically parsed.
Cc: Dan Williams
Reviewed-by: Rafael J. Wysocki
Signed-off-by: Keith Busch
---
drivers/acpi/tables.c | 9
Register memory side cache attributes with the memory's node if HMAT
provides the side cache information table.
Signed-off-by: Keith Busch
---
drivers/acpi/hmat/hmat.c | 32
1 file changed, 32 insertions(+)
diff --git a/drivers/acpi/hmat/hmat.c b/drivers
Add descriptions for memory class initiator performance access attributes.
Signed-off-by: Keith Busch
---
Documentation/ABI/stable/sysfs-devices-node | 28
1 file changed, 28 insertions(+)
diff --git a/Documentation/ABI/stable/sysfs-devices-node
b/Documentation
series' objective is to provide the attributes from such systems
that are useful for applications to know about, and readily usable with
existing tools and libraries.
Keith Busch (13):
acpi: Create subtable parsing infrastructure
acpi: Add HMAT to generic parsing tables
acpi/hmat: Pars
nodelist:
# cat /sys/devices/system/node/nodeX/class0/target_nodelist
Y
# cat /sys/devices/system/node/nodeY/class0/initiator_nodelist
X
Signed-off-by: Keith Busch
---
drivers/base/node.c | 127 ++-
include/linux/node.h | 6 ++-
2 files
eported here. When a subsystem makes use of this interface, initiators
of a lower class number, "Z", have better performance relative to higher
class numbers. When provided, class 0 is the highest performing access
class.
Signed-off-by: Keith Busch
---
drivers/base/Kconfig
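A minimal sketch of how an application might consume the access class ranking
described in the patch above: read the class 0 (best-performing) target list for a
CPU node. The node number and the class0/target_nodelist path follow the examples
shown elsewhere in this series and are illustrative assumptions:

	#include <stdio.h>

	int main(void)
	{
		char buf[64];
		/* class 0 is the highest performing access class per the description above */
		FILE *f = fopen("/sys/devices/system/node/node0/class0/target_nodelist", "r");

		if (!f) {
			perror("target_nodelist");
			return 1;
		}
		if (fgets(buf, sizeof(buf), f))
			printf("best memory target(s) for node0: %s", buf);
		fclose(f);
		return 0;
	}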
Add entries for memory initiator and target node class attributes.
Signed-off-by: Keith Busch
---
Documentation/ABI/stable/sysfs-devices-node | 25 -
1 file changed, 24 insertions(+), 1 deletion(-)
diff --git a/Documentation/ABI/stable/sysfs-devices-node
b
On Thu, Jan 17, 2019 at 11:58:21PM +1100, Balbir Singh wrote:
> On Wed, Jan 16, 2019 at 10:57:51AM -0700, Keith Busch wrote:
> > It had previously been difficult to describe these setups as memory
> > ranges were generally lumped into the NUMA node of the CPUs. New
> > pla
On Thu, Jan 17, 2019 at 11:29:10AM -0500, Jeff Moyer wrote:
> Dave Hansen writes:
> > Persistent memory is cool. But, currently, you have to rewrite
> > your applications to use it. Wouldn't it be cool if you could
> > just have it show up in your system like normal RAM and get to
> > it like a
On Thu, Jan 17, 2019 at 12:20:06PM -0500, Jeff Moyer wrote:
> Keith Busch writes:
> > On Thu, Jan 17, 2019 at 11:29:10AM -0500, Jeff Moyer wrote:
> >> Dave Hansen writes:
> >> > Persistent memory is cool. But, currently, you have to rewrite
> >> > yo
On Thu, Jan 17, 2019 at 10:18:35AM -0800, Jonathan Cameron wrote:
> I've been having a play with various hand constructed HMAT tables to allow
> me to try breaking them in all sorts of ways.
>
> Mostly working as expected.
>
> Two places I am so far unsure on...
>
> 1. Concept of 'best' is not i
On Mon, Jan 14, 2019 at 12:10:12AM +0100, Pavel Machek wrote:
> On Wed 2019-01-09 10:43:36, Keith Busch wrote:
> > + This node's write latency in nanosecondss available to memory
> > + initiators in nodes found in this class's
> > initiators_nodel
On Sun, Jan 13, 2019 at 01:42:30PM +0200, Mike Rapoport wrote:
> There are a couple of nitpicks below, otherwise
>
> Reviewed-by: Mike Rapoport
Thank you for the detailed review. I've incorporated all your
recommendations for the next revision.
[+Ming]
On Mon, Jan 14, 2019 at 08:31:45AM -0600, Bjorn Helgaas wrote:
> [+cc Dou, Jens, Thomas, Christoph, linux-pci, LKML]
>
> On Sun, Jan 13, 2019 at 11:24 PM fin4478 fin4478 wrote:
> >
> > Hi,
> >
> > A regression from the 4.20 kernel: I have the Asgard 256GB nvme drive
> > and my custom non
[+linux-n...@lists.infradead.org]
On Mon, Jan 14, 2019 at 10:03:39AM -0700, Keith Busch wrote:
> [+Ming]
> On Mon, Jan 14, 2019 at 08:31:45AM -0600, Bjorn Helgaas wrote:
> > [+cc Dou, Jens, Thomas, Christoph, linux-pci, LKML]
> >
> > On Sun, Jan 13, 2019 at 11:24 PM fin
On Thu, Jan 10, 2019 at 07:42:46AM -0800, Rafael J. Wysocki wrote:
> On Wed, Jan 9, 2019 at 6:47 PM Keith Busch wrote:
> >
> > Systems may provide different memory types and export this information
> > in the ACPI Heterogeneous Memory Attribute Table (HMAT). Parse these
>
On Thu, Jan 17, 2019 at 12:41:19PM +0100, Rafael J. Wysocki wrote:
> On Wed, Jan 16, 2019 at 6:59 PM Keith Busch wrote:
> >
> > Add entries for memory initiator and target node class attributes.
> >
> > Signed-off-by: Keith Busch
>
> I would recommend combining
On Fri, Feb 01, 2019 at 09:46:15PM +0900, Takao Indoh wrote:
> From: Takao Indoh
>
> Fujitsu A64FX processor has a feature to accelerate data transfer of
> internal bus by relaxed ordering. It is enabled when the bit 56 of dma
> address is set to 1.
Wait, what? RO is a standard PCIe TLP attribut
ice-DAX
> sub-systems.
>
> The linux-nvdimm mailing list hosts a patchwork instance for both DAX and
> NVDIMM patches.
>
> Cc: Jan Kara
> Cc: Ira Weiny
> Cc: Ross Zwisler
> Cc: Keith Busch
> Cc: Matthew Wilcox
> Signed-off-by: Dan Williams
Acked-by: Keith Busch
parsing
the entries array may be reused for all ACPI system tables and
the common code doesn't need to be duplicated.
Reviewed-by: Rafael J. Wysocki
Cc: Dan Williams
Signed-off-by: Keith Busch
---
arch/arm64/kernel/acpi_numa.c | 2 +-
arch/arm64/kernel/
's sysfs directory.
Since HMAT requires valid address ranges have an equivalent SRAT entry,
verify each memory target satisfies this requirement.
Signed-off-by: Keith Busch
---
drivers/acpi/hmat/hmat.c | 310 +++
1 file changed, 310 insertions(+)
Register memory side cache attributes with the memory's node if HMAT
provides the side cache information table.
Signed-off-by: Keith Busch
---
drivers/acpi/hmat/hmat.c | 32
1 file changed, 32 insertions(+)
diff --git a/drivers/acpi/hmat/hmat.c b/drivers
query this information.
Reviewed-by: Mike Rapoport
Signed-off-by: Keith Busch
---
Documentation/admin-guide/mm/numaperf.rst | 167 ++
1 file changed, 167 insertions(+)
create mode 100644 Documentation/admin-guide/mm/numaperf.rst
diff --git a/Documentation/admin-guide
relative: /sys/devices/system/node/nodeY/access0/initiators/nodeX ->
../../nodeX
The new attributes are added to the sysfs stable documentation.
Signed-off-by: Keith Busch
---
Documentation/ABI/stable/sysfs-devices-node | 25 -
drivers/base/node.c | 142
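A minimal sketch that discovers a memory node's local initiators by scanning the
access0/initiators directory described above. node1 is an arbitrary example and the
path simply mirrors the symlink layout in this patch:

	#include <dirent.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		const char *path = "/sys/devices/system/node/node1/access0/initiators";
		struct dirent *d;
		DIR *dir = opendir(path);

		if (!dir) {
			perror(path);
			return 1;
		}
		/* each nodeX entry is a symlink back to an initiator node, per the layout above */
		while ((d = readdir(dir)) != NULL)
			if (!strncmp(d->d_name, "node", 4))
				printf("local initiator: %s\n", d->d_name);
		closedir(dir);
		return 0;
	}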
The Heterogeneous Memory Attribute Table (HMAT) header has different
field lengths than the existing parsing uses. Add the HMAT type to the
parsing rules so it may be generically parsed.
Cc: Dan Williams
Reviewed-by: Rafael J. Wysocki
Signed-off-by: Keith Busch
---
drivers/acpi/tables.c | 9
cache size, the line size, associativity,
and write back policy.
Add the attributes for the system memory side caches to sysfs stable
documentation.
Signed-off-by: Keith Busch
---
Documentation/ABI/stable/sysfs-devices-node | 34 +++
drivers/base/node.c | 153
mbers, or
omitted from any access class' initiators.
Descriptions for memory access initiator performance access attributes
are added to sysfs stable documentation.
Signed-off-by: Keith Busch
---
Documentation/ABI/stable/sysfs-devices-node | 28 ++
drivers/base/Kconfig
Register the locally attached performance access attributes with the memory's
node if HMAT provides the locality table. While HMAT does make it possible
to know performance for all possible initiator-target pairings, we export
only the local and matching pairings at this time.
Signed-off-by:
Systems may provide different memory types and export this information
in the ACPI Heterogeneous Memory Attribute Table (HMAT). Parse these
tables provided by the platform and report the memory access and caching
attributes in the kernel log.
Signed-off-by: Keith Busch
---
drivers/acpi
been created and in use today that describe
the more complex memory hierarchies that can be created.
This series' objective is to provide the attributes from such systems
that are useful for applications to know about, and readily usable with
existing tools and libraries.
Keith Busch (10):
acp
On Thu, May 09, 2019 at 02:59:55AM +0800, Kai-Heng Feng wrote:
> +static int nvme_do_resume_from_idle(struct pci_dev *pdev)
> +{
> + struct nvme_dev *ndev = pci_get_drvdata(pdev);
> + int result;
> +
> + pdev->dev_flags &= ~PCI_DEV_FLAGS_NO_D3;
> + ndev->ctrl.suspend_to_idle = false
migrate_page_move_mapping() doesn't use the mode argument. Remove it
and update callers accordingly.
Signed-off-by: Keith Busch
---
fs/aio.c| 2 +-
fs/f2fs/data.c | 2 +-
fs/iomap.c | 2 +-
fs/ubifs/file.c | 2 +-
include/linux/migrate.h | 3 +-
On Thu, May 09, 2019 at 02:12:55AM -0700, Stefan Hajnoczi wrote:
> On Mon, May 06, 2019 at 12:04:06PM +0300, Maxim Levitsky wrote:
> > On top of that, it is expected that newer hardware will support the PASID
> > based
> > device subdivision, which will allow us to _directly_ pass through the
> >
r the series:
Reviewed-by: Keith Busch
On Thu, May 09, 2019 at 03:28:32AM -0700, Kai-Heng Feng wrote:
> at 17:56, Christoph Hellwig wrote:
> > The we have the sequence in your patch. This seems to be related to
> > some of the MS wording, but I'm not sure what for example tearing down
> > the queues buys us. Can you explain a bit mor
On Thu, May 09, 2019 at 06:57:34PM +, mario.limoncie...@dell.com wrote:
> No, current Windows versions don't transition to D3 with inbox NVME driver.
> You're correct, it's explicit state transitions even if APST was enabled
> (as this patch is currently doing as well).
The proposed patch does
On Thu, May 09, 2019 at 10:54:04PM +0200, Rafael J. Wysocki wrote:
> On Thu, May 9, 2019 at 9:33 PM Keith Busch wrote:
> > #include
> > @@ -2851,6 +2852,8 @@ static int nvme_suspend(struct device *dev)
> > struct pci_dev *pdev = to_pci_dev(dev);
> >
On Thu, May 09, 2019 at 09:37:58PM +, mario.limoncie...@dell.com wrote:
> > +int nvme_set_power(struct nvme_ctrl *ctrl, unsigned npss)
> > +{
> > + int ret;
> > +
> > + mutex_lock(&ctrl->scan_lock);
> > + nvme_start_freeze(ctrl);
> > + nvme_wait_freeze(ctrl);
> > + ret = nvme_set_feat
On Thu, May 09, 2019 at 10:30:52PM -0700, Christoph Hellwig wrote:
> Also I don't see any reason why we'd need to do the freeze game on
> resume.
Right, definitely no reason for resume.
> Even on suspend it looks a little odd to me, as in theory
> the PM core should have already put the system
On Fri, May 10, 2019 at 01:23:11AM -0700, Rafael J. Wysocki wrote:
> On Fri, May 10, 2019 at 8:08 AM Kai-Heng Feng
> > I tested the patch from Keith and it has two issues just as simply skipping
> > nvme_dev_disable():
> > 1) It consumes more power in S2I
> > 2) System freeze after resume
>
> Well
On Thu, May 09, 2019 at 11:05:42PM -0700, Kai-Heng Feng wrote:
> Yes, that’s what I was told by the NVMe vendor, so all I know is to impose a
> memory barrier.
> If mb() shouldn’t be used here, what’s the correct variant to use in this
> context?
I'm afraid the requirement is still not clear to
On Fri, May 10, 2019 at 11:15:05PM +0800, Kai Heng Feng wrote:
> Sorry, I should mention that I use a slightly modified
> drivers/nvme/host/pci.c:
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 3e4fb891a95a..ece428ce6876 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/d
Feng
Signed-off-by: Keith Busch
---
Disclaimer: I've tested only on emulation faking support for the feature.
General question: different devices potentially have divergent values
for power consumption and transition latencies. Would it be useful to
allow a user tunable setting to select the desi
On Fri, May 24, 2019 at 07:45:00AM +1000, Stephen Rothwell wrote:
> Commits
>
> 5fb4aac756ac ("nvme: release namespace SRCU protection before performing
> controller ioctls")
> 90ec611adcf2 ("nvme: merge nvme_ns_ioctl into nvme_ioctl")
> 3f98bcc58cd5 ("nvme: remove the ifdef around nvme_nvm
On Fri, May 24, 2019 at 05:22:30PM +0200, Jiri Kosina wrote:
> Hi,
>
> Something is broken in Linus' tree (4dde821e429) with respect to
> hibernation on my thinkpad x270, and it seems to be nvme related.
>
> I reliably see the warning below during hibernation, and then sometimes
> resume sort of
y: ("Uninitialized pointer read")
> Signed-off-by: Colin Ian King
I would have sworn this was fixed as it's in my tree already, but the
submitted patch sure enough doesn't have it.
I've double-checked to see if there are any other discrepancies, and
there
On Sat, May 11, 2019 at 12:22:58AM -0700, Christoph Hellwig wrote:
> A couple nitpicks, mostly leftover from the previous iteration
> (I didn't see replies to those comments from you, despite seeing
> a reply to my mail, assuming it didn't get lost):
I thought you just meant the freeze/unfreeze se
On Sat, May 11, 2019 at 11:06:35PM -0700, Chaitanya Kulkarni wrote:
> On 5/10/19 2:35 PM, Keith Busch wrote:
> >
> > +int nvme_set_power(struct nvme_ctrl *ctrl, unsigned ps)
> dev->ctrl.npss is u8 can we use same data type here ?
> If this is due to last_ps we use a
On Sun, May 12, 2019 at 08:54:15AM -0700, Akinobu Mita wrote:
> +static void nvme_coredump_logs(struct nvme_dev *dev)
> +{
> + struct dev_coredumpm_bulk_data *bulk_data;
> +
> + if (!dev->dumps)
> + return;
> +
> + bulk_data = nvme_coredump_alloc(dev, 1);
> + if (!bulk_d
On Sun, May 12, 2019 at 08:54:16AM -0700, Akinobu Mita wrote:
> @@ -2536,6 +2539,9 @@ static void nvme_reset_work(struct work_struct *work)
> if (result)
> goto out;
>
> + nvme_coredump_logs(dev);
If you change nvme_coredump_logs to return an int, check it here for < 0
an
On Mon, May 13, 2019 at 02:24:41PM +, mario.limoncie...@dell.com wrote:
> This was not a disk with HMB, but with regard to the HMB I believe it needs
> to be
> removed during s0ix so that there isn't any mistake that SSD thinks it can
> access HMB
> memory in s0ix.
Is that really the case, t
On Mon, May 13, 2019 at 02:43:43PM +, mario.limoncie...@dell.com wrote:
> > Well, it sounds like your partners device does not work properly in this
> > case. There is nothing in the NVMe spec that says queues should be
> > torn down for deep power states, and that whole idea seems rather
> >
On Mon, May 13, 2019 at 03:05:42PM +, mario.limoncie...@dell.com wrote:
> This system power state - suspend to idle is going to freeze threads.
> But we're talking a multi threaded kernel. Can't there be a timing problem
> going
> on then too? With a disk flush being active in one task and t
On Mon, May 13, 2019 at 04:57:08PM +0200, Christoph Hellwig wrote:
> On Mon, May 13, 2019 at 02:54:49PM +, mario.limoncie...@dell.com wrote:
> > And NVME spec made it sound to me that while in a low power state it
> > shouldn't
> > be available if the memory isn't available.
> >
> > NVME spec
On Tue, May 14, 2019 at 01:16:22AM +0800, Kai-Heng Feng wrote:
> Disabling HMB prior suspend makes my original patch work without memory
> barrier.
>
> However, using the same trick on this patch still freezes the system during
> S2I.
Could you post your code, please?
On Tue, May 14, 2019 at 10:04:22AM +0200, Rafael J. Wysocki wrote:
> On Mon, May 13, 2019 at 5:10 PM Keith Busch wrote:
> >
> > On Mon, May 13, 2019 at 03:05:42PM +, mario.limoncie...@dell.com wrote:
> > > This system power state - suspend to idle is going to freeze
On Thu, Mar 28, 2019 at 02:59:30PM -0700, Yang Shi wrote:
> Yes, it still could fail. I can't tell which way is better for now. I
> just thought scanning another round then migrating should be still
> faster than swapping off the top of my head.
I think it depends on the relative capacities betw
On Fri, Mar 29, 2019 at 02:15:03PM -0700, Dan Williams wrote:
> On Mon, Mar 11, 2019 at 1:55 PM Keith Busch wrote:
> > +static __init struct memory_target *find_mem_target(unsigned int mem_pxm)
> > +{
> > + struct memory_target *target;
> > +
> > + lis
On Wed, Apr 17, 2019 at 10:13:44AM -0700, Dave Hansen wrote:
> On 4/17/19 2:23 AM, Michal Hocko wrote:
> > yes. This could be achieved by GFP_NOWAIT opportunistic allocation for
> > the migration target. That should prevent from loops or artificial nodes
> > exhausting quite naturally AFAICS. Maybe
On Fri, Apr 19, 2019 at 09:54:35AM -0700, Alison Schofield wrote:
> On Thu, Apr 18, 2019 at 05:07:12PM +0200, Rafael J. Wysocki wrote:
> > On Thu, Apr 18, 2019 at 5:02 PM Keith Busch wrote:
> > >
> > > On Wed, Apr 17, 2019 at 11:13:10AM -0700, Alison Schofield wrote:
On Mon, May 06, 2019 at 05:57:52AM -0700, Christoph Hellwig wrote:
> > However, similar to the (1), when the driver will support the devices with
> > hardware based passthrough, it will have to dedicate a bunch of queues to
> > the
> > guest, configure them with the appropriate PASID, and then let
shutdown = true;
> + /* fall through */
> case NVME_CTRL_CONNECTING:
> case NVME_CTRL_RESETTING:
> dev_warn_ratelimited(dev->ctrl.device,
Thanks, looks good.
Reviewed-by: Keith Busch
On Wed, May 08, 2019 at 01:58:33AM +0900, Akinobu Mita wrote:
> +static void nvme_coredump(struct device *dev)
> +{
> + struct nvme_dev *ndev = dev_get_drvdata(dev);
> +
> + mutex_lock(&ndev->shutdown_lock);
> +
> + nvme_coredump_prologue(ndev);
> + nvme_coredump_epilogue(ndev);
> +
On Tue, May 07, 2019 at 02:31:41PM -0600, Heitke, Kenneth wrote:
> On 5/7/2019 10:58 AM, Akinobu Mita wrote:
> > +
> > +static int nvme_get_telemetry_log_blocks(struct nvme_ctrl *ctrl, void *buf,
> > +size_t bytes, loff_t offset)
> > +{
> > + const size_t chunk
On Mon, Apr 29, 2019 at 10:59:26AM -0600, Alex Williamson wrote:
> On Mon, 29 Apr 2019 09:45:28 -0700
> Sinan Kaya wrote:
>
> > On 4/29/2019 10:51 AM, Alex Williamson wrote:
> > > So where do we go from here? I agree that dmesg is not necessarily a
> > > great choice for these sorts of events an