On Tue, 8 Oct 2013, Matias Bjørling wrote:
Convert the driver to blk mq.
The patch consists of:
* Initialization of mq data structures.
* Convert function calls from bio to request data structures.
* Queues are split into an admin queue and IO queues.
* bio splits are removed as it should be h
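The steps listed above (truncated in this archive view) center on replacing the driver's bio handling with a blk_mq_tag_set and a request-based queue_rq path. A minimal sketch of the tag-set initialization, with purely illustrative names (nvme_mq_dev, nvme_mq_ops and their fields are assumptions, not the driver's actual structures):

#include <linux/blk-mq.h>

/* Sketch only: struct and field names are illustrative, not the driver's. */
struct nvme_mq_dev {
        struct blk_mq_tag_set tagset;
        unsigned int queue_count;
        unsigned int q_depth;
};

/* .queue_rq and friends would be filled in to consume struct request. */
static const struct blk_mq_ops nvme_mq_ops;

static int nvme_init_tagset(struct nvme_mq_dev *dev)
{
        dev->tagset.ops = &nvme_mq_ops;
        dev->tagset.nr_hw_queues = dev->queue_count;    /* one hw context per IO queue */
        dev->tagset.queue_depth = dev->q_depth;
        dev->tagset.numa_node = NUMA_NO_NODE;
        dev->tagset.cmd_size = 0;                       /* room for per-request driver data */
        dev->tagset.driver_data = dev;

        return blk_mq_alloc_tag_set(&dev->tagset);      /* undone with blk_mq_free_tag_set() */
}

The admin queue would get its own, separate tag set so admin commands never compete with IO tags.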
On Tue, 8 Oct 2013, Jens Axboe wrote:
On Tue, Oct 08 2013, Matthew Wilcox wrote:
On Tue, Oct 08, 2013 at 11:34:20AM +0200, Matias Bjørling wrote:
The nvme driver implements itself as a bio-based driver. This is primarily because
of high lock congestion for high-performance nvm devices. To remove t
On Mon, Sep 19, 2016 at 12:38:05PM +0200, Alexander Gordeev wrote:
> On Fri, Sep 16, 2016 at 05:04:48PM -0400, Keith Busch wrote:
>
> > Having a 1:1 already seemed like the ideal solution since you can't
> > simultaneously utilize more than that from the host, so
On Sun, Jun 19, 2016 at 04:06:31PM -0700, Jethro Beekman wrote:
> If an NVMe drive is locked with ATA Security, most commands sent to the drive
> will fail. This includes commands sent by the kernel upon discovery to probe
> for partitions. The failing happens in such a way that trying to do anyt
On Mon, Jun 20, 2016 at 11:21:09AM -0700, Jethro Beekman wrote:
> On 20-06-16 08:26, Keith Busch wrote:
>
> Would this just be a matter of setting req->retries and checking for it in
> nvme_req_needs_retry? How does one keep track of the number of tries so far?
I just sent a pa
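For context, the mechanism being discussed is a per-request retry counter consulted when deciding whether a failed command should be requeued. A rough sketch of that pattern (the field and limit names are assumptions, not the nvme driver's actual code):

#include <linux/types.h>

#define MAX_RETRIES 5   /* assumed limit, for illustration only */

struct req_retry_ctx {
        unsigned int retries;   /* bumped each time the command is resubmitted */
};

/* Return true if the failed request should be retried rather than completed. */
static bool req_needs_retry(struct req_retry_ctx *ctx, int status)
{
        if (!status)
                return false;           /* success: nothing to retry */
        if (ctx->retries >= MAX_RETRIES)
                return false;           /* give up and surface the error */
        ctx->retries++;
        return true;
}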
On Thu, Sep 22, 2016 at 02:33:36PM -0700, J Freyensee wrote:
> ...and some SSDs don't even support this feature yet, so the number of
> different NVMe devices available to test initially will most likely be
> small (like the Fultondales I have, all I could check is to see if the
> code broke anythi
On Fri, Sep 23, 2016 at 09:34:41AM -0500, Bjorn Helgaas wrote:
> I made the necessary changes to match the renaming I did in the first
> patch, and I also used plain old "#ifdef" instead of "#if IS_ENABLED"
> since the rest of the file uses the former style. If there's a reason
> to switch, we sho
On Fri, Sep 23, 2016 at 02:12:23PM -0500, Bjorn Helgaas wrote:
> BTW, the "Volume Management Device Driver" config item appears by
> itself in the top-level menuconfig menu. That seems a little ...
> presumptuous; is it what you intended?
Not really intended, but I didn't really know any better a
se, the init ordering remains unchanged with this commit.
>
> We also delete the MODULE_LICENSE tag etc. since all that information
> was (or is now) contained at the top of the file in the comments.
Thanks for cleaning this up.
FWIW, all of the other pcie service drivers look like they could use
the same cleanup.
Reviewed-by: Keith Busch
On Thu, Sep 08, 2016 at 11:36:34AM +0800, Wang Weber wrote:
> Hi Jens,
>
> The following email was sent before, but it seems to be blocked since I
> received some messages about sending failure. So resend it with Google
> email account.
You're still sending non-plain-text email, so it's going to
repurpose these control bits for non-standard use.
Signed-off-by: Keith Busch
---
v2 -> v3:
Moved the slot op's attention status callback to pciehp_hpc.c
drivers/pci/hotplug/pciehp.h | 5 +
drivers/pci/hotplug/pciehp_core.c | 3 +++
drivers/pci/hotplug/pciehp_hpc
us is within a PCI domain, the patch appends
a bool to the pci_sysdata structure that the VMD driver sets during
initialization.
Requested-by: Kapil Karkra
Tested-by: Artur Paszkiewicz
Signed-off-by: Keith Busch
---
No change from previous version of this patch; just part of the series.
arc
On Tue, Sep 13, 2016 at 09:05:39AM -0600, Keith Busch wrote:
> +int pciehp_get_raw_attention_status(struct hotplug_slot *hotplug_slot,
> + u8 *value)
> +{
> + struct slot *slot = hotplug_slot->private;
> + st
ing the driver
> to pass in such a mask obtained from the (PCI) interrupt code. To fully
> support this feature in drivers the final third in the PCI layer will
> be needed as well.
Thanks, this looks good and tests successfully on my hardware.
For the series:
Reviewed-by: Keith Busch
On Fri, Aug 05, 2016 at 12:03:23PM -0700, Marc MERLIN wrote:
> Would this patch make sense as being the reason why I can't S3 sleep
> anymore and would you have a test patch against 4.5, 4.6, or 4.7 I can
> try to see if it fixes the problem?
Hi Marc,
It might be that blk-mq's hot cpu notifier is invo
On Fri, Aug 26, 2016 at 04:35:57PM +0200, Christoph Hellwig wrote:
> On Fri, Aug 26, 2016 at 07:31:33AM -0700, Andy Lutomirski wrote:
> > - Consider *deleting* the SCSI translation layer's power saving code.
> > It looks almost entirely bogus to me. It has an off-by-one in its
> > NPSS handling,
On Tue, Jun 20, 2017 at 01:37:15AM +0200, Thomas Gleixner wrote:
> static int vmd_enable_domain(struct vmd_dev *vmd)
> {
> struct pci_sysdata *sd = &vmd->sysdata;
> + struct fwnode_handle *fn;
> struct resource *res;
> u32 upper_bits;
> unsigned long flags;
> @@ -617,8
On Tue, Jun 20, 2017 at 01:37:32AM +0200, Thomas Gleixner wrote:
> @@ -441,18 +440,27 @@ void fixup_irqs(void)
>
> for_each_irq_desc(irq, desc) {
> const struct cpumask *affinity;
> - int break_affinity = 0;
> - int set_affinity = 1;
> + boo
On Wed, May 24, 2017 at 05:26:25PM +0300, Rakesh Pandit wrote:
> Commit c5f6ce97c1210 tries to address multiple resets but fails as
> work_busy doesn't involve any synchronization and can fail. This is
> easily reproducible, as can be seen by the WARNING below, which is triggered
> with line:
>
> WARN_
On Wed, May 24, 2017 at 03:06:31PM -0700, Andy Lutomirski wrote:
> They have known firmware bugs. A fix is apparently in the works --
> once fixed firmware is available, someone from Intel (Hi, Keith!)
> can adjust the quirk accordingly.
Here's the latest firmware with all the known fixes:
htt
On Fri, Nov 03, 2017 at 01:53:40PM +0100, Christoph Hellwig wrote:
> > - if (ns && ns->ms &&
> > + if (ns->ms &&
> > (!ns->pi_type || ns->ms != sizeof(struct t10_pi_tuple)) &&
> > !blk_integrity_rq(req) && !blk_rq_is_passthrough(req))
> > return BLK_STS_NOTSUPP;
>
>
the 'ph' format, which would look like this:
01 02 03 04 05 06 07 08
The change will make it look like this:
01-02-03-04-05-06-07-08
I think that was the original intention.
Reviewed-by: Keith Busch
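For reference, the kernel's %ph printk extension prints a small buffer as space-separated hex bytes, and the %phD variant uses dashes, which matches the before/after shown above. A minimal sketch:

#include <linux/printk.h>
#include <linux/types.h>

static void print_id_example(const u8 *id)
{
        /* %*ph prints space-separated hex bytes; %*phD separates with dashes. */
        pr_info("%*ph\n", 8, id);       /* 01 02 03 04 05 06 07 08 */
        pr_info("%*phD\n", 8, id);      /* 01-02-03-04-05-06-07-08 */
}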
On Sat, Nov 04, 2017 at 09:18:25AM +0100, Christoph Hellwig wrote:
> On Fri, Nov 03, 2017 at 09:02:04AM -0600, Keith Busch wrote:
> > If the namespace has metadata, but the request doesn't have a metadata
> > payload attached to it for whatever reason, we can't construct
On Mon, Nov 06, 2017 at 10:13:24AM +0100, Christoph Hellwig wrote:
> On Sat, Nov 04, 2017 at 09:38:45AM -0600, Keith Busch wrote:
> > That's not quite right. For non-PI metadata formats, we use the
> > 'nop_profile', which gets the metadata buffer allocated so we
On Sat, 3 Oct 2015, Ingo Molnar wrote:
* Keith Busch wrote:
+config VMDDEV
+ depends on PCI && PCI_DOMAINS && PCI_MSI && GENERIC_MSI_IRQ_DOMAIN && IRQ_DOMAIN_HIERARCHY
+ tristate "Volume Management Device Driver"
+ default N
On Mon, 5 Oct 2015, Ingo Molnar wrote:
* Keith Busch wrote:
The immediate benefit is that devices on VMD domains do not use resources
on the default PCI domain, so we have more than the 256 buses available.
Would be nice to incorporate that information in the help text and in the
changelog
Hi Bjorn,
Thanks for the feedback. Most of the issues you mentioned look pretty
straightforward to resolve, and I will fix them up for the next revision.
I have some immediate follow up comments to two issues you brought up:
On Tue, 6 Oct 2015, Bjorn Helgaas wrote:
+static int vmd_find_free_domain(v
On Tue, 6 Oct 2015, Keith Busch wrote:
On Tue, 6 Oct 2015, Bjorn Helgaas wrote:
+ resource_list_for_each_entry(entry, &resources) {
+ struct resource *source, *resource = entry->res;
+
+ if (!i) {
+ resource->
This patch adds struct x86_msi_ops to x86's PCI sysdata. This gives a
host bridge driver the option to provide alternate MSI Data Register
and MSI-X Table Entry programming for devices in PCI domains that do
not subscribe to usual "IOAPIC" format.
Signed-off-by: Keith Busch
CC:
s the VMD domain using the root bus configuration interface
provided by the PCI subsystem.
CC: Bryan Veal
CC: Dan Williams
CC: x...@kernel.org
CC: linux-kernel@vger.kernel.org
CC: linux-...@vger.kernel.org
Keith Busch (2):
x86: PCI bus specific MSI operations
x86/pci: Initial commit for ne
low VMD-owned
root ports, or VMD should be disabled by BIOS for such endpoints.
Contributors to this patch include:
Artur Paszkiewicz
Bryan Veal
Jon Derrick
Signed-off-by: Keith Busch
CC: Bryan Veal
CC: Dan Williams
CC: x...@kernel.org
CC: linux-kernel@vger.kernel.org
CC: linux-...@vg
On Fri, 28 Aug 2015, Thomas Gleixner wrote:
On Thu, 27 Aug 2015, Keith Busch wrote:
This patch adds struct x86_msi_ops to x86's PCI sysdata. This gives a
host bridge driver the option to provide alternate MSI Data Register
and MSI-X Table Entry programming for devices in PCI domains th
though, and only uses the variables if they were
successfully set, so suppressing the warning with uninitialized_var.
Signed-off-by: Keith Busch
---
drivers/regulator/helpers.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/regulator/helpers.c b/drivers/regulator
On Tue, 1 Sep 2015, Mark Brown wrote:
On Tue, Sep 01, 2015 at 09:52:13AM +0900, Krzysztof Kozlowski wrote:
2015-09-01 1:41 GMT+09:00 Keith Busch :
int regulator_is_enabled_regmap(struct regulator_dev *rdev)
{
- unsigned int val;
+ unsigned int uninitialized_var(val);
int
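For background, the uninitialized_var() macro of that era expanded to a self-assignment, which silences gcc's maybe-uninitialized warning without generating extra code. A sketch of the idea (the macro has since been removed from the kernel, and the helper below is illustrative only):

/* The self-assignment convinces gcc the variable has a value. */
#define uninitialized_var(x) x = x

static int example_read(int (*read_reg)(unsigned int *out))
{
        unsigned int uninitialized_var(val);
        int ret = read_reg(&val);       /* val is only written on success */

        if (ret)
                return ret;
        return val & 0x1;               /* gcc can't always prove val was set on this path */
}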
On Tue, 6 Oct 2015, Bjorn Helgaas wrote:
+static int __init vmd_init(void)
+{
+ return pci_register_driver(&vmd_drv);
+}
+module_init(vmd_init);
module_pci_driver(vmd_drv)?
We actually only have a module_init in this driver, and purposely left
out module_exit. We don't want to be able t
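For reference, module_pci_driver() expands (via module_driver()) to a register/unregister pair, so using it would implicitly generate the module_exit() this driver deliberately omits. Roughly:

#include <linux/module.h>
#include <linux/pci.h>

static struct pci_driver vmd_drv;       /* stands in for the driver's real pci_driver */

/* Approximately what module_pci_driver(vmd_drv) would generate: */
static int __init vmd_drv_init(void)
{
        return pci_register_driver(&vmd_drv);
}
module_init(vmd_drv_init);

static void __exit vmd_drv_exit(void)
{
        pci_unregister_driver(&vmd_drv);        /* the unload path the driver wants to avoid */
}
module_exit(vmd_drv_exit);

Keeping only the hand-written module_init(), with no module_exit(), is what prevents the module from being unloaded.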
On Wed, 14 Oct 2015, Christoph Hellwig wrote:
Analysis and tentative fix below:
blktrace for before the commit:
259,012 0.02543 2394 G D 0 + 8388607 [mkfs.xfs]
259,013 0.08230 2394 I D 0 + 8388607 [mkfs.xfs]
259,014 0.31090 207
tural to
free cpumask in its counterpart, blk_mq_free_tags().
Thanks for the fix.
Reviewed-by: Keith Busch
Fixes: f26cdc8536ad ("blk-mq: Shared tag enhancements")
Signed-off-by: Jun'ichi Nomura
Cc: Keith Busch
Cc: Jens Axboe
--
To unsubscribe from this list: send the line "
epeat the same configuration checks.
Signed-off-by: Keith Busch
---
include/linux/bio.h | 32 ++--
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index b9b6e04..f0c46d0 100644
--- a/include/linux/bio.h
epeat the same configuration checks.
Signed-off-by: Keith Busch
---
v1 -> v2: Fixed corrupted patch and subject line spelling error
include/linux/bio.h | 32 ++--
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/include/linux/bio.h b/include/lin
eatures should either not be placed below VMD-owned
root ports, or VMD should be disabled by BIOS for such endpoints.
Contributors to this patch include:
Artur Paszkiewicz
Bryan Veal
Jon Derrick
Signed-off-by: Keith Busch
---
v1 -> v2:
The original RFC used custom x86_msi_ops
->ctrl);
> if (pci_get_drvdata(pdev))
> device_release_driver(&pdev->dev);
> nvme_put_ctrl(&dev->ctrl);
Looks good to me.
Reviewed-by: Keith Busch
On Mon, Jul 23, 2018 at 04:24:31PM -0600, Alex Williamson wrote:
> Take advantage of NVMe devices using a standard interface to quiesce
> the controller prior to reset, including device specific delays before
> and after that reset. This resolves several NVMe device assignment
> scenarios with two
On Wed, Jan 24, 2018 at 11:29:12PM +0100, Paul Menzel wrote:
> Am 22.01.2018 um 22:30 schrieb Keith Busch:
> > The nvme spec guides toward longer times than that. I don't see the
> > point of warning users about things operating within spec.
>
> I quickly glanced ove
.
>
> Suggested-by: James Smart
> Reviewed-by: James Smart
> Signed-off-by: Jianchao Wang
This looks fine. Thank you for your patience.
Reviewed-by: Keith Busch
On Thu, Jan 11, 2018 at 01:09:39PM +0800, Jianchao Wang wrote:
> The calculation of iod and avg_seg_size may be meaningless if
> nvme_pci_use_sgls returns before using them. So calculate them
> just before they are used.
The compiler will do the right thing here, but I see what you mean. I
think Christoph has
On Thu, Jan 11, 2018 at 06:50:40PM +0100, Maik Broemme wrote:
> I've re-run the test with 4.15rc7.r111.g5f615b97cdea and the following
> patches from Keith:
>
> [PATCH 1/4] PCI/AER: Return appropriate value when AER is not supported
> [PATCH 2/4] PCI/AER: Provide API for getting AER information
>
On Tue, Jan 30, 2018 at 11:41:07AM +0800, jianchao.wang wrote:
> Another point that confuses me is whether nvme_set_host_mem is necessary
> in nvme_dev_disable ?
> As the comment:
>
> /*
>* If the controller is still alive tell it to stop using the
>
with current code we do not acknowledge the
> interrupt and we get a DPC interrupt storm.
> This patch acknowledges the interrupt in the interrupt handler.
>
> Signed-off-by: Oza Pawandeep
Thanks, looks good to me.
Reviewed-by: Keith Busch
On Thu, Jan 18, 2018 at 11:35:59AM -0500, Sinan Kaya wrote:
> On 1/18/2018 12:32 AM, p...@codeaurora.org wrote:
> > On 2018-01-18 08:26, Keith Busch wrote:
> >> On Wed, Jan 17, 2018 at 08:27:39AM -0800, Sinan Kaya wrote:
> >>> On 1/17/2018 5:37 AM, Oza Pawande
On Thu, Jan 18, 2018 at 06:10:02PM +0800, Jianchao Wang wrote:
> + * - When the ctrl.state is NVME_CTRL_RESETTING, the expired
> + * request should come from the previous work and we handle
> + * it as nvme_cancel_request.
> + * - When the ctrl.state is NVME_CTRL_RECONNECTIN
On Fri, Jan 19, 2018 at 01:55:29PM +0800, jianchao.wang wrote:
> On 01/19/2018 12:59 PM, Keith Busch wrote:
> > On Thu, Jan 18, 2018 at 06:10:02PM +0800, Jianchao Wang wrote:
> >> + * - When the ctrl.state is NVME_CTRL_RESETTING, the expired
> >> + * request sh
On Thu, Jan 18, 2018 at 06:10:00PM +0800, Jianchao Wang wrote:
> Hello
>
> Please consider the following scenario.
> nvme_reset_ctrl
> -> set state to RESETTING
> -> queue reset_work
> (scheduling)
> nvme_reset_work
> -> nvme_dev_disable
> -> quiesce queues
> -> nvme_cance
On Fri, Jan 19, 2018 at 04:14:02PM +0800, jianchao.wang wrote:
> On 01/19/2018 04:01 PM, Keith Busch wrote:
> > The nvme_dev_disable routine makes forward progress without depending on
> > timeout handling to complete expired commands. Once controller disabling
> > completes,
On Fri, Jan 19, 2018 at 05:02:06PM +0800, jianchao.wang wrote:
> We should not use blk_sync_queue here, the requeue_work and run_work will be
> canceled.
> Just flush_work(&q->timeout_work) should be ok.
I agree flushing timeout_work is sufficient. All the other work had
already better not be run
On Fri, Jan 19, 2018 at 09:56:48PM +0800, jianchao.wang wrote:
> In nvme_dev_disable, the outstanding requests will be requeued finally.
> I'm afraid the requests requeued on the q->requeue_list will be blocked until
> another requeue
> occurs, if we cancel the requeue work before it gets scheduled
On Mon, Jan 22, 2018 at 10:02:12PM +0100, Paul Menzel wrote:
> Dear Linux folks,
>
>
> Benchmarking the ACPI S3 suspend and resume times with `sleepgraph.py
> -config config/suspend-callgraph.cfg` [1], shows that the NVMe disk SAMSUNG
> MZVKW512HMJP-0 in the TUXEDO Book BU1406 takes between 0
On Mon, Jan 22, 2018 at 09:14:23PM +0100, Christoph Hellwig wrote:
> > Link: https://lkml.org/lkml/2018/1/19/68
> > Suggested-by: Keith Busch
> > Signed-off-by: Keith Busch
> > Signed-off-by: Jianchao Wang
>
> Why does this have a signoff from Keith?
Right, I
On Thu, Jan 04, 2018 at 12:01:34PM -0700, Logan Gunthorpe wrote:
> Register the CMB buffer as p2pmem and use the appropriate allocation
> functions to create and destroy the IO SQ.
>
> If the CMB supports WDS and RDS, publish it for use as p2p memory
> by other devices.
<>
> + if (qid && dev
On Fri, Jan 05, 2018 at 11:19:28AM -0700, Logan Gunthorpe wrote:
> Although it is not explicitly stated anywhere, pci_alloc_p2pmem() should
> always be at least 4k aligned. This is because the gen_pool that implements
> it is created with PAGE_SHIFT for its min_alloc_order.
Ah, I see that now. Tha
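To illustrate the point about alignment: a gen_pool created with min_alloc_order = PAGE_SHIFT hands out memory in page-sized granules, so allocations come back page aligned as long as the pool's base address is itself page aligned. A minimal sketch (names are illustrative):

#include <linux/genalloc.h>
#include <linux/mm.h>

static struct gen_pool *create_page_granular_pool(void *base, phys_addr_t phys,
                                                  size_t size, int nid)
{
        struct gen_pool *pool;

        /* min_alloc_order = PAGE_SHIFT: smallest unit handed out is one page */
        pool = gen_pool_create(PAGE_SHIFT, nid);
        if (!pool)
                return NULL;

        if (gen_pool_add_virt(pool, (unsigned long)base, phys, size, nid)) {
                gen_pool_destroy(pool);
                return NULL;
        }
        return pool;
}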
On Tue, Jan 09, 2018 at 10:03:11AM +0800, Jianchao Wang wrote:
> Hello
Sorry for the distraction, but could you possibly fix the date on your
machine? For some reason, lists.infradead.org sorts threads by the time
you claim to have sent your message rather than the time it was received,
and you're
On Mon, Jan 29, 2018 at 11:07:35AM +0800, Jianchao Wang wrote:
> nvme_set_host_mem will invoke nvme_alloc_request without the NOWAIT
> flag, which is unsafe for nvme_dev_disable. The adminq driver tags
> may have been used up when the previous outstanding adminq requests
> cannot be completed due to some
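The concern above is a blocking tag allocation against a queue whose outstanding requests can no longer complete. Passing BLK_MQ_REQ_NOWAIT makes the allocation fail immediately instead of sleeping for a free tag; a sketch of that pattern (not the driver's actual call site):

#include <linux/blk-mq.h>

/* Try to get a tag without sleeping; return NULL if the queue is exhausted. */
static struct request *try_alloc_admin_request(struct request_queue *q)
{
        struct request *req;

        req = blk_mq_alloc_request(q, REQ_OP_DRV_OUT, BLK_MQ_REQ_NOWAIT);
        if (IS_ERR(req))
                return NULL;    /* no free tags; the caller must not block here */

        return req;
}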
On Mon, Jan 29, 2018 at 09:55:41PM +0200, Sagi Grimberg wrote:
> > Thanks for the fix. It looks like we still have a problem, though.
> > Commands submitted with the "shutdown_lock" held need to be able to make
> > forward progress without relying on a completion, but this one could
> > block indef
x86_vector_free_irqs(domain, virq, i);
> return err;
> }
>
The patch does indeed fix all the warnings and allows device binding to
succeed, albeit in a degraded performance mode. Despite that, this is
a good fix, and looks applicable to 4.4-stable, so:
Tested-by: Keith
On Wed, Jan 17, 2018 at 08:34:22AM +0100, Thomas Gleixner wrote:
> Can you trace the matrix allocations from the very beginning or tell me how
> to reproduce. I'd like to figure out why this is happening.
Sure, I'll get the irq_matrix events.
I reproduce this on a machine with 112 CPUs and 3 NVMe
er 200 iterations that used to
fail within only a few. I'd say the problem is cured. Thanks!
Tested-by: Keith Busch
On Wed, Jan 17, 2018 at 08:27:39AM -0800, Sinan Kaya wrote:
> On 1/17/2018 5:37 AM, Oza Pawandeep wrote:
> > +static bool dpc_wait_link_active(struct pci_dev *pdev)
> > +{
>
> I think you can also make this function common instead of making another copy
> here.
> Of course, this would be another
Looks good.
Reviewed-by: Keith Busch
Looks good.
Reviewed-by: Keith Busch
On Thu, Jan 18, 2018 at 09:10:43AM +0100, Thomas Gleixner wrote:
> Can you please provide the output of
>
> # cat /sys/kernel/debug/irq/irqs/$ONE_I40_IRQ
# cat /sys/kernel/debug/irq/irqs/48
handler: handle_edge_irq
device: :1a:00.0
status: 0x
istate: 0x
ddepth: 0
wdep
before commit 6bfe04255d5e ("nvme: add hostid token to fabric
> options").
>
> Fixes: 6bfe04255d5e ("nvme: add hostid token to fabric options")
> Reported-by: Alexander Potapenko
> Signed-off-by: Johannes Thumshirn
Thanks for the report and the fix. It'd still be good to use the kzalloc
variant in addition to this.
Reviewed-by: Keith Busch
On Mon, Jan 15, 2018 at 10:02:04AM +0800, jianchao.wang wrote:
> Hi keith
>
> Thanks for your kindly review and response.
I agree with Sagi's feedback, but I can't take credit for it. :)
I hoped to have a better report before the weekend, but I've run out of
time and am without my machine till next week, so I'm sending what I have
and praying someone more in the know will have a better clue.
I've a few NVMe drives and occasionally the IRQ teardown and bring-up
is failing. Resetting the c
This is all way over my head, but the part that obviously shows
something's gone wrong:
kworker/u674:3-1421 [028] d... 335.307051: irq_matrix_reserve_managed:
bit=56 cpu=0 online=1 avl=86 alloc=116 managed=3 online_maps=112
global_avl=22084, global_rsvd=157, total_alloc=570
kworker/u674:3
On Tue, Jan 16, 2018 at 12:20:18PM +0100, Thomas Gleixner wrote:
> What we want is s/i + 1/i/
>
> That's correct because x86_vector_free_irqs() does:
>
>for (i = 0; i < nr; i++)
>
>
> So if we fail at the first irq, then the loop will do nothing. Failing on
> the se
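The point being made is the classic partial-cleanup bound: if setting up entry i fails, only entries 0..i-1 were actually allocated, so the unwind must cover i entries, not i + 1. A generic sketch (alloc_one/free_first are hypothetical helpers):

struct item;
int alloc_one(struct item *it);
void free_first(struct item *items, unsigned int nr);   /* frees items[0..nr-1] */

static int alloc_all(struct item *items, unsigned int nr)
{
        unsigned int i;
        int err;

        for (i = 0; i < nr; i++) {
                err = alloc_one(&items[i]);
                if (err) {
                        /* entry i was never allocated: unwind only 0..i-1 */
                        free_first(items, i);   /* i, not i + 1 */
                        return err;
                }
        }
        return 0;
}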
On Tue, Jan 16, 2018 at 03:28:19PM +0100, Johannes Thumshirn wrote:
> Add tracepoints for nvme command submission and completion. The tracepoints
> are modeled after SCSI's trace_scsi_dispatch_cmd_start() and
> trace_scsi_dispatch_cmd_done() tracepoints and fulfil a similar purpose,
> namely a fast
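For readers unfamiliar with the mechanism, a tracepoint of this kind is declared with the TRACE_EVENT() macro in a trace header; the sketch below shows the general shape only, with made-up fields, not the layout the patch actually adds:

#undef TRACE_SYSTEM
#define TRACE_SYSTEM nvme

#if !defined(_TRACE_NVME_SKETCH_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_NVME_SKETCH_H

#include <linux/tracepoint.h>

TRACE_EVENT(nvme_cmd_start,
        TP_PROTO(int qid, u8 opcode, u16 cid),
        TP_ARGS(qid, opcode, cid),
        TP_STRUCT__entry(
                __field(int, qid)
                __field(u8, opcode)
                __field(u16, cid)
        ),
        TP_fast_assign(
                __entry->qid = qid;
                __entry->opcode = opcode;
                __entry->cid = cid;
        ),
        TP_printk("qid=%d opcode=0x%02x cid=%u",
                  __entry->qid, __entry->opcode, __entry->cid)
);

#endif /* _TRACE_NVME_SKETCH_H */

/* This must be outside the include guard. */
#include <trace/define_trace.h>

The driver then calls trace_nvme_cmd_start(...) at submission time, mirroring the SCSI dispatch tracepoints mentioned above.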
On Sun, Mar 11, 2018 at 11:03:58PM -0400, Sinan Kaya wrote:
> On 3/11/2018 6:03 PM, Bjorn Helgaas wrote:
> > On Wed, Feb 28, 2018 at 10:34:11PM +0530, Oza Pawandeep wrote:
>
> > That difference has been there since the beginning of DPC, so it has
> > nothing to do with *this* series EXCEPT for the
On Mon, Mar 12, 2018 at 08:16:38PM +0530, p...@codeaurora.org wrote:
> On 2018-03-12 19:55, Keith Busch wrote:
> > On Sun, Mar 11, 2018 at 11:03:58PM -0400, Sinan Kaya wrote:
> > > On 3/11/2018 6:03 PM, Bjorn Helgaas wrote:
> > > > On Wed, Feb 28, 2018 at 10:34:1
On Mon, Mar 12, 2018 at 09:04:47PM +0530, p...@codeaurora.org wrote:
> On 2018-03-12 20:28, Keith Busch wrote:
> > I'm not sure I understand. The link is disabled while DPC is triggered,
> > so if anything, you'd want to un-enumerate everything below the
> > containe
On Mon, Mar 12, 2018 at 10:21:29AM -0700, Alexander Duyck wrote:
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 024a1beda008..9cab9d0d51dc 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -1953,6 +1953,7 @@ static inline void pci_mmcfg_late_init(void) { }
> int
On Mon, Mar 12, 2018 at 01:41:07PM -0400, Sinan Kaya wrote:
> I was just writing a reply to you. You acted first :)
>
> On 3/12/2018 1:33 PM, Keith Busch wrote:
> >>> After releasing a slot from DPC, the link is allowed to retrain. If
> >>> there
> >>
On Mon, Mar 12, 2018 at 11:09:34AM -0700, Alexander Duyck wrote:
> On Mon, Mar 12, 2018 at 10:40 AM, Keith Busch wrote:
> > On Mon, Mar 12, 2018 at 10:21:29AM -0700, Alexander Duyck wrote:
> >> diff --git a/include/linux/pci.h b/include/linux/pci.h
> >> index 024a1b
Hi Jianchao,
The patch tests fine on all hardware I had. I'd like to queue this up
for the next 4.16-rc. Could you send a v3 with the cleanup changes Andy
suggested and a changelog aligned with Ming's insights?
Thanks,
Keith
On Mon, Mar 12, 2018 at 02:47:30PM -0500, Bjorn Helgaas wrote:
> [+cc Alex]
>
> On Mon, Mar 12, 2018 at 08:25:51AM -0600, Keith Busch wrote:
> > On Sun, Mar 11, 2018 at 11:03:58PM -0400, Sinan Kaya wrote:
> > > On 3/11/2018 6:03 PM, Bjorn Helgaas wrote:
> > > >
On Mon, Mar 05, 2018 at 12:33:29PM +1100, Oliver wrote:
> On Thu, Mar 1, 2018 at 10:40 AM, Logan Gunthorpe wrote:
> > @@ -429,10 +429,7 @@ static void __nvme_submit_cmd(struct nvme_queue *nvmeq,
> > {
> > u16 tail = nvmeq->sq_tail;
>
> > - if (nvmeq->sq_cmds_io)
> > -
On Mon, Mar 05, 2018 at 01:10:53PM -0700, Jason Gunthorpe wrote:
> So when reading the above mlx code, we see the first wmb() being used
> to ensure that CPU stores to cachable memory are visible to the DMA
> triggered by the doorbell ring.
IIUC, we don't need a similar barrier for NVMe to ensure
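For context, the general pattern being discussed is: write the command into host memory the device will DMA from, then ring the doorbell. With the plain writel() accessor the earlier store is ordered before the MMIO write; only a relaxed accessor would need an explicit wmb(). A generic sketch, not the nvme driver's actual submission path:

#include <linux/io.h>
#include <linux/string.h>
#include <linux/types.h>

struct sq_ctx {
        void *sq_entry;                 /* host memory the device will DMA from */
        void __iomem *doorbell;         /* device register that triggers the fetch */
};

static void submit_cmd(struct sq_ctx *sq, const void *cmd, size_t len, u32 tail)
{
        memcpy(sq->sq_entry, cmd, len);         /* 1. command lands in host memory */

        /* 2. writel() orders the store above before the doorbell write. */
        writel(tail, sq->doorbell);

        /*
         * Had this used writel_relaxed(), an explicit wmb() between the
         * memcpy and the doorbell write would be needed to keep the device
         * from fetching a stale entry.
         */
}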
On Thu, Apr 12, 2018 at 10:34:37AM -0400, Sinan Kaya wrote:
> On 4/12/2018 10:06 AM, Bjorn Helgaas wrote:
> >
> > I think the scenario you are describing is two systems that are
> > identical except that in the first, the endpoint is below a hotplug
> > bridge, while in the second, it's below a no
On Thu, Apr 12, 2018 at 08:39:54AM -0600, Keith Busch wrote:
> On Thu, Apr 12, 2018 at 10:34:37AM -0400, Sinan Kaya wrote:
> > On 4/12/2018 10:06 AM, Bjorn Helgaas wrote:
> > >
> > > I think the scenario you are describing is two systems that are
> > > ide
On Thu, Apr 12, 2018 at 12:27:20PM -0400, Sinan Kaya wrote:
> On 4/12/2018 11:02 AM, Keith Busch wrote:
> >
> > Also, I thought the plan was to keep hotplug and non-hotplug the same,
> > except for the very end: if not a hotplug bridge, initiate the rescan
> > automat
Thanks, applied for 4.17-rc1.
I was a little surprised git was able to apply this since the patch
format is off, but it worked!
On Wed, Mar 21, 2018 at 03:06:05AM -0700, Matias Bjørling wrote:
> > outside of nvme core so that we can use it form lightnvm.
> >
> > Signed-off-by: Javier González
> > ---
> > drivers/lightnvm/core.c | 11 +++
> > drivers/nvme/host/core.c | 6 ++--
> > drivers/nvme/host/lightn
On Wed, Mar 21, 2018 at 11:48:09PM +0800, Ming Lei wrote:
> On Wed, Mar 21, 2018 at 01:10:31PM +0100, Marta Rybczynska wrote:
> > > On Wed, Mar 21, 2018 at 12:00:49PM +0100, Marta Rybczynska wrote:
> > >> NVMe driver uses threads for the work at device reset, including enabling
> > >> the PCIe devi
On Wed, Mar 21, 2018 at 08:27:07PM +0100, Matias Bjørling wrote:
> Enable the lightnvm integration to use the nvme_get_log_ext()
> function.
>
> Signed-off-by: Matias Bjørling
Thanks, applied to nvme-4.17.
On Mon, Apr 09, 2018 at 10:41:49AM -0400, Oza Pawandeep wrote:
> This patch renames error recovery to generic name with pcie prefix
>
> Signed-off-by: Oza Pawandeep
Looks fine.
Reviewed-by: Keith Busch
off-by: Oza Pawandeep
Looks fine.
Reviewed-by: Keith Busch
On Mon, Apr 09, 2018 at 10:41:51AM -0400, Oza Pawandeep wrote:
> This patch implements generic pcie_port_find_service() routine.
>
> Signed-off-by: Oza Pawandeep
Looks good.
Reviewed-by: Keith Busch
On Mon, Apr 09, 2018 at 10:41:53AM -0400, Oza Pawandeep wrote:
> +/**
> + * pcie_wait_for_link - Wait for link till it's active/inactive
> + * @pdev: Bridge device
> + * @active: waiting for active or inactive ?
> + *
> + * Use this to wait till link becomes active or inactive.
> + */
> +bool pcie_
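A rough sketch of what such a wait typically looks like, polling the bridge's Link Status register for the Data Link Layer Link Active bit (illustrative only, not the patch's implementation):

#include <linux/pci.h>
#include <linux/delay.h>

/* Poll for up to ~1 second for the link to reach the requested state. */
static bool wait_for_link_state(struct pci_dev *pdev, bool active)
{
        int timeout_ms = 1000;
        u16 lnksta;

        for (;;) {
                pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
                if (!!(lnksta & PCI_EXP_LNKSTA_DLLLA) == active)
                        return true;
                if (timeout_ms <= 0)
                        return false;
                msleep(10);
                timeout_ms -= 10;
        }
}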
On Mon, Apr 09, 2018 at 10:41:52AM -0400, Oza Pawandeep wrote:
> +static int find_dpc_dev_iter(struct device *device, void *data)
> +{
> + struct pcie_port_service_driver *service_driver;
> + struct device **dev;
> +
> + dev = (struct device **) data;
> +
> + if (device->bus == &pci
On Thu, Mar 08, 2018 at 08:42:20AM +0100, Christoph Hellwig wrote:
>
> So I suspect we'll need to go with a patch like this, just with a way
> better changelog.
I have to agree this is required for that use case. I'll run some
quick tests and propose an alternate changelog.
Longer term, the curr
On Tue, Mar 13, 2018 at 06:45:00PM +0800, Ming Lei wrote:
> On Tue, Mar 13, 2018 at 05:58:08PM +0800, Jianchao Wang wrote:
> > Currently, adminq and ioq1 share the same irq vector which is set
> > affinity to cpu0. If a system allows cpu0 to be offlined, the adminq
> > will not be able work any mor
Thanks, applied for 4.17.