On Mon, Apr 05, 2021 at 11:42:31PM +, Chuck Lever III wrote:
> > On Apr 5, 2021, at 4:07 PM, Jason Gunthorpe wrote:
> > On Mon, Apr 05, 2021 at 03:41:15PM +0200, Christoph Hellwig wrote:
> >> On Mon, Apr 05, 2021 at 08:23:54AM +0300, Leon Romanovsky wrote:
> >>> From: Leon Romanovsky
> >>>
>
On Thu, Apr 01, 2021 at 12:00:39PM +0300, Dan Carpenter wrote:
> Hi Keith,
>
> I've been trying to figure out ways Smatch can check for device managed
> resources. Like adding rules that if we call dev_set_name(&foo->bar)
> then it's device managed and if there is a kfree(foo) without calling
>
A 'false' return means the value was safely set, so the comment should
say that 'true' is returned when the result is not considered safe.
Cc: Jason Gunthorpe
Cc: Kees Cook
Signed-off-by: Keith Busch
---
include/linux/overflow.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff -
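For context, a minimal userspace sketch of the semantics the comment should
describe: the kernel's check_*_overflow() helpers return true on overflow and
false when the result was stored safely. The program below uses the GCC/Clang
builtin that those macros wrap; it is illustrative, not the patched kernel code.

#include <stdio.h>

int main(void)
{
	unsigned int sum;

	/* Like check_add_overflow(): returns true only when the addition
	 * overflows; on a false return, 'sum' holds the safe result. */
	if (__builtin_add_overflow(4000000000u, 500000000u, &sum))
		printf("overflow: result is not safe to use\n");
	else
		printf("safe result: %u\n", sum);
	return 0;
}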
For the subject, s/superflues/superfluous
On Tue, Mar 30, 2021 at 10:34:25AM -0700, Sagi Grimberg wrote:
>
> > > It is, but in this situation, the controller is sending a second
> > > completion that results in a use-after-free, which makes the
> > > transport irrelevant. Unless there is some other flow (which is
> > > unclear
> > > to me
On Thu, Mar 25, 2021 at 09:48:37AM +, Niklas Cassel wrote:
> From: Niklas Cassel
>
> When a passthru command targets a specific namespace, the ns parameter to
> nvme_user_cmd()/nvme_user_cmd64() is set. However, there is currently no
> validation that the nsid specified in the passthru comman
On Wed, Mar 10, 2021 at 02:41:10PM +0100, Christoph Hellwig wrote:
> On Wed, Mar 10, 2021 at 02:21:56PM +0100, Christoph Hellwig wrote:
> > Can you try this patch instead?
> >
> > http://lists.infradead.org/pipermail/linux-nvme/2021-February/023183.html
>
> Actually, please try the patch below in
On Tue, Mar 02, 2021 at 08:18:40AM +0100, Hannes Reinecke wrote:
> On 3/1/21 9:59 PM, Keith Busch wrote:
> > On Mon, Mar 01, 2021 at 05:53:25PM +0100, Hannes Reinecke wrote:
> >> On 3/1/21 5:05 PM, Keith Busch wrote:
> >>> On Mon, Mar 01, 2021 at 02:55:30PM +0100, Ha
On Mon, Mar 01, 2021 at 05:53:25PM +0100, Hannes Reinecke wrote:
> On 3/1/21 5:05 PM, Keith Busch wrote:
> > On Mon, Mar 01, 2021 at 02:55:30PM +0100, Hannes Reinecke wrote:
> > > On 3/1/21 2:26 PM, Daniel Wagner wrote:
> > > > On Sat, Feb 27, 2021 at 02:19
On Mon, Mar 01, 2021 at 02:55:30PM +0100, Hannes Reinecke wrote:
> On 3/1/21 2:26 PM, Daniel Wagner wrote:
> > On Sat, Feb 27, 2021 at 02:19:01AM +0900, Keith Busch wrote:
> >> Crashing is bad, silent data corruption is worse. Is there truly no
> >> defense against that
On Fri, Feb 26, 2021 at 05:42:46PM +0100, Hannes Reinecke wrote:
> On 2/26/21 5:13 PM, Keith Busch wrote:
> >
> > That's just addressing a symptom. You can't fully verify the request is
> > valid this way because the host could have started the same command ID
>
On Fri, Feb 26, 2021 at 01:54:00PM +0100, Hannes Reinecke wrote:
> On 2/26/21 1:35 PM, Daniel Wagner wrote:
> > On Mon, Feb 15, 2021 at 01:29:45PM -0800, Sagi Grimberg wrote:
> > > Well, I think we should probably figure out why that is happening first.
> >
> > I got my hands on a tcpdump trace. I
On Sat, Feb 20, 2021 at 05:10:18PM +0800, Yang Li wrote:
> fixed the following coccicheck:
> ./drivers/nvme/host/core.c:3440:60-61: WARNING opportunity for
> kobj_to_dev()
> ./drivers/nvme/host/core.c:3679:60-61: WARNING opportunity for
> kobj_to_dev()
>
> Reported-by: Abaci Robot
> Signed-off-by
On Sat, Feb 20, 2021 at 06:01:56PM +, David Laight wrote:
> From: SelvaKumar S
> > Sent: 19 February 2021 12:45
> >
> > This patchset tries to add support for TP4065a ("Simple Copy Command"),
> > v2020.05.04 ("Ratified")
> >
> > The Specification can be found in following link.
> > https://nv
On Fri, Feb 12, 2021 at 12:58:27PM -0800, Sagi Grimberg wrote:
> > blk_mq_tag_to_rq() will always return a request if the command_id is
> > in the valid range. Check if the request has been started. If we
> > blindly process the request we might double complete a request which
> > can be fatal.
>
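A hedged sketch of the defensive completion path being discussed: validate the
command_id range, then refuse to complete a request that was never started.
The names follow the nvme-pci driver of that era (nvme_queue_tagset() appears
in the patches quoted further down), but the body is illustrative rather than
the merged fix.

static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
{
	struct nvme_completion *cqe = &nvmeq->cqes[idx];
	struct request *req;

	/* Reject command ids outside the tagset range. */
	if (unlikely(cqe->command_id >= nvmeq->q_depth)) {
		dev_warn(nvmeq->dev->ctrl.device,
			 "invalid id %d completed on queue %d\n",
			 cqe->command_id, le16_to_cpu(cqe->sq_id));
		return;
	}

	req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), cqe->command_id);
	/* A duplicate (spurious) completion finds the request not started. */
	if (unlikely(!req || !blk_mq_request_started(req)))
		return;

	/* ... normal completion continues here ... */
}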
On Wed, Feb 03, 2021 at 12:22:31PM +0100, Filippo Sironi wrote:
>
> On 2/3/21 12:15 PM, Christoph Hellwig wrote:
> >
> > On Wed, Feb 03, 2021 at 12:12:31PM +0100, Filippo Sironi wrote:
> > > I don't disagree on the first part of your sentence, this is a big
> > > oversight.
> >
> > But it is not
On Mon, Feb 01, 2021 at 10:30:17AM -0800, Jianxiong Gao wrote:
> @@ -868,12 +871,24 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev,
> struct request *req,
> if (!iod->nents)
> goto out_free_sg;
>
> + offset_ret = dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_
On Thu, Jan 28, 2021 at 12:15:28PM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 27, 2021 at 04:38:28PM -0800, Jianxiong Gao wrote:
> > For devices that need to preserve address offset on mapping through
> > swiotlb, this patch adds offset preserving based on page_offset_mask
> > and keeps the
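For reference, a sketch of how the driver side opts in, mirroring the in-flight
patch quoted above. The mask value completes the truncated line on the
assumption that it is the controller page size minus one, and the return-value
check follows the patch as posted; the finally merged interface may differ.

	/* Ask the DMA layer (including swiotlb bounce buffers) to
	 * preserve the address bits below the controller page size.
	 * Mask assumed: NVME_CTRL_PAGE_SIZE - 1. */
	offset_ret = dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
	if (offset_ret)
		goto out_free_sg;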
On Fri, Dec 11, 2020 at 07:21:38PM +0530, SelvaKumar S wrote:
> +int blk_copy_emulate(struct block_device *bdev, struct blk_copy_payload
> *payload,
> + gfp_t gfp_mask)
> +{
> + struct request_queue *q = bdev_get_queue(bdev);
> + struct bio *bio;
> + void *buf = NULL;
> +
On Wed, Dec 09, 2020 at 06:32:27PM -0300, Enzo Matsumiya wrote:
> +void nvme_hwmon_exit(struct nvme_ctrl *ctrl)
> +{
> + hwmon_device_unregister(ctrl->dev);
> +}
The hwmon registration uses the devm_ version, so don't we need to use
the devm_hwmon_device_unregister() here?
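For illustration, a sketch of the devm-managed registration pattern in
question: with the devm_ variant the hwmon device is torn down automatically
when its parent device is released, which is why pairing it with a plain
hwmon_device_unregister() looks suspect. The chip-info symbol here is
hypothetical.

static int nvme_hwmon_init_sketch(struct nvme_ctrl *ctrl)
{
	struct device *hwmon;

	/* Lifetime is tied to ctrl->dev; no explicit unregister needed. */
	hwmon = devm_hwmon_device_register_with_info(ctrl->dev, "nvme",
			ctrl, &nvme_hwmon_chip_info /* hypothetical */, NULL);
	return PTR_ERR_OR_ZERO(hwmon);
}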
On Fri, Dec 04, 2020 at 11:25:12AM +, Damien Le Moal wrote:
> On 2020/12/04 20:02, SelvaKumar S wrote:
> > This patchset tries to add support for TP4065a ("Simple Copy Command"),
> > v2020.05.04 ("Ratified")
> >
> > The Specification can be found in following link.
> > https://nvmexpress.org/w
On Tue, Dec 01, 2020 at 11:09:49AM +0530, SelvaKumar S wrote:
> +static void nvme_config_copy(struct gendisk *disk, struct nvme_ns *ns,
> +struct nvme_id_ns *id)
> +{
> + struct nvme_ctrl *ctrl = ns->ctrl;
> + struct request_queue *queue = disk->queue;
>
On Fri, Nov 20, 2020 at 09:02:43AM +0100, Christoph Hellwig wrote:
> On Thu, Nov 19, 2020 at 05:27:37PM -0800, Tom Roeder wrote:
> > This patch changes the NVMe PCI implementation to cache host_mem_descs
> > in non-DMA memory instead of depending on descriptors stored in DMA
> > memory. This change
On Thu, Nov 19, 2020 at 10:59:19AM -0800, Tom Roeder wrote:
> This patch changes the NVMe PCI implementation to cache host_mem_descs
> in non-DMA memory instead of depending on descriptors stored in DMA
> memory. This change is needed under the malicious-hypervisor threat
> model assumed by the AMD
On Thu, Nov 12, 2020 at 04:45:35PM +0100, Niklas Schnelle wrote:
> You got to get something wrong, I hope in this case it's just the subject
> of the cover letter :D
I suppose the change logs could be worded a little better :)
> Thanks for the review, I appreciate it. Might be getting ahead of
>
ode either but I might have missed something of course.
I don't think you missed anything, and the series looks like a
reasonable cleanup. I suspect the code was left over from a time when we
didn't allocate the possible queues up-front.
Reviewed-by: Keith Busch
On Fri, Nov 06, 2020 at 10:00:36AM -0700, Logan Gunthorpe wrote:
> Allow userspace to obtain CMB memory by mmaping the controller's
> char device. The mmap call allocates and returns a hunk of CMB memory,
> (the offset is ignored) so userspace does not have control over the
> address within the CMB
On Thu, Oct 29, 2020 at 11:33:06AM +0900, Keith Busch wrote:
> On Thu, Oct 29, 2020 at 02:20:27AM +, Gloria Tsai wrote:
> > Corrected the description of this bug: the SSD will not do GC after
> > receiving a shutdown cmd.
> > Do GC before shutdown -> delete IO Q -> sh
On Thu, Oct 29, 2020 at 02:20:27AM +, Gloria Tsai wrote:
> Corrected the description of this bug: the SSD will not do GC after receiving
> a shutdown cmd.
> Do GC before shutdown -> delete IO Q -> shutdown from host -> breakup GC ->
> D3hot -> enter PS4 -> have a chance swap block -> use wrong
On Wed, Oct 28, 2020 at 12:54:38AM +0900, Jongpil Jung wrote:
> suspend.
>
> When the NVMe device receives D3hot from the host, the NVMe firmware will
> do garbage collection. While the device is doing garbage collection, the
> firmware has a chance of going to an incorrect address.
> In that case, the NVMe storage device goes to
On Wed, Oct 14, 2020 at 11:36:50AM +0800, zhenwei pi wrote:
> Fixes:
> Don't run keep alive work with zero kato.
"Fixes" tags need to have a git commit id followed by the commit
subject. I can't find any commit with that subject, though.
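The expected format, with a placeholder hash since the referenced commit could
not be found:

    Fixes: 123456789abc ("nvme: subject of the commit being fixed")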
The commit subject is too long. We should really try to keep these to
50 characters or less.
nvme-pci: fix NULL req in completion handler
Otherwise, looks fine.
Reviewed-by: Keith Busch
s queue_depth is a valid point to mention as well. The
driver's current indirect check is not necessarily in sync with the
actual tagset.
> Thanks
>
> -Original Message-
> From: Keith Busch [mailto:kbu...@kernel.org]
> Sent: Monday, September 21, 2020 11:0
On Mon, Sep 21, 2020 at 10:10:52AM +0800, Xianting Tian wrote:
> @@ -940,13 +940,6 @@ static inline void nvme_handle_cqe(struct nvme_queue
> *nvmeq, u16 idx)
> struct nvme_completion *cqe = &nvmeq->cqes[idx];
> struct request *req;
>
> - if (unlikely(cqe->command_id >= nvmeq->q_d
On Fri, Sep 18, 2020 at 06:44:20PM +0800, Xianting Tian wrote:
> @@ -940,7 +940,9 @@ static inline void nvme_handle_cqe(struct nvme_queue
> *nvmeq, u16 idx)
> struct nvme_completion *cqe = &nvmeq->cqes[idx];
> struct request *req;
>
> - if (unlikely(cqe->command_id >= nvmeq->q_de
On Thu, Sep 17, 2020 at 11:32:12PM -0400, Tong Zhang wrote:
> Please correct me if I am wrong.
> After a bit more digging I found out that it is indeed the corrupted
> command_id that is causing this problem. Although the tag and command_id
> range is checked like you said, the elements in rqs cannot be
On Thu, Sep 17, 2020 at 12:56:59PM -0400, Tong Zhang wrote:
> The command_id in the CQE is writable by the NVMe controller; the driver
> should check its sanity before using it.
We already do that.
On Thu, Sep 17, 2020 at 11:22:54AM -0400, Tong Zhang wrote:
> On Thu, Sep 17, 2020 at 4:30 AM Christoph Hellwig wrote:
> >
> > On Wed, Sep 16, 2020 at 11:37:00AM -0400, Tong Zhang wrote:
> > > the irq might already have been released before the reset work can run
> >
> > If it is we have a problem with the
On Wed, Sep 16, 2020 at 11:36:49AM -0400, Tong Zhang wrote:
> @@ -960,6 +960,8 @@ static inline void nvme_handle_cqe(struct nvme_queue
> *nvmeq, u16 idx)
> }
>
> req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), cqe->command_id);
> + if (!req)
> + return;
As I mention
On Wed, Sep 09, 2020 at 01:06:39PM -0700, Joe Perches wrote:
> diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
> index eea0f453cfb6..8aac5bc60f4c 100644
> --- a/crypto/tcrypt.c
> +++ b/crypto/tcrypt.c
> @@ -2464,7 +2464,7 @@ static int do_test(const char *alg, u32 type, u32 mask,
> int m, u32 num_m
On Wed, Sep 02, 2020 at 01:48:19PM -0600, David Fugate wrote:
> Over the years, I've been forwarded numerous emails from VMD customers
> praising its ability to prevent Linux kernel panics upon hot-removals
> and inserts of U.2 NVMe drives.
The same nvme and pcie hotplug drivers are used with or
On Mon, Aug 31, 2020 at 06:55:53PM +0800, Xianting Tian wrote:
> As blk_mq_tag_to_rq() may return NULL, it should be checked for NULL before
> use to prevent a crash.
It may return NULL if the command id exceeds the number of tags. We
already have a check for a valid command id val
On Mon, Aug 31, 2020 at 11:29:03AM -0400, Sasha Levin wrote:
> From: Keith Busch
>
> [ Upstream commit c41ad98bebb8f4f0335b3c50dbb7583a6149dce4 ]
>
> Zoned block devices reuse the chunk_sectors queue limit to define zone
> boundaries. If such a device happens to also
lowing is console output.
Thanks, this looks good to me.
Reviewed-by: Keith Busch
On Fri, Aug 14, 2020 at 12:11:56PM -0400, Tong Zhang wrote:
> On Fri, Aug 14, 2020 at 11:42 AM Keith Busch wrote:
> > > > On Fri, Aug 14, 2020 at 03:14:31AM -0400, Tong Zhang wrote:
> > > > > diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> > &g
ever demote for memcg reclaim
> mm/numa: new reclaim mode to enable reclaim-based migration
>
> Keith Busch (2):
> mm/migrate: Defer allocating new page until needed
> mm/vmscan: Consider anonymous pages without swap
>
> Yang Shi (1):
>
On Mon, Aug 24, 2020 at 01:00:11PM -0700, Sagi Grimberg wrote:
> > The way 'spin_lock()' and 'spin_lock_irqsave()' are used is not consistent
> > in this function.
> >
> > Use 'spin_lock_irqsave()' also here, as there is no guarantee that
> > interrupts are disabled at that point, according to
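A minimal sketch of the point being made: if a lock can be taken both where
interrupts may be enabled and from an interrupt handler, the irqsave variant
must be used consistently (lock name illustrative).

	unsigned long flags;

	/* Safe in any context: saves and disables local interrupts. */
	spin_lock_irqsave(&queue->state_lock, flags);
	/* ... critical section also reachable from hard-irq context ... */
	spin_unlock_irqrestore(&queue->state_lock, flags);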
On Wed, Aug 19, 2020 at 05:43:29PM -0600, David Fugate wrote:
> There were queries? My key takeaways were a maintainer NAK followed by
> instructions to make the Intel drive align with the driver by
> implementing NOIOB. While I disagree with the rejection as it appeared
> to be based entirely on p
On Wed, Aug 19, 2020 at 03:54:20PM -0600, David Fugate wrote:
> On Wed, 2020-08-19 at 13:25 -0600, Jens Axboe wrote:
> > It's not required, the driver will function quite fine without it. If
> > you
> > want to use ZNS it's required.
>
> The NVMe spec does not require Zone Append for ZNS; a *vend
On Wed, Aug 19, 2020 at 01:11:58PM -0600, David Fugate wrote:
> Intel does not support making *optional* NVMe spec features *required*
> by the NVMe driver.
This is inaccurate. As another example, the spec optionally allows a
zone size to be a power of 2, but Linux requires a power of 2 if you
wa
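A sketch of the kind of constraint meant here, assuming a validation along
these lines in the zoned-namespace setup path (names illustrative, not the
exact driver code):

	/* is_power_of_2() is from <linux/log2.h>. */
	if (!is_power_of_2(ns->zsze)) {
		dev_warn(ns->ctrl->device,
			 "zone size %llu is not a power of 2\n", ns->zsze);
		return -ENODEV;
	}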
On Tue, Aug 18, 2020 at 07:29:12PM +0200, Javier Gonzalez wrote:
> On 18.08.2020 09:58, Keith Busch wrote:
> > On Tue, Aug 18, 2020 at 11:50:33AM +0200, Javier Gonzalez wrote:
> > > a number of customers are requiring the use of normal writes, which we
> > > want to supp
On Tue, Aug 18, 2020 at 11:50:33AM +0200, Javier Gonzalez wrote:
> a number of customers are requiring the use of normal writes, which we
> want to support.
A device that supports append is completely usable for those customers,
too. There's no need to create divergence in this driver.
On Mon, Aug 17, 2020 at 03:50:11PM +0200, Ahmed S. Darwish wrote:
> Hello,
>
> Below v5.9-rc1 commit reliably breaks my boot on a Thinkpad e480
> laptop. PCI nvme detection fails, and the kernel becomes not able
> anymore to find the rootfs / parse "root=".
>
> Bisecting v5.8=>v5.9-rc1 blames tha
: 61f3b8963097 ("nvme-pci: use unsigned for io queue depth")
> Signed-off-by: John Garry
Looks good to me.
Reviewed-by: Keith Busch
On Fri, Aug 14, 2020 at 11:37:20AM -0400, Tong Zhang wrote:
> On Fri, Aug 14, 2020 at 11:04 AM Keith Busch wrote:
> >
> > On Fri, Aug 14, 2020 at 03:14:31AM -0400, Tong Zhang wrote:
> > > diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> > > index
On Fri, Aug 14, 2020 at 03:14:31AM -0400, Tong Zhang wrote:
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index ba725ae47305..c4f1ce0ee1e3 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -1249,8 +1249,8 @@ static enum blk_eh_timer_return nvme_timeout
There's an unrelated whitespace change in nvme_init_identify().
Otherwise, looks fine.
Reviewed-by: Keith Busch
On Wed, Aug 12, 2020 at 03:01:19PM -0600, Logan Gunthorpe wrote:
> @@ -2971,15 +2971,16 @@ int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8
> log_page, u8 lsp, u8 csi,
> static struct nvme_cel *nvme_find_cel(struct nvme_ctrl *ctrl, u8 csi)
> {
> struct nvme_cel *cel, *ret = NULL;
> +
On Wed, Jul 29, 2020 at 07:29:08PM +, Lach wrote:
> Hello
>
> I caught a regression in the nvme driver, which shows itself on some
> controllers (In my case, at 126h:2263)
Fix is staged for the next 5.8 pull:
https://git.kernel.dk/cgit/linux-block/commit/?h=block-5.8&id=5bedd3afee8eb01ccd
On Fri, Jul 24, 2020 at 11:25:11AM -0600, Logan Gunthorpe wrote:
> This is v16 of the passthru patchset which makes a bunch of cleanups as
> suggested by Christoph.
Thanks, looks great. Just the comment on 6/9, which probably isn't super
important anyway.
Reviewed-by: Keith Busch
On Fri, Jul 24, 2020 at 11:25:17AM -0600, Logan Gunthorpe wrote:
> + /*
> + * The passthru NVMe driver may have a limit on the number of segments
> + * which depends on the host's memory fragmentation. To solve this,
> + * ensure mdts is limited to the pages equal to the number
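A hedged sketch of the capping that comment describes: bound the advertised
transfer size by whichever host limit binds first, then derive MDTS from it.
The surrounding names (pctrl, id, ctrl) follow the passthru code under review
and are indicative only.

	u32 max_hw_sectors;
	int page_shift;

	/* One page per segment, expressed in 512-byte sectors. */
	max_hw_sectors = min_not_zero(pctrl->max_segments << (PAGE_SHIFT - 9),
				      pctrl->max_hw_sectors);

	/* MDTS is a power of two in units of the controller page size. */
	page_shift = NVME_CAP_MPSMIN(ctrl->cap) + 12;
	id->mdts = ilog2(max_hw_sectors) + 9 - page_shift;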
On Mon, Jul 20, 2020 at 04:28:26PM -0700, Sagi Grimberg wrote:
> On 7/20/20 4:17 PM, Keith Busch wrote:
> > On Mon, Jul 20, 2020 at 05:01:19PM -0600, Logan Gunthorpe wrote:
> > > On 2020-07-20 4:35 p.m., Sagi Grimberg wrote:
> > >
> > > > passthr
On Mon, Jul 20, 2020 at 05:01:19PM -0600, Logan Gunthorpe wrote:
> On 2020-07-20 4:35 p.m., Sagi Grimberg wrote:
>
> > passthru commands are in essence REQ_OP_DRV_IN/REQ_OP_DRV_OUT, which
> > means that the driver shouldn't need the ns at all. So if you have a
> > dedicated request queue (mapped to
On Fri, Jul 03, 2020 at 10:49:24AM +0800, Baolin Wang wrote:
> static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
> @@ -844,7 +844,7 @@ static blk_status_t nvme_map_metadata(struct nvme_dev
> *dev, struct request *req,
> if (dma_mapping_error(dev->dev, iod->meta_dm
On Tue, Jun 30, 2020 at 04:01:45PM +0200, Maximilian Heyne wrote:
> On 6/30/20 3:36 PM, Christoph Hellwig wrote:
> > And actually - 1.0 did not have the concept of a subsystem. So having
> > a duplicate serial number for a 1.0 controller actually is a pretty
> > nasty bug. Can you point me to thi
On Sun, Jun 28, 2020 at 06:34:46PM +0800, Baolin Wang wrote:
> Move the sg table allocation and freeing into init_request() and
> exit_request(), instead of allocating the sg table when queuing requests,
> which can benefit IO performance.
If you want to pre-allocate something per-request, you ca
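A sketch of the blk-mq hooks being suggested, under hypothetical demo_* names:
pre-allocating in .init_request() ties the allocation to the tag set's
lifetime instead of the I/O hot path.

#define DEMO_MAX_SEGS 127	/* illustrative segment budget */

struct demo_iod {
	struct scatterlist *sg;
};

static int demo_init_request(struct blk_mq_tag_set *set, struct request *rq,
			     unsigned int hctx_idx, unsigned int numa_node)
{
	struct demo_iod *iod = blk_mq_rq_to_pdu(rq);

	iod->sg = kmalloc_array_node(DEMO_MAX_SEGS, sizeof(*iod->sg),
				     GFP_KERNEL, numa_node);
	return iod->sg ? 0 : -ENOMEM;
}

static void demo_exit_request(struct blk_mq_tag_set *set, struct request *rq,
			      unsigned int hctx_idx)
{
	struct demo_iod *iod = blk_mq_rq_to_pdu(rq);

	kfree(iod->sg);
}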
On Tue, Jun 23, 2020 at 06:27:51PM +0200, Christoph Hellwig wrote:
> On Tue, Jun 23, 2020 at 09:24:33PM +0800, Baolin Wang wrote:
> > Introduce a new capability macro to indicate if the controller
> > supports the memory buffer or not, instead of reading the
> > NVME_REG_CMBSZ register.
>
> This i
On Tue, Jun 23, 2020 at 09:24:32PM +0800, Baolin Wang wrote:
> +void nvme_set_arbitration_burst(struct nvme_ctrl *ctrl)
> +{
> + u32 result;
> + int status;
> +
> + if (!ctrl->rab)
> + return;
> +
> + /*
> + * The Arbitration Burst setting indicates the maximum numb
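For reference, a sketch of how such a helper could push the value with the
driver's existing Set Features plumbing, continuing the helper quoted above
(which already declares 'result' and 'status'); NVME_FEAT_ARBITRATION is
feature 01h and the Arbitration Burst field occupies the low three bits of
dword11. The body of the helper in the patch is not shown here, so this is
illustrative.

	status = nvme_set_features(ctrl, NVME_FEAT_ARBITRATION,
				   ctrl->rab & 0x7, NULL, 0, &result);
	if (status)
		dev_warn(ctrl->device,
			 "failed to set arbitration burst (%d)\n", status);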
On Wed, Jun 24, 2020 at 09:34:08AM +0800, Baolin Wang wrote:
> OK, I understood your concern. Now we will select the RR arbitration as
> default
> in nvme_enable_ctrl(), but for some cases, we will not set the arbitration
> burst
> values from userspace, and we still want to use the default arbit
On Tue, Jun 23, 2020 at 10:39:01AM -0700, Sagi Grimberg wrote:
>
> > > From the NVMe spec, "In order to make efficient use of the non-volatile
> > > memory, it is often advantageous to execute multiple commands from a
> > > Submission Queue in parallel. For Submission Queues that are using
> > >
On Tue, Jun 23, 2020 at 09:24:32PM +0800, Baolin Wang wrote:
> From the NVMe spec, "In order to make efficient use of the non-volatile
> memory, it is often advantageous to execute multiple commands from a
> Submission Queue in parallel. For Submission Queues that are using
> weighted round robin w
Fixes: fa46c6fb5d61 ("nvme/pci: move cqe check after device shutdown")
Looks good to me.
Reviewed-by: Keith Busch
On Wed, Apr 29, 2020 at 05:20:09AM +, Williams, Dan J wrote:
> On Tue, 2020-04-28 at 08:27 -0700, David E. Box wrote:
> > On Tue, 2020-04-28 at 16:22 +0200, Christoph Hellwig wrote:
> > > On Tue, Apr 28, 2020 at 07:09:59AM -0700, David E. Box wrote:
> > > > > I'm not sure who came up with the i
On Mon, Oct 07, 2019 at 01:50:11PM -0400, Tyler Ramer wrote:
> Shut down the controller when nvme_remove_dead_controller is
> reached.
>
> If nvme_remove_dead_controller() is called, the controller won't
> be coming back online, so we should shut it down rather than just
> disabling.
>
> Remove n
On Mon, Oct 07, 2019 at 11:13:12AM -0400, Tyler Ramer wrote:
> > Setting the shutdown to true is
> > usually just to get the queues flushed, but the nvme_kill_queues() that
> > we call accomplishes the same thing.
>
> The intention of this patch was to clean up another location where
> nvme_dev_di
On Fri, Oct 04, 2019 at 11:36:42AM -0400, Tyler Ramer wrote:
> Here's a failure we had which represents the issue the patch is
> intended to solve:
>
> Aug 26 15:00:56 testhost kernel: nvme nvme4: async event result 00010300
> Aug 26 15:01:27 testhost kernel: nvme nvme4: controller is down; will
>
On Tue, Sep 24, 2019 at 11:05:36AM -0700, Sagi Grimberg wrote:
> Looks fine to me,
>
> Reviewed-by: Sagi Grimberg
>
> Keith, Christoph?
Looks good to me, too.
Reviewed-by: Keith Busch
> Signed-off-by: Dan Carpenter
Thanks, patch looks good.
Reviewed-by: Keith Busch
tures has been
> called. This has been proven to resolve the issue across a 5000 sample
> test on previously failing disk/system combinations.
>
> Signed-off-by: Mario Limonciello
This looks good. It clashes with something I posted yesterday, but
I'll rebase after this one.
Reviewed-by: Keith Busch
On Thu, Sep 19, 2019 at 01:47:50PM +, Bharat Kumar Gogada wrote:
> Hi All,
>
> We are testing NVMe cards on ARM64 platform, the card uses MSI-X interrupts.
> We are hitting the following case in drivers/nvme/host/pci.c
> /*
> * Did we miss an interrupt?
> */
> if (__nvme_
On Wed, Sep 11, 2019 at 06:42:33PM -0500, Mario Limonciello wrote:
> ---
> drivers/nvme/host/pci.c | 13 +++--
> 1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 732d5b6..9b3fed4 100644
> --- a/drivers/nvme/host/pci
On Wed, Sep 18, 2019 at 03:26:11PM +0200, Christoph Hellwig wrote:
> Even if we had a use case for that the bounce buffer is just too ugly
> to live. And I'm really sick and tired of Intel wasting our time for
> their out of tree monster given that they haven't even tried helping
> to improve the
On Wed, Sep 11, 2019 at 06:42:33PM -0500, Mario Limonciello wrote:
> The action of saving the PCI state will cause numerous PCI configuration
> space reads which, depending upon the vendor implementation, may cause
> the drive to exit the deepest NVMe state.
>
> In these cases ASPM will typically re
On Mon, Sep 16, 2019 at 12:13:24PM +, Baldyga, Robert wrote:
> Ok, fair enough. We want to keep things hidden behind certain layers,
> and that's definitely a good thing. But there is a problem with these
> layers - they do not expose all the features. For example AFAIK there
> is no clear way
On Sat, Sep 07, 2019 at 06:19:21AM +0800, Ming Lei wrote:
> On Fri, Sep 06, 2019 at 05:50:49PM +, Long Li wrote:
> > >Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
> > >
> > >Why are all 8 nvmes sharing the same CPU for interrupt handling?
> > >Shouldn't matrix_find_
On Fri, Sep 06, 2019 at 11:30:57AM -0700, Sagi Grimberg wrote:
>
> >
> > Ok, so the real problem is per-cpu bounded tasks.
> >
> > I share Thomas opinion about a NAPI like approach.
>
> We already have that, its irq_poll, but it seems that for this
> use-case, we get lower performance for some
On Fri, Sep 06, 2019 at 09:48:21AM +0800, Ming Lei wrote:
> When one IRQ flood happens on one CPU:
>
> 1) softirq handling on this CPU can't make progress
>
> 2) kernel thread bound to this CPU can't make progress
>
> For example, network may require softirq to xmit packets, or another irq
> thr
covered in that
subsystem.
Reviewed-by: Logan Gunthorpe
Signed-off-by: Keith Busch
---
v1 -> v2:
Changelog: reduce sensationalism, fix spelling
drivers/nvme/host/core.c | 21 ++---
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/drivers/nvme/host/core.c b/driv
the first controller discovered in
that subsystem.
Reviewed-by: Logan Gunthorpe
Signed-off-by: Keith Busch
---
drivers/nvme/host/core.c | 21 ++---
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 14c0bfb55615..8a
On Wed, Sep 04, 2019 at 11:01:22AM -0600, Logan Gunthorpe wrote:
> Oh, yes that's simpler than the struct/kref method and looks like it
> will accomplish the same thing. I did some brief testing with it and it
> seems to work for me (though I don't have any subsystems with multiple
> controllers).
On Wed, Sep 04, 2019 at 10:07:12AM -0600, Logan Gunthorpe wrote:
> Yes, I agree, we can't solve the mismatch problem in the general case:
> with sequences of hot plug events there will always be a case that
> mismatches. I just think we can do better in the simple common default case.
This may be
On Wed, Sep 04, 2019 at 05:42:15PM +0200, Christoph Hellwig wrote:
> On Wed, Sep 04, 2019 at 08:44:27AM -0600, Keith Busch wrote:
> > Let me step through an example:
> >
> > Ctrl A gets instance 0.
> >
> > Its subsystem gets the same instance, and takes ref
On Wed, Sep 04, 2019 at 08:05:58AM +0200, Christoph Hellwig wrote:
> On Tue, Sep 03, 2019 at 10:46:20AM -0600, Keith Busch wrote:
> > Could we possibly make /dev/nvmeX be a subsystem handle without causing
> > trouble for anyone? This would essentially be the same thing as today
&
On Tue, Sep 03, 2019 at 10:08:01AM -0600, Logan Gunthorpe wrote:
> On 2019-08-31 9:29 a.m., Keith Busch wrote:
> > On Fri, Aug 30, 2019 at 06:01:39PM -0600, Logan Gunthorpe wrote:
> >> To fix this, assign the subsystem's instance based on the instance
> >> number o
On Fri, Aug 30, 2019 at 06:01:39PM -0600, Logan Gunthorpe wrote:
> To fix this, assign the subsystem's instance based on the instance
> number of the controller's instance that first created it. There should
> always be fewer subsystems than controllers so there should not be a need
> to create extra
On Tue, Aug 27, 2019 at 08:34:21AM -0600, Keith Busch wrote:
> I think you should probably just have pci_irq_get_affinity() take a flags
> argument, or make a new function like __pci_irq_get_affinity() that
> pci_irq_get_affinity() can call with a default flag.
Sorry, copied the wrong
nterrupt.
>
> Cc: Long Li
> Cc: Ingo Molnar ,
> Cc: Peter Zijlstra
> Cc: Keith Busch
> Cc: Jens Axboe
> Cc: Christoph Hellwig
> Cc: Sagi Grimberg
> Cc: John Garry
> Cc: Thomas Gleixner
> Cc: Hannes Reinecke
> Cc: linux-n...@lists.infradead.org
>
On Tue, Aug 27, 2019 at 05:09:27PM +0800, Ming Lei wrote:
> On Tue, Aug 27, 2019 at 11:06:20AM +0200, Johannes Thumshirn wrote:
> > On 27/08/2019 10:53, Ming Lei wrote:
> > [...]
> > > + char *devname;
> > > + const struct cpumask *mask;
> > > + unsigned long irqflags = IRQF
RROR_RESPONSE definition
> PCI / PM: Decode D3cold power state correctly
> PCI / PM: Return error when changing power state from D3cold
Series looks good to me.
Reviewed-by: Keith Busch