type_ptr += type_ptr[3] + 4; > here
}
Then type_ptr went out of the bounds of the buffer.
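For illustration only, here is a minimal user-space sketch of the pattern described above: a loop that advances through length-prefixed records with "type_ptr += type_ptr[3] + 4" has to be bounded by the end of the buffer, otherwise type_ptr walks past it exactly as seen in the crash. The record layout and names below are assumptions made for the sketch, not the lpfc code.

#include <stdint.h>
#include <stdio.h>

static void walk_records(const uint8_t *buf, size_t len)
{
	const uint8_t *type_ptr = buf;
	const uint8_t *end = buf + len;

	/* require a readable 4-byte header AND a payload that fits in the buffer */
	while (type_ptr + 4 <= end && type_ptr + 4 + type_ptr[3] <= end) {
		printf("record type 0x%02x, payload %u bytes\n",
		       (unsigned)type_ptr[0], (unsigned)type_ptr[3]);
		type_ptr += type_ptr[3] + 4;	/* the same advance as above */
	}
}

int main(void)
{
	/* two well-formed records: type 0x01 with 2 payload bytes, type 0x02 with 1 */
	uint8_t buf[] = { 0x01, 0, 0, 2, 0xaa, 0xbb,
			  0x02, 0, 0, 1, 0xcc };

	walk_records(buf, sizeof(buf));
	return 0;
}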
Thanks
Jianchao
On 3/14/19 11:19 AM, jianchao.wang wrote:
> Dear all
>
> When our customer probed the lpfc devices, they encountered odd memory
> corruption issues,
> and we get
On 2/15/19 11:14 AM, Ming Lei wrote:
> On Fri, Feb 15, 2019 at 10:34:39AM +0800, jianchao.wang wrote:
>> Hi Ming
>>
>> Thanks for your kindly response.
>>
>> On 2/15/19 10:00 AM, Ming Lei wrote:
>>> On Tue, Feb 12, 2019 at 09:56:25AM +0800,
Hi Ming
Thanks for your kindly response.
On 2/15/19 10:00 AM, Ming Lei wrote:
> On Tue, Feb 12, 2019 at 09:56:25AM +0800, Jianchao Wang wrote:
>> When requeue, if RQF_DONTPREP, rq has contained some driver
>> specific data, so insert it to hctx dispatch list to avoid any
>> merge. Take scsi as ex
Hi Jens
Thanks for your kindly response.
On 2/12/19 7:20 AM, Jens Axboe wrote:
> On 2/11/19 4:15 PM, Jens Axboe wrote:
>> On 2/11/19 8:59 AM, Jens Axboe wrote:
>>> On 2/10/19 10:41 PM, Jianchao Wang wrote:
When requeueing, if RQF_DONTPREP is set, the rq already contains some driver-specific
data, so insert it into the hctx dispatch list to avoid any merge.
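As a user-space model only (not the blk-mq implementation; the struct, flag value and helper names are made up for the sketch), the requeue decision described above looks roughly like this: a request that is already prepared keeps its driver-private data and goes to the hctx dispatch list, where no merging is attempted.

#include <stdio.h>

#define RQF_DONTPREP	(1u << 0)	/* illustrative value, not the kernel's */

struct request {
	unsigned int rq_flags;
};

static void insert_to_dispatch(struct request *rq)
{
	(void)rq;
	printf("-> hctx dispatch list, no merge attempted\n");
}

static void insert_to_scheduler(struct request *rq)
{
	(void)rq;
	printf("-> scheduler/ctx lists, may be merged\n");
}

static void requeue(struct request *rq)
{
	if (rq->rq_flags & RQF_DONTPREP)
		insert_to_dispatch(rq);		/* keep the prepared request intact */
	else
		insert_to_scheduler(rq);
}

int main(void)
{
	struct request prepared = { .rq_flags = RQF_DONTPREP };
	struct request fresh = { .rq_flags = 0 };

	requeue(&prepared);
	requeue(&fresh);
	return 0;
}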
On 12/31/18 12:27 AM, Tariq Toukan wrote:
>
>
> On 1/27/2018 2:41 PM, jianchao.wang wrote:
>> Hi Tariq
>>
>> Thanks for your kindly response.
>> That's really appreciated.
>>
>> On 01/25/2018 05:54 PM, Tariq Toukan wrote:
>>>
>&
Ping ?
Thanks
Jianchao
On 12/10/18 11:01 AM, Jianchao Wang wrote:
> Hi Jens
>
> Please consider this patchset for 4.21.
>
> It refactors the code of issue request directly to unify the interface
> and make the code clearer and more readable.
>
> The 1st patch refactors the code of issue reques
On 12/7/18 11:47 AM, Jens Axboe wrote:
> On 12/6/18 8:46 PM, jianchao.wang wrote:
>>
>>
>> On 12/7/18 11:42 AM, Jens Axboe wrote:
>>> On 12/6/18 8:41 PM, jianchao.wang wrote:
>>>>
>>>>
>>>> On 12/7/18 11:34 AM, Jens Axboe wrote:
On 12/7/18 11:42 AM, Jens Axboe wrote:
> On 12/6/18 8:41 PM, jianchao.wang wrote:
>>
>>
>> On 12/7/18 11:34 AM, Jens Axboe wrote:
>>> On 12/6/18 8:32 PM, Jens Axboe wrote:
>>>> On 12/6/18 8:26 PM, jianchao.wang wrote:
>>>>>
>>>&
On 12/7/18 11:34 AM, Jens Axboe wrote:
> On 12/6/18 8:32 PM, Jens Axboe wrote:
>> On 12/6/18 8:26 PM, jianchao.wang wrote:
>>>
>>>
>>> On 12/7/18 11:16 AM, Jens Axboe wrote:
>>>> On 12/6/18 8:09 PM, Jianchao Wang wrote:
>>>>
On 12/7/18 11:16 AM, Jens Axboe wrote:
> On 12/6/18 8:09 PM, Jianchao Wang wrote:
>> Hi Jens
>>
>> Please consider this patchset for 4.21.
>>
>> It refactors the code of issue request directly to unify the interface
>> and make the code clearer and more readable.
>>
>> This patch set is rebased
On 12/6/18 11:19 PM, Jens Axboe wrote:
> On 12/5/18 8:32 PM, Jianchao Wang wrote:
>> It is not necessary to issue request directly with bypass 'true'
>> in blk_mq_sched_insert_requests and handle the non-issued requests
>> itself. Just set bypass to 'false' and let blk_mq_try_issue_directly
>> h
Hi Ming
On 10/29/18 10:49 AM, Ming Lei wrote:
> On Sat, Oct 27, 2018 at 12:01:09AM +0800, Jianchao Wang wrote:
>> Merge blk_mq_try_issue_directly and __blk_mq_try_issue_directly
>> into one interface which is able to handle the return value from
>> .queue_rq callback. Due to we can only issue dire
Hi Tejun
Thanks for your kindly response.
On 09/21/2018 04:53 AM, Tejun Heo wrote:
> Hello,
>
> On Thu, Sep 20, 2018 at 06:18:21PM +0800, Jianchao Wang wrote:
>> -static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
>> +static inline void __percpu_ref_get_many(str
Hi Jens
On 08/25/2018 11:41 PM, Jens Axboe wrote:
> do {
> - set_current_state(TASK_UNINTERRUPTIBLE);
> + if (test_bit(0, &data.flags))
> + break;
>
> - if (!has_sleeper && rq_wait_inc_below(rqw, get_limit(rwb, rw)))
> + W
Hi Ming
On 07/31/2018 12:58 PM, Ming Lei wrote:
> On Tue, Jul 31, 2018 at 12:02:15PM +0800, Jianchao Wang wrote:
>> Currently, we will always set SCHED_RESTART whenever there are
>> requests in hctx->dispatch, then when request is completed and
>> freed the hctx queues will be restarted to avoid I
Hi Keith
On 06/20/2018 12:39 AM, Keith Busch wrote:
> On Tue, Jun 19, 2018 at 04:30:50PM +0800, Jianchao Wang wrote:
>> There is race between nvme_remove and nvme_reset_work that can
>> lead to io hang.
>>
>> nvme_remove                          nvme_reset_work
>> -> change state to DELETING
>>
On 06/20/2018 09:35 AM, Bart Van Assche wrote:
> On Wed, 2018-06-20 at 09:28 +0800, jianchao.wang wrote:
>> Hi Bart
>>
>> Thanks for your kindly response.
>>
>> On 06/19/2018 11:18 PM, Bart Van Assche wrote:
>>> On Tue, 2018-06-19 at 15:00 +0800, J
k_complete_request.
However, the scsi recovery context could clear ATOM_COMPLETE and requeue the
request before the irq context gets it.
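A minimal sketch of the ownership rule involved, assuming a user-space model rather than the block layer itself: whichever context wins an atomic exchange on the completion bit is the only one allowed to finish the request, and the loser must leave it alone. The scenario above is exactly where that exclusivity breaks down, since the recovery path clears the bit and requeues while the irq path still believes it owns completion.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct request {
	atomic_bool complete;
};

/* returns true only for the single context allowed to finish the request */
static bool mark_complete(struct request *rq)
{
	return !atomic_exchange(&rq->complete, true);
}

int main(void)
{
	struct request rq = { .complete = false };

	if (mark_complete(&rq))
		printf("irq context completes the request\n");

	if (mark_complete(&rq))
		printf("recovery context completes the request\n");	/* not reached */
	else
		printf("recovery context lost the race and must not touch the request\n");

	return 0;
}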
Thanks
Jianchao
>
> On 5/28/18, 6:11 PM, "jianchao.wang" wrote:
>
> Hi Himanshu
>
> do you need any other information ?
Hi Omar
Thanks for your kindly and detailed comment.
That's really appreciated. :)
On 05/30/2018 02:55 AM, Omar Sandoval wrote:
> On Wed, May 23, 2018 at 02:33:22PM +0800, Jianchao Wang wrote:
>> Currently, kyber is very unfriendly with merging. kyber depends
>> on ctx rq_list to do merging, howe
Hi Himanshu
do you need any other information ?
Thanks
Jianchao
On 05/25/2018 02:48 PM, jianchao.wang wrote:
> Hi Himanshu
>
> I'm afraid I cannot provide you the vmcore file, it is from our customer.
> If any information needed in the vmcore, I could provide with you.
&
llected for us to look at this in details.
>
> Can you provide me crash/vmlinux/modules for details analysis.
>
> Thanks,
> himanshu
>
> On 5/24/18, 6:49 AM, "Madhani, Himanshu" wrote:
>
>
> > On May 24, 2018, at 2:09 AM, jianchao.wan
at this issue.
>
> Thanks,
> Himanshu
>
>> -----Original Message-----
>> From: jianchao.wang [mailto:jianchao.w.w...@oracle.com]
>> Sent: Wednesday, May 23, 2018 6:51 PM
>> To: Dept-Eng QLA2xxx Upstream ; Madhani,
>> Himanshu ; jthumsh...@suse.de
>>
Would anyone please take a look at this ?
Thanks in advance
Jianchao
On 05/23/2018 11:55 AM, jianchao.wang wrote:
>
>
> Hi all
>
> Our customer met a panic triggered by BUG_ON in blk_finish_request.
> From the dmesg log, the BUG_ON was triggered after command abort o
Hi all
Our customer hit a panic triggered by the BUG_ON in blk_finish_request.
From the dmesg log, the BUG_ON was triggered after command abort occurred many times.
There is a race condition in the following scenario.
cpu A                                   cpu B
                                        kworker
Hi Jens and Holger
Thanks for your kindly response.
That's really appreciated.
I will post next version based on Jens' patch.
Thanks
Jianchao
On 05/23/2018 02:32 AM, Holger Hoffstätte wrote:
This looks great but prevents kyber from being built as module,
which is AFAIK supposed to work
Hi Omar
Thanks for your kindly response.
On 05/23/2018 04:02 AM, Omar Sandoval wrote:
> On Tue, May 22, 2018 at 10:48:29PM +0800, Jianchao Wang wrote:
>> Currently, kyber is very unfriendly with merging. kyber depends
>> on ctx rq_list to do merging, however, most of time, it will not
>> leave an
Hi Max
Thanks for your kindly review and suggestions on this.
On 05/16/2018 08:18 PM, Max Gurtovoy wrote:
> I don't know exactly what Christoph meant but IMO the best place to allocate
> it is in nvme_rdma_alloc_queue just before calling
>
> "set_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags);"
>
> then
Hi Sagi
On 05/09/2018 11:06 PM, Sagi Grimberg wrote:
> The correct fix would be to add a tag for stop_queue and call
> nvme_rdma_stop_queue() in all the failure cases after
> nvme_rdma_start_queue.
Would you please look at the V2 at the following link?
http://lists.infradead.org/pipermail/linux-nvme
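A rough user-space sketch of the idea quoted above, assuming a simple "live" state bit (in the spirit of the queue flag bits used elsewhere in this thread, not the actual nvme-rdma code): record that the queue was started, and make the stop path a no-op unless that bit is set, so every failure path after nvme_rdma_start_queue can call it safely.

#include <stdbool.h>
#include <stdio.h>

struct queue {
	bool live;	/* set once the queue has been started */
};

static void start_queue(struct queue *q)
{
	q->live = true;
	printf("queue started\n");
}

static void stop_queue(struct queue *q)
{
	if (!q->live)			/* never started, or already stopped */
		return;
	q->live = false;
	printf("queue stopped\n");
}

int main(void)
{
	struct queue q = { .live = false };

	start_queue(&q);
	stop_queue(&q);		/* first failure-path caller does the work */
	stop_queue(&q);		/* any later caller is now harmless */
	return 0;
}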
Hi Christoph
On 05/07/2018 08:27 PM, Christoph Hellwig wrote:
> On Fri, May 04, 2018 at 04:02:18PM +0800, Jianchao Wang wrote:
>> BUG: KASAN: double-free or invalid-free in nvme_rdma_free_queue+0xf6/0x110
>> [nvme_rdma]
>> Workqueue: nvme-reset-wq nvme_rdma_reset_ctrl_work [nvme_rdma]
>> Call Tra
Hi Max
On 04/27/2018 04:51 PM, jianchao.wang wrote:
> Hi Max
>
> On 04/26/2018 06:23 PM, Max Gurtovoy wrote:
>> Hi Jianchao,
>> I actually tried this scenario with real HW and was able to repro the hang.
>> Unfortunately, after applying your patch I got NULL deref:
>
> I'll add IsraelR's proposed fix to nvme-rdma that is currently on hold and see
> what happens.
> Nonetheless, I don't like the situation that the reset and delete flows can
> run concurrently.
>
> -Max.
>
> On 4/26/2018 11:27 AM, jianchao.wang wrote:
>> Hi Ma
Hi Tejun and Joseph
On 04/27/2018 02:32 AM, Tejun Heo wrote:
> Hello,
>
> On Tue, Apr 24, 2018 at 02:12:51PM +0200, Paolo Valente wrote:
>> +Tejun (I guess he might be interested in the results below)
>
> Our experiments didn't work out too well either. At this point, it
> isn't clear whether i
> blk_freeze_queue
This patch could also fix this issue.
Thanks
Jianchao
On 04/22/2018 11:00 PM, jianchao.wang wrote:
> Hi Max
>
> That's really appreciated!
> Here is my test script.
>
> loop_reset_controller.sh
> #!/bin/bash
> while true
> do
/22/2018 10:48 PM, Max Gurtovoy wrote:
>
>
> On 4/22/2018 5:25 PM, jianchao.wang wrote:
>> Hi Max
>>
>> No, I only tested it on PCIe one.
>> And sorry for that I didn't state that.
>
> Please send your exact test steps and we'll run it using RDMA transport
ax.
>
> On 4/22/2018 4:32 PM, jianchao.wang wrote:
>> Hi keith
>>
>> Would you please take a look at this patch.
>>
>> This issue could be reproduced easily with a driver bind/unbind loop,
>> a reset loop and a IO loop at the same time.
>>
>> Th
Hi keith
Would you please take a look at this patch?
This issue could be reproduced easily with a driver bind/unbind loop,
a reset loop, and an IO loop running at the same time.
Thanks
Jianchao
On 04/19/2018 04:29 PM, Jianchao Wang wrote:
> There is race between nvme_remove and nvme_reset_work that can
>
Hi Ming
Thanks for your kindly response.
On 04/18/2018 11:40 PM, Ming Lei wrote:
>> Regarding to this patchset, it is mainly to fix the dependency between
>> nvme_timeout and nvme_dev_disable, as your can see:
>> nvme_timeout will invoke nvme_dev_disable, and nvme_dev_disable have to
>> depend on
Hi Ming
On 04/17/2018 11:17 PM, Ming Lei wrote:
> Looks blktest(block/011) can trigger IO hang easily on NVMe PCI device,
> and all are related with nvme_dev_disable():
>
> 1) admin queue may be disabled by nvme_dev_disable() from timeout path
> during resetting, then reset can't move on
>
> 2)
Hi Martin
On 04/17/2018 08:10 PM, Martin Steigerwald wrote:
> For testing it I add it to 4.16.2 with the patches I have already?
You could try applying only this patch and testing it. :)
>
> - '[PATCH] blk-mq_Directly schedule q->timeout_work when aborting a
> request.mbox'
>
> - '[PATCH v2]
Hi Ming
Thanks for your kindly response.
On 04/16/2018 04:15 PM, Ming Lei wrote:
>> -if (!blk_mq_get_dispatch_budget(hctx))
>> +if (!blk_mq_get_dispatch_budget(hctx)) {
>> +blk_mq_sched_mark_restart_hctx(hctx);
> The RESTART flag still may not take into
Would anyone please review this?
Thanks in advance
Jianchao
On 04/10/2018 04:48 PM, Jianchao Wang wrote:
> If the cmd has not be returned after aborted by qla2x00_eh_abort,
> we have to wait for it. However, the time is 1000ms at least currently.
> If there are a lot cmds need to be ab
Would anyone please review this patch?
Thanks in advance
Jianchao
On 03/07/2018 08:29 PM, Jianchao Wang wrote:
> iscsi tcp will first send out data, then calculate and send data
> digest. If we don't have BDI_CAP_STABLE_WRITES, the page cache will
> be written in spite of the on going w
Hi Keith
Would you please take a look at this patch.
I really need your suggestion on this.
Sincerely
Jianchao
On 03/09/2018 10:01 AM, jianchao.wang wrote:
> Hi Keith
>
> Can I have the honor of getting your comment on this patch?
>
> Thanks in advance
> Jianchao
>
>
Hi Keith
Thanks for your precious time for testing and reviewing.
I will send out V3 next.
Sincerely
Jianchao
On 03/13/2018 02:59 AM, Keith Busch wrote:
> Hi Jianchao,
>
> The patch tests fine on all hardware I had. I'd like to queue this up
> for the next 4.16-rc. Could you send a v3 with the
Hi Keith
Can I have the honor of getting your comment on this patch?
Thanks in advance
Jianchao
On 03/08/2018 02:19 PM, Jianchao Wang wrote:
> nvme_dev_disable will issue command on adminq to clear HMB and
> delete io cq/sqs, maybe more in the future. When adminq no response,
> it has to depends
Hi Sagi
Thanks for your precious time for review and comment.
On 03/09/2018 02:21 AM, Sagi Grimberg wrote:
>> +EXPORT_SYMBOL_GPL(nvme_abort_requests_sync);
>> +
>> +static void nvme_comp_req(struct request *req, void *data, bool reserved)
>
> Not a very good name...
Yes, indeed.
>
>> +{
>> +
Hi Ming
Thanks for your precious time for reviewing and comment.
On 03/08/2018 09:11 PM, Ming Lei wrote:
> On Thu, Mar 8, 2018 at 2:19 PM, Jianchao Wang
> wrote:
>> Currently, we use nvme_cancel_request to complete the request
>> forcedly. This has following defects:
>> - It is not safe to race
Hi Christoph
Thanks for your precious time for reviewing this.
On 03/08/2018 03:57 PM, Christoph Hellwig wrote:
>> -u8 flags;
>> u16 status;
>> +unsigned long flags;
> Please align the field name like the others, though
Yes, I will change thi
Hi Martin
Can you take your precious time to review this ?
Thanks in advance.
Jianchao
On 03/03/2018 09:54 AM, Jianchao Wang wrote:
> In scsi core, __scsi_queue_insert should just put request back on
> the queue and retry using the same command as before. However, for
> blk-mq, scsi_mq_requeue_cm
Hi Bart
Thanks for your kindly response and directive.
On 03/03/2018 12:31 AM, Bart Van Assche wrote:
> On Fri, 2018-03-02 at 11:31 +0800, Jianchao Wang wrote:
>> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
>> index a86df9c..d2f1838 100644
>> --- a/drivers/scsi/scsi_lib.c
>> ++
Hi Christoph
Thanks for your kindly response and directive
On 03/01/2018 12:47 AM, Christoph Hellwig wrote:
> Note that we originally allocates irqs this way, and Keith changed
> it a while ago for good reasons. So I'd really like to see good
> reasons for moving away from this, and some heurist
Hi Keith
Thanks for your kindly directive and precious time for this.
On 03/01/2018 11:15 PM, Keith Busch wrote:
> On Thu, Mar 01, 2018 at 06:05:53PM +0800, jianchao.wang wrote:
>> When the adminq is free, ioq0 irq completion path has to invoke nvme_irq
>> twice, one for its
Hi Andy
Thanks for your precious time for this and kindly reminding.
On 02/28/2018 11:59 PM, Andy Shevchenko wrote:
> On Wed, Feb 28, 2018 at 5:48 PM, Jianchao Wang
> wrote:
>> Currently, adminq and ioq0 share the same irq vector. This is
>> unfair for both adminq and ioq0.
>> - For adminq, its
Hi Martin
On 03/02/2018 09:44 AM, Martin K. Petersen wrote:
>> In scsi core, __scsi_queue_insert should just put request back on the
>> queue and retry using the same command as before. However, for blk-mq,
>> scsi_mq_requeue_cmd is employed here which will unprepare the
>> request. To align with
Hi Martin
Thanks for your kindly response.
On 03/02/2018 09:43 AM, Martin K. Petersen wrote:
>
> Jianchao,
>
>> Yes, the block layer core guarantees that scsi_mq_get_budget() will be
>> called before scsi_queue_rq(). I think the full picture is as follows:
>
>> * Before scsi_queue_rq() calls .
Hi Bart
Thanks for your precious time and detailed summary.
On 03/02/2018 01:43 AM, Bart Van Assche wrote:
> Yes, the block layer core guarantees that scsi_mq_get_budget() will be called
> before scsi_queue_rq(). I think the full picture is as follows:
> * Before scsi_queue_rq() calls .queuecomma
Hi Sagi
Thanks for your kindly response.
On 03/01/2018 05:28 PM, Sagi Grimberg wrote:
>
>> Note that we originally allocates irqs this way, and Keith changed
>> it a while ago for good reasons. So I'd really like to see good
>> reasons for moving away from this, and some heuristics to figure
>>
Hi Bart
Thanks for your precious time to review this and kindly detailed response.
On 03/01/2018 01:52 AM, Bart Van Assche wrote:
> On Wed, 2018-02-28 at 16:55 +0800, Jianchao Wang wrote:
>> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
>> index a86df9c..6fa7b0c 100644
>> --- a/d
On 02/28/2018 11:42 PM, jianchao.wang wrote:
> Hi Keith
>
> Thanks for your kindly response and directive
>
> On 02/28/2018 11:27 PM, Keith Busch wrote:
>> On Wed, Feb 28, 2018 at 10:53:31AM +0800, jianchao.wang wrote:
>>> On 02/27/2018 11:13 PM, Keith Busch wrote
Hi Keith
Thanks for your kindly response and directive
On 02/28/2018 11:27 PM, Keith Busch wrote:
> On Wed, Feb 28, 2018 at 10:53:31AM +0800, jianchao.wang wrote:
>> On 02/27/2018 11:13 PM, Keith Busch wrote:
>>> On Tue, Feb 27, 2018 at 04:46:17PM +0800, Jianchao Wang wro
Hi Keith
Thanks for your precious time to review this.
On 02/27/2018 11:13 PM, Keith Busch wrote:
> On Tue, Feb 27, 2018 at 04:46:17PM +0800, Jianchao Wang wrote:
>> Currently, adminq and ioq0 share the same irq vector. This is
>> unfair for both adminq and ioq0.
>> - For adminq, its completion
Hi Bart
Thanks for your kindly response and precious time to review this.
On 02/28/2018 01:18 AM, Bart Van Assche wrote:
> On Tue, 2018-02-27 at 17:06 +, Bart Van Assche wrote:
>> On Tue, 2018-02-27 at 13:15 +0800, jianchao.wang wrote:
>>> Can you share more details about t
Hi Bart
Thanks for your kindly response.
On 02/27/2018 01:12 PM, Bart Van Assche wrote:
> On Tue, 2018-02-27 at 12:00 +0800, jianchao.wang wrote:
>> On the other hand, this patch is to align the actions between blk-mq and
>> block
>> legacy code in __scsi_queue_insert
Hi Bart
Thanks for your kindly response.
On 02/27/2018 11:41 AM, Bart Van Assche wrote:
> On Tue, 2018-02-27 at 11:28 +0800, jianchao.wang wrote:
>> If that is true, what if aacraid driver uses block legacy instead of blk-mq ?
>> w/ blk-mq disabled, __scsi_queue_insert just requ
Hi Bart
Thanks for your kindly response.
On 02/27/2018 11:08 AM, Bart Van Assche wrote:
> On Mon, 2018-02-26 at 15:58 +0800, Jianchao Wang wrote:
>> In scsi core, __scsi_queue_insert should just put request back on
>> the queue and retry using the same command as before. However, for
>> blk-mq, s
Hi Keith
Could you take a look at this?
Thanks in advance
I'd really appreciate it
Jianchao
On 02/15/2018 07:13 PM, Jianchao Wang wrote:
> nvme cq irq is freed based on queue_count. When the sq/cq creation
> fails, irq will not be setup. free_irq will warn 'Try to free
> already-free irq'.
>
> To fix
Hi Keith
Thanks for your kindly response and directive.
And wishing you prosperity and great fortune!!
On 02/14/2018 05:52 AM, Keith Busch wrote:
> On Mon, Feb 12, 2018 at 09:05:13PM +0800, Jianchao Wang wrote:
>> @@ -1315,9 +1315,6 @@ static int nvme_suspend_queue(struct nvme_queue *nvmeq)
>> nvmeq->cq_vector = -1;
>>
Hi Keith and Sagi
Thanks for your kindly response and comment on this.
On 02/13/2018 03:15 AM, Keith Busch wrote:
> On Mon, Feb 12, 2018 at 08:43:58PM +0200, Sagi Grimberg wrote:
>>
>>> Currently, we will unquiesce the queues after the controller is
>>> shutdown to avoid residual requests to be
Hi Sagi
Thanks for your kindly response.
On 02/13/2018 02:37 AM, Sagi Grimberg wrote:
>
>> nvme cq irq is freed based on queue_count. When the sq/cq creation
>> fails, irq will not be setup. free_irq will warn 'Try to free
>> already-free irq'.
>>
>> To fix it, we only increase online_queues whe
Hi Sagi
Just adding some supplementary information here.
On 02/12/2018 10:16 AM, jianchao.wang wrote:
>> I think this is going in the wrong direction. Every state that is needed
>> to handle serialization should be done in core ctrl state. Moreover,
>> please try to avoid handling this locally
Hi Sagi
Thanks for your kindly response.
And sorry for my bad description.
On 02/11/2018 07:17 PM, Sagi Grimberg wrote:
>> namespaces_mutext is used to synchronize the operations on ctrl
>> namespaces list. Most of the time, it is a read operation. It is
>> better to change it from mutex to rwse
Hi Sagi
Thanks for your kindly reminding.
On 02/11/2018 07:19 PM, Sagi Grimberg wrote:
>
>> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
>> index 6fe7af0..00cffed 100644
>> --- a/drivers/nvme/host/pci.c
>> +++ b/drivers/nvme/host/pci.c
>> @@ -2186,7 +2186,10 @@ static void nvme
Hi Sagi
Thanks for your kindly reminder and directive.
That's really appreciated.
On 02/11/2018 07:36 PM, Sagi Grimberg wrote:
> Jianchao,
>
>> Currently, the complicated relationship between nvme_dev_disable
>> and nvme_timeout has become a devil that will introduce many
>> circular pattern which
Hi Sagi
Thanks for your kindly response and directive.
That's really appreciated.
On 02/11/2018 07:16 PM, Sagi Grimberg wrote:
>> mutex_lock(&ctrl->namespaces_mutex);
>> list_for_each_entry(ns, &ctrl->namespaces, list) {
>> - if (ns->disk && nvme_revalidate_disk(ns->disk))
>> -
On 02/10/2018 10:32 AM, jianchao.wang wrote:
> Hi Keith
>
> Thanks for your kindly response here.
> That's really appreciated.
>
> On 02/10/2018 01:12 AM, Keith Busch wrote:
>> On Fri, Feb 09, 2018 at 09:50:58AM +0800, jianchao.wang wrote:
>>>
>&
Hi Keith
On 02/10/2018 10:32 AM, jianchao.wang wrote:
> Hi Keith
>
> Thanks for your kindly response here.
> That's really appreciated.
>
> On 02/10/2018 01:12 AM, Keith Busch wrote:
>> On Fri, Feb 09, 2018 at 09:50:58AM +0800, jianchao.wang wrote:
>>>
Hi Keith
Thanks for your kindly response here.
That's really appreciated.
On 02/10/2018 01:12 AM, Keith Busch wrote:
> On Fri, Feb 09, 2018 at 09:50:58AM +0800, jianchao.wang wrote:
>>
>> if we set NVME_REQ_CANCELLED and return BLK_EH_HANDLED as the RESETTING case,
>>
Hi Keith and Sagi
Many thanks for your kindly response.
That's really appreciated.
On 02/09/2018 01:56 AM, Keith Busch wrote:
> On Thu, Feb 08, 2018 at 05:56:49PM +0200, Sagi Grimberg wrote:
>> Given the discussion on this set, you plan to respin again
>> for 4.16?
>
> With the exception of mayb
Hi Keith
Thanks for your precious time and kindly response.
On 02/08/2018 11:15 PM, Keith Busch wrote:
> On Thu, Feb 08, 2018 at 10:17:00PM +0800, jianchao.wang wrote:
>> There is a dangerous scenario which caused by nvme_wait_freeze in
>> nvme_reset_work.
>&g
uests from being
submitted.
Looking forward to your precious advice.
Sincerely
Jianchao
On 02/08/2018 09:40 AM, jianchao.wang wrote:
> Hi Keith
>
>
> Really thanks for your your precious time and kindly directive.
> That's really appreciated. :)
>
> On 02/08/2018 12:13 AM,
Hi Keith
Many thanks for your precious time and kindly directive.
That's really appreciated. :)
On 02/08/2018 12:13 AM, Keith Busch wrote:
> On Wed, Feb 07, 2018 at 10:13:51AM +0800, jianchao.wang wrote:
>> What's the difference ? Can you please point out.
Hi Keith
Sorry for bothering you again.
On 02/07/2018 10:03 AM, jianchao.wang wrote:
> Hi Keith
>
> Thanks for your time and kindly response on this.
>
> On 02/06/2018 11:13 PM, Keith Busch wrote:
>> On Tue, Feb 06, 2018 at 09:46:36AM +0800, jianchao.wang wrote:
>>&g
Hi Keith
Thanks for your time and kindly response on this.
On 02/06/2018 11:13 PM, Keith Busch wrote:
> On Tue, Feb 06, 2018 at 09:46:36AM +0800, jianchao.wang wrote:
>> Hi Keith
>>
>> Thanks for your kindly response.
>>
>> On 02/05/2018 11:13 PM, Keith Busch
Hi Keith
Thanks for your kindly response.
On 02/05/2018 11:13 PM, Keith Busch wrote:
> but how many requests are you letting enter to their demise by
> freezing on the wrong side of the reset?
There are only two differences between this patch and the original one.
1. Don't freeze the queue for the
Hi Keith
Thanks for your kindly response and directive.
On 02/03/2018 02:46 AM, Keith Busch wrote:
> This one makes sense, though I would alter the change log to something
> like:
>
> This patch quiecses new IO prior to disabling device HMB access.
> A controller using HMB may be relying on
Hi Keith
Thanks for your kindly response.
On 02/03/2018 02:24 AM, Keith Busch wrote:
> On Fri, Feb 02, 2018 at 03:00:45PM +0800, Jianchao Wang wrote:
>> Currently, request queue will be frozen and quiesced for both reset
>> and shutdown case. This will trigger ioq requests in RECONNECTING
>> stat
Hi Keith
Thanks for your kindly response and comment.
That's really appreciated.
On 02/03/2018 02:31 AM, Keith Busch wrote:
> On Fri, Feb 02, 2018 at 03:00:47PM +0800, Jianchao Wang wrote:
>> Currently, the complicated relationship between nvme_dev_disable
>> and nvme_timeout has become a devil th
Hi Keith and Sagi
Thanks for your kindly response. :)
On 01/30/2018 04:17 AM, Keith Busch wrote:
> On Mon, Jan 29, 2018 at 09:55:41PM +0200, Sagi Grimberg wrote:
>>> Thanks for the fix. It looks like we still have a problem, though.
>>> Commands submitted with the "shutdown_lock" held need to be
On 01/29/2018 11:07 AM, Jianchao Wang wrote:
> nvme_set_host_mem will invoke nvme_alloc_request without NOWAIT
> flag, it is unsafe for nvme_dev_disable. The adminq driver tags
> may have been used up when the previous outstanding adminq requests
> cannot be completed due to some hardware error.
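The deadlock reasoning above can be modelled in user space with a counting semaphore standing in for the admin tag set (an analogy under that assumption, not the nvme code): if all tags are held by requests that will never complete, a blocking allocation hangs forever, while a NOWAIT-style try-allocation lets the disable path give up gracefully.

#include <semaphore.h>
#include <stdio.h>

static sem_t admin_tags;	/* stands in for the adminq driver tags */

/* NOWAIT-style allocation: fail immediately instead of sleeping for a tag */
static int alloc_tag_nowait(void)
{
	return sem_trywait(&admin_tags) == 0 ? 0 : -1;
}

int main(void)
{
	sem_init(&admin_tags, 0, 2);

	/* two outstanding admin requests hold all tags and will never finish */
	sem_wait(&admin_tags);
	sem_wait(&admin_tags);

	if (alloc_tag_nowait() < 0)
		printf("no admin tag available: skip the command instead of hanging\n");

	return 0;
}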
Hi Tariq
Thanks for your kindly response.
That's really appreciated.
On 01/25/2018 05:54 PM, Tariq Toukan wrote:
>
>
> On 25/01/2018 8:25 AM, jianchao.wang wrote:
>> Hi Eric
>>
>> Thanks for you kindly response and suggestion.
>> That's really appreci
Hi Eric
Thanks for your kindly response and suggestion.
That's really appreciated.
Jianchao
On 01/25/2018 11:55 AM, Eric Dumazet wrote:
> On Thu, 2018-01-25 at 11:27 +0800, jianchao.wang wrote:
>> Hi Tariq
>>
>> On 01/22/2018 10:12 AM, jianchao.wang wrote:
>>
Hi Tariq
On 01/22/2018 10:12 AM, jianchao.wang wrote:
>>> On 19/01/2018 5:49 PM, Eric Dumazet wrote:
>>>> On Fri, 2018-01-19 at 23:16 +0800, jianchao.wang wrote:
>>>>> Hi Tariq
>>>>>
>>>>> Very sad that the crash was reproduced ag
Hi Keith
If you have time, can you have a look at this?
That's really appreciated and thanks in advance.
:)
Jianchao
On 01/22/2018 10:03 PM, Jianchao Wang wrote:
> After Sagi's commit (nvme-rdma: fix concurrent reset and reconnect),
> both nvme-fc/rdma have following pattern:
> RESETTING- quiesc
Hi Jason
Thanks for your kindly response.
On 01/22/2018 11:47 PM, Jason Gunthorpe wrote:
>>> Yeah, mlx4 NICs in Google fleet receive trillions of packets per
>>> second, and we never noticed an issue.
>>>
>>> Although we are using a slightly different driver, using order-0 pages
>>> and fast page
Hi Christoph and Keith
Really sorry for this.
On 01/23/2018 05:54 AM, Keith Busch wrote:
> On Mon, Jan 22, 2018 at 09:14:23PM +0100, Christoph Hellwig wrote:
>>> Link:
>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__lkml.org_lkml_2018_1_19_68&d=DwICAg&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB
Hi Eric
On 01/22/2018 12:43 AM, Eric Dumazet wrote:
> On Sun, 2018-01-21 at 18:24 +0200, Tariq Toukan wrote:
>>
>> On 21/01/2018 11:31 AM, Tariq Toukan wrote:
>>>
>>>
>>> On 19/01/2018 5:49 PM, Eric Dumazet wrote:
>>>> On Fri, 2018-01-
Hi Tariq and all
Many thanks for your kindly and detailed response and comment.
On 01/22/2018 12:24 AM, Tariq Toukan wrote:
>
>
> On 21/01/2018 11:31 AM, Tariq Toukan wrote:
>>
>>
>> On 19/01/2018 5:49 PM, Eric Dumazet wrote:
>>> On Fri, 2018-01-19 at 23:16
On 01/20/2018 10:07 PM, jianchao.wang wrote:
> Hi Keith
>
> Thanks for you kindly response.
>
> On 01/20/2018 10:11 AM, Keith Busch wrote:
>> On Fri, Jan 19, 2018 at 09:56:48PM +0800, jianchao.wang wrote:
>>> In nvme_dev_disable, the outstanding requests wil
Hi Keith
Thanks for your kindly response.
On 01/20/2018 10:11 AM, Keith Busch wrote:
> On Fri, Jan 19, 2018 at 09:56:48PM +0800, jianchao.wang wrote:
>> In nvme_dev_disable, the outstanding requests will be requeued finally.
>> I'm afraid the requests requeued on the q-&
mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
{
+ dma_wmb();
*ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
}
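For what it's worth, the ordering requirement behind that dma_wmb() can be sketched with a user-space analogue (an assumed model, not the mlx4 driver): the producer must make its descriptor store visible before it publishes the new producer index, otherwise a consumer can see the updated index and still read a stale descriptor. A release store on the index plays the role of the barrier here.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 16

struct ring {
	uint32_t desc[RING_SIZE];	/* descriptors filled by the producer */
	_Atomic uint32_t prod;		/* producer index: the "doorbell" */
};

static void post_descriptor(struct ring *r, uint32_t val)
{
	uint32_t i = atomic_load_explicit(&r->prod, memory_order_relaxed);

	r->desc[i % RING_SIZE] = val;			/* 1. fill the descriptor    */
	atomic_store_explicit(&r->prod, i + 1,
			      memory_order_release);	/* 2. then publish the index */
}

int main(void)
{
	struct ring r = { .prod = 0 };

	post_descriptor(&r, 0xabcd);
	printf("prod=%u desc[0]=0x%x\n",
	       (unsigned)atomic_load(&r.prod), (unsigned)r.desc[0]);
	return 0;
}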
I analyzed the kdump; it appears to be memory corruption.
Thanks
Jianchao
On 01/15/2018 01:50 PM, jianchao.wang wrote:
> Hi Tariq
>
> Tha