Re: [PATCH v5 20/23] sg: introduce request state machine

2019-10-18 Thread Hannes Reinecke
On 10/8/19 9:50 AM, Douglas Gilbert wrote: > The introduced request state machine is not wired in so that > the size of one of the following patches is reduced. Bit > operation defines for the request and file descriptor level > are also introduced. Minor rework of sg_rd_appen

Re: [PATCH 6/7] scsi: aacraid: send AIF request post IOP RESET

2019-10-15 Thread kbuild test robot
Hi, Thank you for the patch! Perhaps something to improve: [auto build test WARNING on scsi/for-next] [cannot apply to v5.4-rc3 next-20191014] [if your patch is applied to the wrong git tree, please drop us a note to help improve the system. BTW, we also suggest to use '--base' option to specify

[PATCH 6/7] scsi: aacraid: send AIF request post IOP RESET

2019-10-14 Thread balsundar.p
From: Balsundar P After IOP reset completion, the AIF request command is not issued to the controller. The driver schedules a worker thread to issue an AIF request command after IOP reset completion. Signed-off-by: Balsundar P --- drivers/scsi/aacraid/aacraid.h | 18 +- drivers/scsi

[PATCH v5 20/23] sg: introduce request state machine

2019-10-08 Thread Douglas Gilbert
The introduced request state machine is not wired in so that the size of one of the following patches is reduced. Bit operation defines for the request and file descriptor level are also introduced. Minor rework of sg_rd_append() function. Signed-off-by: Douglas Gilbert --- drivers/scsi/sg.c

[PATCH 13/14] qedf: Fix race between fipvlan request and response path.

2019-08-23 Thread Saurav Kashyap
There is a race between the fipvlan request and response paths. = qedf_fcoe_process_vlan_resp:113]:2: VLAN response, vid=0xffd. qedf_initiate_fipvlan_req:165]:2: vlan = 0x6ffd already set. qedf_set_vlan_id:139]:2: Setting vlan_id=0ffd prio=3. == The request thread sees that the vlan is already set and

[PATCH 02/14] qedf: Stop sending fipvlan request on unload.

2019-08-23 Thread Saurav Kashyap
- On some setups fipvlan can be retried for a long duration, and since the connection to the switch was not there, it was not getting any reply. - During unload this thread was hanging. Problem Resolution: Check if unload is in progress, then quit from the fipvlan thread. Signed-off-by: Saurav Kashyap --- driv

Re: [PATCH 2/2] scsi: core: fix dh and multipathing for SCSI hosts without request batching

2019-08-09 Thread kbuild test robot
/Steffen-Maier/scsi-core-fix-missing-cleanup_rq-for-SCSI-hosts-without-request-batching/20190808-052017 config: i386-randconfig-d003-201931 (attached as .config) compiler: gcc-7 (Debian 7.4.0-10) 7.4.0 reproduce: # save the attached .config to linux build tree make ARCH=i386 If you

Re: [PATCH 1/2] scsi: core: fix missing .cleanup_rq for SCSI hosts without request batching

2019-08-08 Thread kbuild test robot
-Maier/scsi-core-fix-missing-cleanup_rq-for-SCSI-hosts-without-request-batching/20190808-052017 config: s390-allmodconfig (attached as .config) compiler: s390-linux-gcc (GCC) 7.4.0 reproduce: wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross

Re: [PATCH 2/2] scsi: core: fix dh and multipathing for SCSI hosts without request batching

2019-08-07 Thread kbuild test robot
-Maier/scsi-core-fix-missing-cleanup_rq-for-SCSI-hosts-without-request-batching/20190808-052017 config: riscv-defconfig (attached as .config) compiler: riscv64-linux-gcc (GCC) 7.4.0 reproduce: wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross

Re: [PATCH 1/2] scsi: core: fix missing .cleanup_rq for SCSI hosts without request batching

2019-08-07 Thread kbuild test robot
-Maier/scsi-core-fix-missing-cleanup_rq-for-SCSI-hosts-without-request-batching/20190808-052017 config: riscv-defconfig (attached as .config) compiler: riscv64-linux-gcc (GCC) 7.4.0 reproduce: wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross

Re: [PATCH 1/2] scsi: core: fix missing .cleanup_rq for SCSI hosts without request batching

2019-08-07 Thread Martin K. Petersen
Ming, >> + .cleanup_rq = scsi_cleanup_rq, >> .busy = scsi_mq_lld_busy, >> .map_queues = scsi_map_queues, >> }; > > This one is a cross-tree thing, either scsi/5.4/scsi-queue needs to > pull for-5.4/block, or do it after both land linus tree. I'll set up

Re: [PATCH 1/2] scsi: core: fix missing .cleanup_rq for SCSI hosts without request batching

2019-08-07 Thread Ming Lei
On Wed, Aug 7, 2019 at 10:55 PM Steffen Maier wrote: > > This was missing from scsi_mq_ops_no_commit of linux-next commit > 8930a6c20791 ("scsi: core: add support for request batching") > from Martin's scsi/5.4/scsi-queue or James' scsi/misc. > > See also

Re: [PATCH 0/2] scsi: core: regression fixes for request batching

2019-08-07 Thread Bart Van Assche
On 8/7/19 7:49 AM, Steffen Maier wrote: Hi James, Martin, Paolo, Ming, multipathing with linux-next is broken since 20190723 in our CI. The patches fix a memleak and a severe dh/multipath functional regression. It would be nice if we could get them to 5.4/scsi-queue and also next. > I would ha

Re: [PATCH 0/2] scsi: core: regression fixes for request batching

2019-08-07 Thread Paolo Bonzini
feature. > > Steffen Maier (2): > scsi: core: fix missing .cleanup_rq for SCSI hosts without request > batching > scsi: core: fix dh and multipathing for SCSI hosts without request > batching > > drivers/scsi/scsi_lib.c | 4 +++- > 1 file changed, 3 insertions(+), 1 deletion(-) > Reviewed-by: Paolo Bonzini

[PATCH 2/2] scsi: core: fix dh and multipathing for SCSI hosts without request batching

2019-08-07 Thread Steffen Maier
This was missing from scsi_device_from_queue() due to the introduction of another new scsi_mq_ops_no_commit of linux-next commit 8930a6c20791 ("scsi: core: add support for request batching") from Martin's scsi/5.4/scsi-queue or James' scsi/misc. Only devicehandle

[PATCH 0/2] scsi: core: regression fixes for request batching

2019-08-07 Thread Steffen Maier
request batching scsi: core: fix dh and multipathing for SCSI hosts without request batching drivers/scsi/scsi_lib.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) -- 2.17.1

[PATCH 1/2] scsi: core: fix missing .cleanup_rq for SCSI hosts without request batching

2019-08-07 Thread Steffen Maier
This was missing from scsi_mq_ops_no_commit of linux-next commit 8930a6c20791 ("scsi: core: add support for request batching") from Martin's scsi/5.4/scsi-queue or James' scsi/misc. See also linux-next commit b7e9e1fb7a92 ("scsi: implement .cleanup_rq callback") fr

[PATCH 02/12] mpt3sas: memset request frame before reusing

2019-08-03 Thread Suganath Prabu
The driver gets a request frame from the free pool of DMA-able request frames, fills in the required information and passes the address of the frame to the IOC/FW to pull the complete request frame. In certain places the driver used the request frame allocated from the free pool without completely

Re: [PATCH V2 0/2] block/scsi/dm-rq: fix leak of request private data in dm-mpath

2019-07-26 Thread Ming Lei
On Fri, Jul 26, 2019 at 06:20:46PM +0200, Benjamin Block wrote: > Hey Ming Lei, > > On Sat, Jul 20, 2019 at 11:06:35AM +0800, Ming Lei wrote: > > Hi, > > > > When one request is dispatched to LLD via dm-rq, if the result is > > BLK_STS_*RESOURCE, dm-rq will f

Re: [PATCH V2 0/2] block/scsi/dm-rq: fix leak of request private data in dm-mpath

2019-07-26 Thread Benjamin Block
Hey Ming Lei, On Sat, Jul 20, 2019 at 11:06:35AM +0800, Ming Lei wrote: > Hi, > > When one request is dispatched to LLD via dm-rq, if the result is > BLK_STS_*RESOURCE, dm-rq will free the request. However, LLD may allocate > private data for this request, so this way will cause

[PATCH V2 0/2] block/scsi/dm-rq: fix leak of request private data in dm-mpath

2019-07-19 Thread Ming Lei
Hi, When one request is dispatched to the LLD via dm-rq, if the result is BLK_STS_*RESOURCE, dm-rq will free the request. However, the LLD may allocate private data for this request, so this causes a memory leak. Add a .cleanup_rq() callback and implement it in SCSI to fix the issue, since SCSI

[PATCH 0/2] block/scsi/dm-rq: fix leak of request private data in dm-mpath

2019-07-17 Thread Ming Lei
Hi, When one request is dispatched to the LLD via dm-rq, if the result is BLK_STS_*RESOURCE, dm-rq will free the request. However, the LLD may allocate private data for this request, so this causes a memory leak. Add a .cleanup_rq() callback and implement it in SCSI to fix the issue. And SCSI

[PATCH 2/3] zfcp: fix request object use-after-free in send path causing wrong traces

2019-07-02 Thread Benjamin Block
When tracing instances where we open and close WKA ports, we also pass the request-ID of the respective FSF command. But after successfully sending the FSF command we must not use the request-object anymore, as this might result in a use-after-free (see "zfcp: fix request object use-after

[PATCH 1/3] zfcp: fix request object use-after-free in send path causing seqno errors

2019-07-02 Thread Benjamin Block
With a recent change to our send path for FSF commands we introduced a possible use-after-free of request-objects, that might further lead to zfcp crafting bad requests, which the FCP channel correctly complains about with an error (FSF_PROT_SEQ_NUMB_ERROR). This error is then handled by an

[PATCH 4/6] bnx2fc: Do not allow both a cleanup completion and abort completion for the same request.

2019-06-24 Thread Saurav Kashyap
ivers/scsi/bnx2fc/bnx2fc_io.c +++ b/drivers/scsi/bnx2fc/bnx2fc_io.c @@ -1048,6 +1048,9 @@ int bnx2fc_initiate_cleanup(struct bnx2fc_cmd *io_req) /* Obtain free SQ entry */ bnx2fc_add_2_sq(tgt, xid); + /* Set flag that cleanup request is pending with the firmware */ + se

[V3 01/10] mpt3sas: function pointers of request descriptor

2019-05-31 Thread Suganath Prabu S
This code refactoring introduces function pointers. The host uses Request Descriptors of different types for posting an entry onto a request queue. Based on controller type and capabilities, the host can also use atomic descriptors other than normal descriptors. Using function pointers will avoid if-else

[PATCH 17/19] sg: add multiple request support

2019-05-24 Thread Douglas Gilbert
with. Multiple requests can be combined with shared requests. As a further optimisation, an array of SCSI commands can be passed from the user space via the controlling object's request "pointer". Without that, the multiple request logic would need to visit the user space once per com

[PATCH v2 01/10] mpt3sas: function pointers of request descriptor

2019-05-20 Thread Suganath Prabu S
From: Suganath Prabu This code refactoring introduces function pointers. Host uses Request Descriptors of different types for posting an entry onto a request queue. Based on controller type and capabilities, host can also use atomic descriptors other than normal descriptors. Using function

Re: [PATCH 01/10] mpt3sas: function pointers of request descriptor

2019-05-18 Thread kbuild test robot
Hi Suganath, I love your patch! Perhaps something to improve: [auto build test WARNING on scsi/for-next] [also build test WARNING on v5.1 next-20190517] [if your patch is applied to the wrong git tree, please drop us a note to help improve the system] url: https://github.com/0day-ci/linux/c

[PATCH 02/10] mpt3sas: Add Atomic Request Descriptor support on Aero

2019-05-17 Thread Suganath Prabu S
If the Aero HBA supports Atomic Request Descriptors, it sets the Atomic Request Descriptor Capable bit in the IOCCapabilities field of the IOCFacts Reply message. Driver uses an Atomic Request Descriptor as an alternative method for posting an entry onto a request queue. The posting of an Atomic

[PATCH 01/10] mpt3sas: function pointers of request descriptor

2019-05-17 Thread Suganath Prabu S
This code refactoring introduces function pointers. Host uses Request Descriptors of different types for posting an entry onto a request queue. Based on controller type and capabilities, host can also use atomic descriptors other than normal descriptors. Using function pointer will avoid if-else

Re: [PATCH V9 1/7] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-29 Thread Bart Van Assche
On 4/29/19 6:52 PM, Ming Lei wrote: > Just like aio/io_uring, we need to grab 2 refcount for queuing one > request, one is for submission, another is for completion. > > If the request isn't queued from plug code path, the refcount grabbed > in generic_make_request() serve

[PATCH V9 1/7] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-29 Thread Ming Lei
Just like aio/io_uring, we need to grab 2 refcounts for queuing one request, one is for submission, another is for completion. If the request isn't queued from plug code path, the refcount grabbed in generic_make_request() serves for submission. In theory, this refcount should have been rel

[PATCH V9 5/7] blk-mq: always free hctx after request queue is freed

2019-04-29 Thread Ming Lei
In the normal queue cleanup path, hctx is released after the request queue is freed, see blk_mq_release(). However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because of hw queues shrinking. This easily causes a use-after-free, because one implicit rule is that it is safe to call almost

Re: [PATCH V8 1/7] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-29 Thread Ming Lei
On Mon, Apr 29, 2019 at 11:09:39AM -0700, Bart Van Assche wrote: > On Sun, 2019-04-28 at 16:14 +0800, Ming Lei wrote: > > Just like aio/io_uring, we need to grab 2 refcount for queuing one > > request, one is for submission, another is for completion. > > > > If the req

Re: [PATCH V8 1/7] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-29 Thread Bart Van Assche
On Sun, 2019-04-28 at 16:14 +0800, Ming Lei wrote: > Just like aio/io_uring, we need to grab 2 refcount for queuing one > request, one is for submission, another is for completion. > > If the request isn't queued from plug code path, the refcount grabbed > in generic_make_

Re: [PATCH V8 5/7] blk-mq: always free hctx after request queue is freed

2019-04-28 Thread Ming Lei
On Sun, Apr 28, 2019 at 02:14:26PM +0200, Christoph Hellwig wrote: > On Sun, Apr 28, 2019 at 04:14:06PM +0800, Ming Lei wrote: > > In normal queue cleanup path, hctx is released after request queue > > is freed, see blk_mq_release(). > > > > However, in __blk_mq_update_

Re: [PATCH V8 5/7] blk-mq: always free hctx after request queue is freed

2019-04-28 Thread Christoph Hellwig
On Sun, Apr 28, 2019 at 04:14:06PM +0800, Ming Lei wrote: > In normal queue cleanup path, hctx is released after request queue > is freed, see blk_mq_release(). > > However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because > of hw queues shrinking. This way is easy to

Re: [PATCH V8 1/7] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-28 Thread Christoph Hellwig
Looks good, Reviewed-by: Christoph Hellwig

[PATCH V8 5/7] blk-mq: always free hctx after request queue is freed

2019-04-28 Thread Ming Lei
In the normal queue cleanup path, hctx is released after the request queue is freed, see blk_mq_release(). However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because of hw queues shrinking. This easily causes a use-after-free, because one implicit rule is that it is safe to call almost

[PATCH V8 1/7] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-28 Thread Ming Lei
Just like aio/io_uring, we need to grab 2 refcounts for queuing one request, one is for submission, another is for completion. If the request isn't queued from plug code path, the refcount grabbed in generic_make_request() serves for submission. In theory, this refcount should have been rel

Re: [PATCH V7 9/9] nvme: hold request queue's refcount in ns's whole lifetime

2019-04-26 Thread Christoph Hellwig
On Sat, Apr 27, 2019 at 06:45:43AM +0800, Ming Lei wrote: > Still no difference between your suggestion and the way in this patch, given > driver specific change is a must. Even it is more clean to hold the > queue refcount by drivers explicitly because we usually do the get/put pair > in one place

Re: [PATCH V7 9/9] nvme: hold request queue's refcount in ns's whole lifetime

2019-04-26 Thread Ming Lei
On Fri, Apr 26, 2019 at 10:04:23AM -0700, Bart Van Assche wrote: > On Fri, 2019-04-26 at 17:11 +0200, Christoph Hellwig wrote: > > On Thu, Apr 25, 2019 at 09:00:31AM +0800, Ming Lei wrote: > > > The issue is driver(NVMe) specific, the race window is just between > > > blk_cleanup_queue() an

Re: [PATCH V7 9/9] nvme: hold request queue's refcount in ns's whole lifetime

2019-04-26 Thread Ming Lei
On Fri, Apr 26, 2019 at 05:11:14PM +0200, Christoph Hellwig wrote: > On Thu, Apr 25, 2019 at 09:00:31AM +0800, Ming Lei wrote: > > The issue is driver(NVMe) specific, the race window is just between > > blk_cleanup_queue() and removing the ns from the controller namespace > > list in nvme_ns

Re: [PATCH V7 9/9] nvme: hold request queue's refcount in ns's whole lifetime

2019-04-26 Thread Bart Van Assche
On Fri, 2019-04-26 at 17:11 +0200, Christoph Hellwig wrote: > On Thu, Apr 25, 2019 at 09:00:31AM +0800, Ming Lei wrote: > > The issue is driver(NVMe) specific, the race window is just between > > blk_cleanup_queue() and removing the ns from the controller namespace > > list in nvme_ns_remove

Re: [PATCH V7 9/9] nvme: hold request queue's refcount in ns's whole lifetime

2019-04-26 Thread Christoph Hellwig
On Thu, Apr 25, 2019 at 09:00:31AM +0800, Ming Lei wrote: > The issue is driver(NVMe) specific, the race window is just between > blk_cleanup_queue() and removing the ns from the controller namespace > list in nvme_ns_remove() And I wouldn't be surprised if others have the same issue. > >

Re: [PATCH V7 1/9] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-25 Thread Ming Lei
> > For other callers of blk_mq_sched_insert_requests(), it is guaranteed > > that request queue's ref is held. > > In both Linus' tree and Jens' for-5.2 tree I only see these two > callers of blk_mq_sched_insert_requests. What am I missing? OK, what I meant is that

Re: [PATCH V7 1/9] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-24 Thread Christoph Hellwig
On Thu, Apr 25, 2019 at 08:53:34AM +0800, Ming Lei wrote: > It isn't in other callers of blk_mq_sched_insert_requests(), it is just > needed in some corner case like flush plug context. > > For other callers of blk_mq_sched_insert_requests(), it is guaranteed > that request

Re: [PATCH V7 9/9] nvme: hold request queue's refcount in ns's whole lifetime

2019-04-24 Thread Ming Lei
On Wed, Apr 24, 2019 at 06:27:46PM +0200, Christoph Hellwig wrote: > On Wed, Apr 24, 2019 at 07:02:21PM +0800, Ming Lei wrote: > > Hennes reported the following kernel oops: > > Hannes? > > > + if (!blk_get_queue(ns->queue)) { > > + ret = -ENXIO; > > + goto out_free_queue; >

Re: [PATCH V7 1/9] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-24 Thread Ming Lei
don't we push this into blk_mq_sched_insert_requests? Yes, it > would need a request_queue argument, but that still seems saner > than duplicating it in both callers. It isn't in other callers of blk_mq_sched_insert_requests(), it is just needed in some corner case like flush plug context. For other callers of blk_mq_sched_insert_requests(), it is guaranteed that request queue's ref is held. Thanks, Ming

Re: [PATCH V7 9/9] nvme: hold request queue's refcount in ns's whole lifetime

2019-04-24 Thread Christoph Hellwig
On Wed, Apr 24, 2019 at 07:02:21PM +0800, Ming Lei wrote: > Hennes reported the following kernel oops: Hannes? > + if (!blk_get_queue(ns->queue)) { > + ret = -ENXIO; > + goto out_free_queue; > + } If we always need to hold a reference, shouldn't blk_mq_init_queue

Re: [PATCH V7 1/9] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-24 Thread Christoph Hellwig
On Wed, Apr 24, 2019 at 07:02:13PM +0800, Ming Lei wrote: > if (rq->mq_hctx != this_hctx || rq->mq_ctx != this_ctx) { > if (this_hctx) { > trace_block_unplug(this_q, depth, > !from_schedule); > + > + perc

Re: [PATCH V7 6/9] blk-mq: always free hctx after request queue is freed

2019-04-24 Thread Hannes Reinecke
On 4/24/19 1:02 PM, Ming Lei wrote: In normal queue cleanup path, hctx is released after request queue is freed, see blk_mq_release(). However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because of hw queues shrinking. This way is easy to cause use-after-free, because: one implicit

[PATCH V7 9/9] nvme: hold request queue's refcount in ns's whole lifetime

2019-04-24 Thread Ming Lei
" should make this issue quite difficult to trigger. However it can't kill the issue completely because the pre-condition of that patch is to hold the request queue's refcount before calling block layer API, and there is still a small window between blk_cleanup_queue() and removing the ns f

[PATCH V7 6/9] blk-mq: always free hctx after request queue is freed

2019-04-24 Thread Ming Lei
In the normal queue cleanup path, hctx is released after the request queue is freed, see blk_mq_release(). However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because of hw queues shrinking. This easily causes a use-after-free, because one implicit rule is that it is safe to call almost

[PATCH V7 1/9] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-24 Thread Ming Lei
Just like aio/io_uring, we need to grab 2 refcounts for queuing one request, one is for submission, another is for completion. If the request isn't queued from plug code path, the refcount grabbed in generic_make_request() serves for submission. In theory, this refcount should have been rel

Re: [PATCH V6 6/9] blk-mq: always free hctx after request queue is freed

2019-04-23 Thread Hannes Reinecke
tx_list etc). OK, looks the name of 'unused' is better. And I would deallocate q->queue_hw_ctx in blk_mq_exit_queue() to make things more consistent. No, that is wrong. The request queue's refcount is often held when blk_cleanup_queue() is running, and blk_mq_exit_queu

Re: [PATCH V6 6/9] blk-mq: always free hctx after request queue is freed

2019-04-23 Thread Ming Lei
> > > > blk_mq_realloc_hw_ctxs(). > > > > > > > > However, I would rename the 'dead' elements to 'unused' (ie > > > > unused_hctx_list > > > > etc). > > > > > > OK, looks the name of 'unused' is better. >

Re: [PATCH V6 6/9] blk-mq: always free hctx after request queue is freed

2019-04-23 Thread Ming Lei
ware context elements, given that they might be reassigned during > > > blk_mq_realloc_hw_ctxs(). > > > > > > However, I would rename the 'dead' elements to 'unused' (ie > > > unused_hctx_list > > > etc). > > > > OK,

Re: [PATCH V6 6/9] blk-mq: always free hctx after request queue is freed

2019-04-23 Thread Hannes Reinecke
ht be reassigned during blk_mq_realloc_hw_ctxs(). However, I would rename the 'dead' elements to 'unused' (ie unused_hctx_list etc). OK, looks the name of 'unused' is better. And I would deallocate q->queue_hw_ctx in blk_mq_exit_queue() to make things more consist

Re: [PATCH V6 6/9] blk-mq: always free hctx after request queue is freed

2019-04-23 Thread Ming Lei
> > > On 4/17/19 5:44 AM, Ming Lei wrote: > > > > > In normal queue cleanup path, hctx is released after request queue > > > > > is freed, see blk_mq_release(). > > > > > > > > > > However, in __blk_mq_update_nr_hw_queues(), hctx

Re: [PATCH V6 6/9] blk-mq: always free hctx after request queue is freed

2019-04-23 Thread Hannes Reinecke
On 4/22/19 5:30 AM, Ming Lei wrote: On Wed, Apr 17, 2019 at 08:59:43PM +0800, Ming Lei wrote: On Wed, Apr 17, 2019 at 02:08:59PM +0200, Hannes Reinecke wrote: On 4/17/19 5:44 AM, Ming Lei wrote: In normal queue cleanup path, hctx is released after request queue is freed, see blk_mq_release

Re: [PATCH V6 6/9] blk-mq: always free hctx after request queue is freed

2019-04-21 Thread Ming Lei
On Wed, Apr 17, 2019 at 08:59:43PM +0800, Ming Lei wrote: > On Wed, Apr 17, 2019 at 02:08:59PM +0200, Hannes Reinecke wrote: > > On 4/17/19 5:44 AM, Ming Lei wrote: > > > In normal queue cleanup path, hctx is released after request queue > > > is freed, see blk_mq_rele

Re: [PATCH V6 9/9] nvme: hold request queue's refcount in ns's whole lifetime

2019-04-17 Thread Keith Busch
ialized namespaces. > > Patch "blk-mq: free hw queue's resource in hctx's release handler" > should make this issue quite difficult to trigger. However it can't > kill the issue completely because the pre-condition of that patch is to > hold request queue

Re: [PATCH V6 6/9] blk-mq: always free hctx after request queue is freed

2019-04-17 Thread Ming Lei
On Wed, Apr 17, 2019 at 02:08:59PM +0200, Hannes Reinecke wrote: > On 4/17/19 5:44 AM, Ming Lei wrote: > > In normal queue cleanup path, hctx is released after request queue > > is freed, see blk_mq_release(). > > > > However, in __blk_mq_update_nr_hw_queues(), hctx may

Re: [PATCH V6 9/9] nvme: hold request queue's refcount in ns's whole lifetime

2019-04-17 Thread Hannes Reinecke
's resource in hctx's release handler" should make this issue quite difficult to trigger. However it can't kill the issue completely because the pre-condition of that patch is to hold the request queue's refcount before calling block layer API, and there is still a small window between blk_

Re: [PATCH V6 6/9] blk-mq: always free hctx after request queue is freed

2019-04-17 Thread Hannes Reinecke
On 4/17/19 5:44 AM, Ming Lei wrote: In normal queue cleanup path, hctx is released after request queue is freed, see blk_mq_release(). However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because of hw queues shrinking. This way is easy to cause use-after-free, because: one implicit

[PATCH V6 9/9] nvme: hold request queue's refcount in ns's whole lifetime

2019-04-16 Thread Ming Lei
" should make this issue quite difficult to trigger. However it can't kill the issue completely because the pre-condition of that patch is to hold the request queue's refcount before calling block layer API, and there is still a small window between blk_cleanup_queue() and removing the ns f

[PATCH V6 1/9] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-16 Thread Ming Lei
Just like aio/io_uring, we need to grab 2 refcounts for queuing one request, one is for submission, another is for completion. If the request isn't queued from plug code path, the refcount grabbed in generic_make_request() serves for submission. In theory, this refcount should have been rel

[PATCH V6 6/9] blk-mq: always free hctx after request queue is freed

2019-04-16 Thread Ming Lei
In the normal queue cleanup path, hctx is released after the request queue is freed, see blk_mq_release(). However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because of hw queues shrinking. This easily causes a use-after-free, because one implicit rule is that it is safe to call almost

Re: [PATCH 1/1] scsi: ufs: Print real incorrect request response code

2019-04-15 Thread Martin K. Petersen
Stanley, > If UFS device responds an unknown request response code, we can not > know what it was via logs because the code is replaced by "DID_ERROR > << 16" before log printing. > > Fix this to provide precise request response code information for > easie

RE: [PATCH 1/1] scsi: ufs: Print real incorrect request response code

2019-04-15 Thread Winkler, Tomas
> On 4/15/19 5:23 AM, Stanley Chu wrote: > > If UFS device responds an unknown request response code, we can not > > know what it was via logs because the code is replaced by "DID_ERROR > > << 16" before log printing. > > > > Fix this to prov

Re: [PATCH 1/1] scsi: ufs: Print real incorrect request response code

2019-04-15 Thread Bart Van Assche
On 4/15/19 5:23 AM, Stanley Chu wrote: > If UFS device responds an unknown request response code, > we can not know what it was via logs because the code > is replaced by "DID_ERROR << 16" before log printing. > > Fix this to provide precise request response code

[PATCH 1/1] scsi: ufs: Print real incorrect request response code

2019-04-15 Thread Stanley Chu
If the UFS device responds with an unknown request response code, we cannot know what it was from the logs because the code is replaced by "DID_ERROR << 16" before log printing. Fix this to provide precise request response code information for easier issue breakdown. Signed-off-by: Stanley C

Re: [PATCH V5 6/9] blk-mq: always free hctx after request queue is freed

2019-04-13 Thread Ming Lei
On Fri, Apr 12, 2019 at 01:06:07PM +0200, Hannes Reinecke wrote: > On 4/12/19 5:30 AM, Ming Lei wrote: > > In normal queue cleanup path, hctx is released after request queue > > is freed, see blk_mq_release(). > > > > However, in __blk_mq_update_nr_hw_queues(), hctx may

Re: [PATCH V5 6/9] blk-mq: always free hctx after request queue is freed

2019-04-12 Thread Hannes Reinecke
On 4/12/19 5:30 AM, Ming Lei wrote: In normal queue cleanup path, hctx is released after request queue is freed, see blk_mq_release(). However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because of hw queues shrinking. This way is easy to cause use-after-free, because: one implicit

Re: [PATCH V5 1/9] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-12 Thread Hannes Reinecke
On 4/12/19 5:30 AM, Ming Lei wrote: Just like aio/io_uring, we need to grab 2 refcount for queuing one request, one is for submission, another is for completion. If the request isn't queued from plug code path, the refcount grabbed in generic_make_request() serves for submission. In t

Re: [PATCH V5 1/9] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-12 Thread Johannes Thumshirn
Looks good, Reviewed-by: Johannes Thumshirn -- Johannes Thumshirn, SUSE Labs Filesystems jthumsh...@suse.de +49 911 74053 689 SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürn

[PATCH V5 6/9] blk-mq: always free hctx after request queue is freed

2019-04-11 Thread Ming Lei
In the normal queue cleanup path, hctx is released after the request queue is freed, see blk_mq_release(). However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because of hw queues shrinking. This easily causes a use-after-free, because one implicit rule is that it is safe to call almost

[PATCH V5 1/9] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-11 Thread Ming Lei
Just like aio/io_uring, we need to grab 2 refcounts for queuing one request, one is for submission, another is for completion. If the request isn't queued from plug code path, the refcount grabbed in generic_make_request() serves for submission. In theory, this refcount should have been rel

Re: [PATCH V4 1/7] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-05 Thread Ming Lei
On Fri, Apr 05, 2019 at 05:26:24PM +0800, Dongli Zhang wrote: > Hi Ming, > > On 04/04/2019 04:43 PM, Ming Lei wrote: > > Just like aio/io_uring, we need to grab 2 refcount for queuing one > > request, one is for submission, another is for completion. > > > > If t

Re: [PATCH V4 1/7] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-05 Thread Dongli Zhang
Hi Ming, On 04/04/2019 04:43 PM, Ming Lei wrote: > Just like aio/io_uring, we need to grab two refcounts for queuing one > request, one is for submission, another is for completion. > > If the request isn't queued from plug code path, the refcount grabbed > in generic_make_

Re: [PATCH V4 1/7] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-04 Thread Bart Van Assche
On Thu, 2019-04-04 at 16:43 +0800, Ming Lei wrote: > Just like aio/io_uring, we need to grab two refcounts for queuing one > request, one is for submission, another is for completion. > > If the request isn't queued from plug code path, the refcount grabbed > in generic_make_

Re: [PATCH V4 1/7] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-04 Thread Ming Lei
On Thu, Apr 4, 2019 at 11:58 PM Bart Van Assche wrote: > > On Thu, 2019-04-04 at 16:43 +0800, Ming Lei wrote: > > Just like aio/io_uring, we need to grab two refcounts for queuing one > > request, one is for submission, another is for completion. > > > > If the reque

Re: [PATCH V4 1/7] blk-mq: grab .q_usage_counter when queuing request from plug code path

2019-04-04 Thread Bart Van Assche
On Thu, 2019-04-04 at 16:43 +0800, Ming Lei wrote: > Just like aio/io_uring, we need to grab two refcounts for queuing one > request, one is for submission, another is for completion. > > If the request isn't queued from plug code path, the refcount grabbed > in generic_make_

[PATCH v2 01/26] qedf: Do not retry ELS request if qedf_alloc_cmd fails.

2019-03-26 Thread Saurav Kashyap
From: Chad Dupuis If we cannot allocate an ELS middlepath request, simply fail instead of trying to delay and then reallocate. This delay logic is causing soft lockup messages: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [kworker/2:1:7639] Modules linked in: xt_CHECKSUM

Re: [PATCH] simscsi: use request tag instead of serial_number

2019-03-14 Thread Martin K. Petersen
Hannes, > Use the request tag for logging instead of the scsi command serial > number. Applied to 5.1/scsi-queue, thanks! -- Martin K. Petersen Oracle Linux Engineering

Re: [PATCH] simscsi: use request tag instead of serial_number

2019-03-12 Thread Christoph Hellwig
On Tue, Mar 12, 2019 at 09:08:12AM +0100, Hannes Reinecke wrote: > From: Hannes Reinecke > > Use the request tag for logging instead of the scsi command serial > number. > > Signed-off-by: Hannes Reinecke Looks fine. Although I'd just remove these debug printks

[PATCH] simscsi: use request tag instead of serial_number

2019-03-12 Thread Hannes Reinecke
From: Hannes Reinecke Use the request tag for logging instead of the scsi command serial number. Signed-off-by: Hannes Reinecke --- arch/ia64/hp/sim/simscsi.c | 7 --- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/arch/ia64/hp/sim/simscsi.c b/arch/ia64/hp/sim/simscsi.c

[PATCH 01/26] qedf: Do not retry ELS request if qedf_alloc_cmd fails.

2019-03-05 Thread Saurav Kashyap
From: Chad Dupuis If we cannot allocate an ELS middlepath request, simply fail instead of trying to delay and then reallocate. This delay logic is causing soft lockup messages: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [kworker/2:1:7639] Modules linked in: xt_CHECKSUM

Re: [PATCH 2/4] mvumi: use request tag instead of serial_number

2019-02-26 Thread Bart Van Assche
On Tue, 2019-02-26 at 15:56 +0100, Hannes Reinecke wrote: > Use the request tag for logging instead of the scsi command serial > number. Reviewed-by: Bart Van Assche

[PATCH 2/4] mvumi: use request tag instead of serial_number

2019-02-26 Thread Hannes Reinecke
From: Hannes Reinecke Use the request tag for logging instead of the scsi command serial number. Signed-off-by: Hannes Reinecke --- drivers/scsi/mvumi.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c index dbe753fba486

[PATCH v3 08/11] qla2xxx: Move marker request behind QPair

2019-02-15 Thread Himanshu Madhani
From: Quinn Tran Current code hard-codes the marker request to use request and response queue 0. This patch makes use of the qpair as the path to access the request/response queues. It allows the marker to be placed on any hardware queue. Signed-off-by: Quinn Tran Signed-off-by: Himanshu Madhani

[PATCH v2 09/12] qla2xxx: Move marker request behind QPair

2019-02-13 Thread Himanshu Madhani
From: Quinn Tran Current code hard-codes the marker request to use request and response queue 0. This patch makes use of the qpair as the path to access the request/response queues. It allows the marker to be placed on any hardware queue. Signed-off-by: Quinn Tran Signed-off-by: Himanshu Madhani

Re: Request for opinion: ufsutils

2019-02-13 Thread Pedro Sousa
rd way >> to >> interact with the ufs driver. >> >> During bring up/debug without link to device: >> >> - My proposal is to create a debugFS interface: >>uic-cmd/ >>├── dme-get >>├── dme-set >>├── dme-enable >>├

Re: Request for opinion: ufsutils

2019-02-13 Thread Greg KH
al is to create a debugFS interface: >uic-cmd/ >├── dme-get >├── dme-set >├── dme-enable >├── dme-reset >└── ... >test-feature/ >├── PA_tf >└── ... >error-dump/ >├── UECPA >├── UECDL >└── ... >...

Re: Request for opinion: ufsutils

2019-02-13 Thread Pedro Sousa
interact with the ufs driver. During bring up/debug without link to device: - My proposal is to create a debugFS interface: uic-cmd/ ├── dme-get ├── dme-set ├── dme-enable ├── dme-reset └── ... test-feature/ ├── PA_tf └── ... error-dump/ ├── UECPA ├── UECDL └── ... ... I kindly request feedback from the ecosystem. Thanks, Pedro
