On 10/8/19 9:50 AM, Douglas Gilbert wrote:
> The introduced request state machine is not wired in so that
> the size of one of the following patches is reduced. Bit
> operation defines for the request and file descriptor level
> are also introduced. Minor rework of sg_rd_appen
Hi,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on scsi/for-next]
[cannot apply to v5.4-rc3 next-20191014]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify
From: Balsundar P
After IOP reset completion, the AIF request command is not issued to the
controller. The driver schedules a worker thread to issue an AIF request
command after IOP reset completion.
Signed-off-by: Balsundar P
---
drivers/scsi/aacraid/aacraid.h | 18 +-
drivers/scsi
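To illustrate the deferral described in the aacraid patch above, here is a minimal standalone sketch: instead of issuing the AIF request inline at IOP reset completion, the work is handed to a worker that runs afterwards. A plain pthread stands in for the kernel workqueue, and all names (aac_adapter, issue_aif_request, ...) are illustrative, not aacraid's actual symbols.

#include <pthread.h>
#include <stdio.h>

struct aac_adapter {
    int id;
};

static void issue_aif_request(struct aac_adapter *dev)
{
    printf("adapter %d: AIF request sent to controller\n", dev->id);
}

/* Worker body: runs outside the reset path. */
static void *post_reset_worker(void *arg)
{
    issue_aif_request(arg);
    return NULL;
}

/* Called when IOP reset completes: defer instead of issuing inline. */
static int iop_reset_done(struct aac_adapter *dev, pthread_t *worker)
{
    return pthread_create(worker, NULL, post_reset_worker, dev);
}

int main(void)
{
    struct aac_adapter dev = { .id = 0 };
    pthread_t worker;

    if (iop_reset_done(&dev, &worker))
        return 1;
    pthread_join(worker, NULL);
    return 0;
}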
The introduced request state machine is not wired in so that
the size of one of the following patches is reduced. Bit
operation defines for the request and file descriptor level
are also introduced. Minor rework of sg_rd_append() function.
Signed-off-by: Douglas Gilbert
---
drivers/scsi/sg.c
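As a rough illustration of the "bit operation defines" mentioned above, here is a small standalone sketch: per-request and per-file-descriptor state is kept as flag bits in a bitmap and queried through helpers. The flag names follow the SG_FRQ_*/SG_FFD_* convention of the series but the specific bits are illustrative, and the kernel driver would use the atomic set_bit()/test_bit() helpers rather than these plain ones.

#include <stdbool.h>
#include <stdio.h>

/* Request-level flag bit positions (illustrative). */
enum sg_request_flags {
    SG_FRQ_IS_ORPHAN,    /* owning file descriptor has gone away */
    SG_FRQ_SYNC_INVOC,   /* synchronous (blocking) invocation */
    SG_FRQ_DIO_IN_USE,   /* direct I/O mapping in use */
};

/* File-descriptor-level flag bit positions (illustrative). */
enum sg_fd_flags {
    SG_FFD_FORCE_PACKID, /* pack_id must match on read */
    SG_FFD_CMD_Q,        /* command queuing enabled */
};

static void sg_set_bit(unsigned int nr, unsigned long *bm)
{
    *bm |= 1UL << nr;
}

static bool sg_test_bit(unsigned int nr, const unsigned long *bm)
{
    return (*bm >> nr) & 1UL;
}

int main(void)
{
    unsigned long frq_bm = 0, ffd_bm = 0;

    sg_set_bit(SG_FRQ_SYNC_INVOC, &frq_bm);
    sg_set_bit(SG_FFD_CMD_Q, &ffd_bm);

    printf("req: sync=%d orphan=%d; fd: cmd_q=%d\n",
           sg_test_bit(SG_FRQ_SYNC_INVOC, &frq_bm),
           sg_test_bit(SG_FRQ_IS_ORPHAN, &frq_bm),
           sg_test_bit(SG_FFD_CMD_Q, &ffd_bm));
    return 0;
}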
There is a race between the fipvlan request and response paths.
=
qedf_fcoe_process_vlan_resp:113]:2: VLAN response, vid=0xffd.
qedf_initiate_fipvlan_req:165]:2: vlan = 0x6ffd already set.
qedf_set_vlan_id:139]:2: Setting vlan_id=0ffd prio=3.
==
The request thread sees that the vlan is already set and
- On some setups fipvlan can be retried for a long duration,
and because the connection to the switch was not there it was
not getting any reply.
- During unload this thread was hanging.
Problem Resolution:
Check if unload is in progress and, if so, quit the fipvlan thread.
Signed-off-by: Saurav Kashyap
---
driv
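To make the resolution above concrete, here is a small standalone sketch (all names illustrative, not qedf's actual code): the discovery loop checks an "unloading" flag on every retry and bails out instead of retrying indefinitely when the switch never answers.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fcoe_ctx {
    atomic_bool unloading;  /* set by the driver unload path */
    int vlan_id;            /* 0 == not discovered yet */
};

/* Stand-in for sending a FIP VLAN request; never answered in this demo. */
static bool send_vlan_request(struct fcoe_ctx *ctx)
{
    (void)ctx;
    return false;
}

/* Returns 0 once a VLAN is known, -1 if unload interrupts discovery. */
static int fipvlan_discover(struct fcoe_ctx *ctx, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        if (atomic_load(&ctx->unloading)) {
            fprintf(stderr, "unload in progress, stopping FIP VLAN retries\n");
            return -1;
        }
        if (ctx->vlan_id)   /* response path may have set it already */
            return 0;
        if (send_vlan_request(ctx))
            return 0;
    }
    return -1;
}

int main(void)
{
    struct fcoe_ctx ctx = { .vlan_id = 0 };

    atomic_init(&ctx.unloading, true);  /* simulate unload racing in */
    fipvlan_discover(&ctx, 10);
    return 0;
}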
/Steffen-Maier/scsi-core-fix-missing-cleanup_rq-for-SCSI-hosts-without-request-batching/20190808-052017
config: i386-randconfig-d003-201931 (attached as .config)
compiler: gcc-7 (Debian 7.4.0-10) 7.4.0
reproduce:
# save the attached .config to linux build tree
make ARCH=i386
If you
-Maier/scsi-core-fix-missing-cleanup_rq-for-SCSI-hosts-without-request-batching/20190808-052017
config: s390-allmodconfig (attached as .config)
compiler: s390-linux-gcc (GCC) 7.4.0
reproduce:
wget
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O
~/bin/make.cross
-Maier/scsi-core-fix-missing-cleanup_rq-for-SCSI-hosts-without-request-batching/20190808-052017
config: riscv-defconfig (attached as .config)
compiler: riscv64-linux-gcc (GCC) 7.4.0
reproduce:
wget
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O
~/bin/make.cross
-Maier/scsi-core-fix-missing-cleanup_rq-for-SCSI-hosts-without-request-batching/20190808-052017
config: riscv-defconfig (attached as .config)
compiler: riscv64-linux-gcc (GCC) 7.4.0
reproduce:
wget
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O
~/bin/make.cross
Ming,
>> + .cleanup_rq = scsi_cleanup_rq,
>> .busy = scsi_mq_lld_busy,
>> .map_queues = scsi_map_queues,
>> };
>
> This one is a cross-tree thing, either scsi/5.4/scsi-queue needs to
> pull for-5.4/block, or do it after both land linus tree.
I'll set up
On Wed, Aug 7, 2019 at 10:55 PM Steffen Maier wrote:
>
> This was missing from scsi_mq_ops_no_commit of linux-next commit
> 8930a6c20791 ("scsi: core: add support for request batching")
> from Martin's scsi/5.4/scsi-queue or James' scsi/misc.
>
> See also
On 8/7/19 7:49 AM, Steffen Maier wrote:
Hi James, Martin, Paolo, Ming,
multipathing with linux-next has been broken since 20190723 in our CI.
The patches fix a memleak and a severe dh/multipath functional regression.
It would be nice if we could get them to 5.4/scsi-queue and also next.
>
I would ha
feature.
>
> Steffen Maier (2):
> scsi: core: fix missing .cleanup_rq for SCSI hosts without request
> batching
> scsi: core: fix dh and multipathing for SCSI hosts without request
> batching
>
> drivers/scsi/scsi_lib.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
Reviewed-by: Paolo Bonzini
This was missing from scsi_device_from_queue() due to the introduction
of another new scsi_mq_ops_no_commit of linux-next commit
8930a6c20791 ("scsi: core: add support for request batching")
from Martin's scsi/5.4/scsi-queue or James' scsi/misc.
Only devicehandle
request
batching
scsi: core: fix dh and multipathing for SCSI hosts without request
batching
drivers/scsi/scsi_lib.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--
2.17.1
This was missing from scsi_mq_ops_no_commit of linux-next commit
8930a6c20791 ("scsi: core: add support for request batching")
from Martin's scsi/5.4/scsi-queue or James' scsi/misc.
See also linux-next commit b7e9e1fb7a92 ("scsi: implement .cleanup_rq
callback") fr
Dear Friend,
I need you to please let me know if there are fast-growing investments
in your country in which I can invest money. I have access to a
huge amount of money, which I want to invest in your country. I want
to know if you can be an agent/partner to me and I will give you a
commission
The driver gets a request frame from the free pool of DMA-able
request frames, fills in the required information, and passes
the address of the frame to the IOC/FW to pull the complete request
frame. In certain places the driver used the request frame allocated
from the free pool without completely
On Fri, Jul 26, 2019 at 06:20:46PM +0200, Benjamin Block wrote:
> Hey Ming Lei,
>
> On Sat, Jul 20, 2019 at 11:06:35AM +0800, Ming Lei wrote:
> > Hi,
> >
> > When one request is dispatched to LLD via dm-rq, if the result is
> > BLK_STS_*RESOURCE, dm-rq will f
Hey Ming Lei,
On Sat, Jul 20, 2019 at 11:06:35AM +0800, Ming Lei wrote:
> Hi,
>
> When one request is dispatched to LLD via dm-rq, if the result is
> BLK_STS_*RESOURCE, dm-rq will free the request. However, LLD may allocate
> private data for this request, so this way will cause
Hi,
When one request is dispatched to LLD via dm-rq, if the result is
BLK_STS_*RESOURCE, dm-rq will free the request. However, the LLD may have
allocated private data for this request, so this causes a memory leak.
Add a .cleanup_rq() callback and implement it in SCSI to fix the issue,
since SCSI
Hi,
When one request is dispatched to LLD via dm-rq, if the result is
BLK_STS_*RESOURCE, dm-rq will free the request. However, the LLD may have
allocated private data for this request, so this causes a memory leak.
Add a .cleanup_rq() callback and implement it in SCSI to fix the issue.
And SCSI
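For illustration, a kernel-style sketch of the shape such a .cleanup_rq hook could take on the SCSI side: undo per-request preparation when dm-rq frees a request it will not requeue. This is a simplified sketch based on the description above, not necessarily the exact mainline implementation; only the .cleanup_rq/.busy/.map_queues hook names are taken from the quoted diff.

static void scsi_cleanup_rq(struct request *rq)
{
    if (rq->rq_flags & RQF_DONTPREP) {
        /* Release whatever prepare-time setup allocated for this request. */
        scsi_mq_uninit_cmd(blk_mq_rq_to_pdu(rq));
        rq->rq_flags &= ~RQF_DONTPREP;
    }
}

static const struct blk_mq_ops scsi_mq_ops = {
    /* ... */
    .cleanup_rq  = scsi_cleanup_rq,
    .busy        = scsi_mq_lld_busy,
    .map_queues  = scsi_map_queues,
};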
When tracing instances where we open and close WKA ports, we also pass the
request-ID of the respective FSF command.
But after successfully sending the FSF command we must not use the
request-object anymore, as this might result in a use-after-free (see
"zfcp: fix request object use-after
With a recent change to our send path for FSF commands we introduced a
possible use-after-free of request-objects, that might further lead to zfcp
crafting bad requests, which the FCP channel correctly complains about with
an error (FSF_PROT_SEQ_NUMB_ERROR). This error is then handled by an
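A small standalone sketch of the rule stated above (illustrative names, not zfcp's actual code): anything needed after the send, here the request ID used for tracing, is copied out before the request is handed over, because the completion path may free the object at any point after that.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct fsf_req {
    uint64_t req_id;
};

/* Stand-in for handing the request to the channel; after this returns,
 * the completion path owns the object and may already have freed it. */
static int send_to_channel(struct fsf_req *req)
{
    free(req);  /* models the concurrent completion/free */
    return 0;
}

static int open_wka_port(void)
{
    struct fsf_req *req = calloc(1, sizeof(*req));
    uint64_t req_id;

    if (!req)
        return -1;
    req->req_id = 0x1234;
    req_id = req->req_id;   /* copy BEFORE the send */

    if (send_to_channel(req))
        return -1;

    /* Trace with the saved copy; req itself must not be touched here. */
    printf("opened WKA port, FSF request id 0x%llx\n",
           (unsigned long long)req_id);
    return 0;
}

int main(void)
{
    return open_wka_port() ? 1 : 0;
}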
--- a/drivers/scsi/bnx2fc/bnx2fc_io.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_io.c
@@ -1048,6 +1048,9 @@ int bnx2fc_initiate_cleanup(struct bnx2fc_cmd *io_req)
/* Obtain free SQ entry */
bnx2fc_add_2_sq(tgt, xid);
+ /* Set flag that cleanup request is pending with the firmware */
+ se
This code refactoring introduces function pointers.
The host uses Request Descriptors of different types for posting an entry
onto a request queue. Based on controller type and capabilities, the
host can also use atomic descriptors as an alternative to normal
descriptors.
Using function pointers will avoid if-else
with. Multiple requests can be combined
with shared requests.
As a further optimisation, an array of SCSI commands can
be passed from the user space via the controlling object's
request "pointer". Without that, the multiple request
logic would need to visit the user space once per com
From: Suganath Prabu
This code refactoring introduces function pointers.
The host uses Request Descriptors of different types for posting an entry
onto a request queue. Based on controller type and capabilities, the
host can also use atomic descriptors as an alternative to normal
descriptors.
Using function
Hi Suganath,
I love your patch! Perhaps something to improve:
[auto build test WARNING on scsi/for-next]
[also build test WARNING on v5.1 next-20190517]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/c
If the Aero HBA supports Atomic Request Descriptors, it sets the Atomic
Request Descriptor Capable bit in the IOCCapabilities field of the
IOCFacts Reply message. Driver uses an Atomic Request Descriptor
as an alternative method for posting an entry onto a request queue.
The posting of an Atomic
This code refactoring introduces function pointers.
The host uses Request Descriptors of different types for posting an entry
onto a request queue. Based on controller type and capabilities, the
host can also use atomic descriptors as an alternative to normal
descriptors.
Using function pointers will avoid if-else
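To show the shape of that refactor, a small standalone sketch (names and the capability bit are illustrative, not mpt3sas's actual symbols): the descriptor-posting routine is chosen once, based on a capability bit reported by the controller, and the hot path then calls it through a function pointer with no if/else.

#include <stdint.h>
#include <stdio.h>

#define IOC_CAP_ATOMIC_REQ_DESC (1u << 3)   /* illustrative capability bit */

struct ioc {
    uint32_t capabilities;
    void (*put_smid_scsi_io)(struct ioc *ioc, uint16_t smid, uint16_t handle);
};

static void put_smid_default(struct ioc *ioc, uint16_t smid, uint16_t handle)
{
    (void)ioc;
    printf("post 64-bit request descriptor: smid=%d handle=%d\n", smid, handle);
}

static void put_smid_atomic(struct ioc *ioc, uint16_t smid, uint16_t handle)
{
    (void)ioc;
    (void)handle;   /* atomic descriptors are 32-bit and carry no handle */
    printf("post atomic request descriptor: smid=%d\n", smid);
}

static void ioc_init(struct ioc *ioc, uint32_t caps)
{
    ioc->capabilities = caps;
    /* Chosen once at init time, based on what the controller reports. */
    ioc->put_smid_scsi_io = (caps & IOC_CAP_ATOMIC_REQ_DESC) ?
                            put_smid_atomic : put_smid_default;
}

int main(void)
{
    struct ioc ioc;

    ioc_init(&ioc, IOC_CAP_ATOMIC_REQ_DESC);
    ioc.put_smid_scsi_io(&ioc, 42, 7);  /* hot path: no if/else */
    return 0;
}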
Dear Friend,
I am Mr Ahmed Zama, Manager, Auditing and Accountancy Department, UBA
Bank, Burkina Faso.
I am writing to seek your highly esteemed consent/assistance in
a lasting business relationship of mutual benefit involving €15
million for investment in your country, under a joint
On 4/29/19 6:52 PM, Ming Lei wrote:
> Just like aio/io_uring, we need to grab 2 refcount for queuing one
> request, one is for submission, another is for completion.
>
> If the request isn't queued from plug code path, the refcount grabbed
> in generic_make_request() serve
Just like aio/io_uring, we need to grab two refcounts for queuing one
request: one is for submission, another is for completion.
If the request isn't queued from the plug code path, the refcount grabbed
in generic_make_request() serves for submission. In theory, this
refcount should have been rel
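The refcount rule above can be illustrated with a small standalone sketch (the block layer actually uses percpu refs via blk_queue_enter()/blk_queue_exit(); this only shows the shape of the idea): one reference is taken for submission and one for completion, and each path drops exactly one.

#include <stdatomic.h>
#include <stdio.h>

struct queue {
    atomic_int refs;
};

static void queue_get(struct queue *q)
{
    atomic_fetch_add(&q->refs, 1);
}

static void queue_put(struct queue *q)
{
    if (atomic_fetch_sub(&q->refs, 1) == 1)
        printf("last reference dropped, queue may now be freed\n");
}

static void submit_request(struct queue *q)
{
    queue_get(q);   /* reference for the submission path */
    queue_get(q);   /* reference for the completion path */

    /* ... dispatch the request to the driver ... */

    queue_put(q);   /* submission path is done */
}

static void complete_request(struct queue *q)
{
    /* ... end the request ... */
    queue_put(q);   /* completion path is done */
}

int main(void)
{
    struct queue q;

    atomic_init(&q.refs, 1);    /* creator's reference */
    submit_request(&q);
    complete_request(&q);
    queue_put(&q);              /* creator drops its reference */
    return 0;
}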
In normal queue cleanup path, hctx is released after request queue
is freed, see blk_mq_release().
However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because
of hw queues shrinking. This can easily cause a use-after-free,
because one implicit rule is that it is safe to call almost
On Mon, Apr 29, 2019 at 11:09:39AM -0700, Bart Van Assche wrote:
> On Sun, 2019-04-28 at 16:14 +0800, Ming Lei wrote:
> > Just like aio/io_uring, we need to grab 2 refcount for queuing one
> > request, one is for submission, another is for completion.
> >
> > If the req
On Sun, 2019-04-28 at 16:14 +0800, Ming Lei wrote:
> Just like aio/io_uring, we need to grab 2 refcount for queuing one
> request, one is for submission, another is for completion.
>
> If the request isn't queued from plug code path, the refcount grabbed
> in generic_make_
On Sun, Apr 28, 2019 at 02:14:26PM +0200, Christoph Hellwig wrote:
> On Sun, Apr 28, 2019 at 04:14:06PM +0800, Ming Lei wrote:
> > In normal queue cleanup path, hctx is released after request queue
> > is freed, see blk_mq_release().
> >
> > However, in __blk_mq_update_
On Sun, Apr 28, 2019 at 04:14:06PM +0800, Ming Lei wrote:
> In normal queue cleanup path, hctx is released after request queue
> is freed, see blk_mq_release().
>
> However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because
> of hw queues shrinking. This way is easy to
Looks good,
Reviewed-by: Christoph Hellwig
In normal queue cleanup path, hctx is released after request queue
is freed, see blk_mq_release().
However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because
of hw queues shrinking. This can easily cause a use-after-free,
because one implicit rule is that it is safe to call almost
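A standalone sketch of the lifetime rule being discussed (illustrative only, using a plain refcount in place of the kernel's kobject/queue machinery): resources that a still-referenced parent may touch are freed from the object's own release handler, which runs when the last reference is dropped, rather than eagerly when the number of hardware queues shrinks.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct hw_ctx {
    atomic_int refs;
    int *cpumask;   /* resource the request queue may still read */
};

static void hw_ctx_release(struct hw_ctx *hctx)
{
    /* Free resources here, not in the hw-queue shrink path. */
    free(hctx->cpumask);
    free(hctx);
    printf("hctx released\n");
}

static void hw_ctx_put(struct hw_ctx *hctx)
{
    if (atomic_fetch_sub(&hctx->refs, 1) == 1)
        hw_ctx_release(hctx);
}

int main(void)
{
    struct hw_ctx *hctx = calloc(1, sizeof(*hctx));

    if (!hctx)
        return 1;
    atomic_init(&hctx->refs, 2);        /* request queue + shrink path */
    hctx->cpumask = calloc(4, sizeof(int));

    hw_ctx_put(hctx);   /* nr_hw_queues shrink drops its reference ... */
    /* ... but the request queue can still dereference hctx safely ... */
    hw_ctx_put(hctx);   /* queue teardown drops the last reference */
    return 0;
}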
Just like aio/io_uring, we need to grab two refcounts for queuing one
request: one is for submission, another is for completion.
If the request isn't queued from the plug code path, the refcount grabbed
in generic_make_request() serves for submission. In theory, this
refcount should have been rel
On Sat, Apr 27, 2019 at 06:45:43AM +0800, Ming Lei wrote:
> Still no difference between your suggestion and the way in this patch, given
> driver specific change is a must. Even it is more clean to hold the
> queue refcount by drivers explicitly because we usually do the get/put pair
> in one place
On Fri, Apr 26, 2019 at 10:04:23AM -0700, Bart Van Assche wrote:
> On Fri, 2019-04-26 at 17:11 +0200, Christoph Hellwig wrote:
> > On Thu, Apr 25, 2019 at 09:00:31AM +0800, Ming Lei wrote:
> > > The issue is driver(NVMe) specific, the race window is just
> > > between blk_cleanup_queue() an
On Fri, Apr 26, 2019 at 05:11:14PM +0200, Christoph Hellwig wrote:
> On Thu, Apr 25, 2019 at 09:00:31AM +0800, Ming Lei wrote:
> > The issue is driver(NVMe) specific, the race window is just
> > between blk_cleanup_queue() and removing the ns from the controller namespace
> > list in nvme_ns
On Fri, 2019-04-26 at 17:11 +0200, Christoph Hellwig wrote:
> On Thu, Apr 25, 2019 at 09:00:31AM +0800, Ming Lei wrote:
> > The issue is driver(NVMe) specific, the race window is just
> > between blk_cleanup_queue() and removing the ns from the controller namespace
> > list in nvme_ns_remove
On Thu, Apr 25, 2019 at 09:00:31AM +0800, Ming Lei wrote:
> The issue is driver(NVMe) specific, the race window is just
> between blk_cleanup_queue() and removing the ns from the controller namespace
> list in nvme_ns_remove()
And I wouldn't be surprised if others have the same issue.
>
>
> > For other callers of blk_mq_sched_insert_requests(), it is guaranteed
> > that request queue's ref is held.
>
> In both Linus' tree and Jens' for-5.2 tree I only see these two
> callers of blk_mq_sched_insert_requests. What am I missing?
OK, what I meant is that
On Thu, Apr 25, 2019 at 08:53:34AM +0800, Ming Lei wrote:
> It isn't in other callers of blk_mq_sched_insert_requests(), it is just
> needed in some corner case like flush plug context.
>
> For other callers of blk_mq_sched_insert_requests(), it is guaranteed
> that request
On Wed, Apr 24, 2019 at 06:27:46PM +0200, Christoph Hellwig wrote:
> On Wed, Apr 24, 2019 at 07:02:21PM +0800, Ming Lei wrote:
> > Hennes reported the following kernel oops:
>
> Hannes?
>
> > + if (!blk_get_queue(ns->queue)) {
> > + ret = -ENXIO;
> > + goto out_free_queue;
>
don't we push this into blk_mq_sched_insert_requests? Yes, it
> would need a request_queue argument, but that still seems saner
> than duplicating it in both callers.
It isn't in other callers of blk_mq_sched_insert_requests(), it is just
needed in some corner case like flush plug context.
For other callers of blk_mq_sched_insert_requests(), it is guaranteed
that request queue's ref is held.
Thanks,
Ming
On Wed, Apr 24, 2019 at 07:02:21PM +0800, Ming Lei wrote:
> Hennes reported the following kernel oops:
Hannes?
> + if (!blk_get_queue(ns->queue)) {
> + ret = -ENXIO;
> + goto out_free_queue;
> + }
If we always need to hold a reference, shouldn't blk_mq_init_queue
On Wed, Apr 24, 2019 at 07:02:13PM +0800, Ming Lei wrote:
> if (rq->mq_hctx != this_hctx || rq->mq_ctx != this_ctx) {
> if (this_hctx) {
> trace_block_unplug(this_q, depth,
> !from_schedule);
> +
> + perc
On 4/24/19 1:02 PM, Ming Lei wrote:
In normal queue cleanup path, hctx is released after request queue
is freed, see blk_mq_release().
However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because
of hw queues shrinking. This can easily cause a use-after-free,
because one implicit
Patch "blk-mq: free hw queue's resource in hctx's release handler"
should make this issue quite difficult to trigger. However, it can't
kill the issue completely because the pre-condition of that patch is to
hold the request queue's refcount before calling block layer API, and
there is still a small window between blk_cleanup_queue() and removing
the ns f
In normal queue cleanup path, hctx is released after request queue
is freed, see blk_mq_release().
However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because
of hw queues shrinking. This can easily cause a use-after-free,
because one implicit rule is that it is safe to call almost
Just like aio/io_uring, we need to grab two refcounts for queuing one
request: one is for submission, another is for completion.
If the request isn't queued from the plug code path, the refcount grabbed
in generic_make_request() serves for submission. In theory, this
refcount should have been rel
tx_list
etc).
OK, the name 'unused' looks better.
And I would deallocate q->queue_hw_ctx in blk_mq_exit_queue() to make things
more consistent.
No, that is wrong.
The request queue's refcount is often held when blk_cleanup_queue() is running,
and blk_mq_exit_queu
t; > > > blk_mq_realloc_hw_ctxs().
> > > >
> > > > However, I would rename the 'dead' elements to 'unused' (ie
> > > > unused_hctx_list
> > > > etc).
> > >
> > > OK, looks the name of 'unused' is better.
>
ware context elements, given that they might be reassigned during
> > > blk_mq_realloc_hw_ctxs().
> > >
> > > However, I would rename the 'dead' elements to 'unused' (ie
> > > unused_hctx_list
> > > etc).
> >
> > OK,
ht be reassigned during
blk_mq_realloc_hw_ctxs().
However, I would rename the 'dead' elements to 'unused' (ie unused_hctx_list
etc).
OK, the name 'unused' looks better.
And I would deallocate q->queue_hw_ctx in blk_mq_exit_queue() to make things
more consist
> > > On 4/17/19 5:44 AM, Ming Lei wrote:
> > > > > In normal queue cleanup path, hctx is released after request queue
> > > > > is freed, see blk_mq_release().
> > > > >
> > > > > However, in __blk_mq_update_nr_hw_queues(), hctx
On 4/22/19 5:30 AM, Ming Lei wrote:
On Wed, Apr 17, 2019 at 08:59:43PM +0800, Ming Lei wrote:
On Wed, Apr 17, 2019 at 02:08:59PM +0200, Hannes Reinecke wrote:
On 4/17/19 5:44 AM, Ming Lei wrote:
In normal queue cleanup path, hctx is released after request queue
is freed, see blk_mq_release
On Wed, Apr 17, 2019 at 08:59:43PM +0800, Ming Lei wrote:
> On Wed, Apr 17, 2019 at 02:08:59PM +0200, Hannes Reinecke wrote:
> > On 4/17/19 5:44 AM, Ming Lei wrote:
> > > In normal queue cleanup path, hctx is released after request queue
> > > is freed, see blk_mq_rele
ialized namespaces.
>
> Patch "blk-mq: free hw queue's resource in hctx's release handler"
> should make this issue quite difficult to trigger. However it can't
> kill the issue completely because pre-condition of that patch is to
> hold request queue
On Wed, Apr 17, 2019 at 02:08:59PM +0200, Hannes Reinecke wrote:
> On 4/17/19 5:44 AM, Ming Lei wrote:
> > In normal queue cleanup path, hctx is released after request queue
> > is freed, see blk_mq_release().
> >
> > However, in __blk_mq_update_nr_hw_queues(), hctx may
Patch "blk-mq: free hw queue's resource in hctx's release handler"
should make this issue quite difficult to trigger. However, it can't
kill the issue completely because the pre-condition of that patch is to
hold the request queue's refcount before calling block layer API, and
there is still a small window between blk_
On 4/17/19 5:44 AM, Ming Lei wrote:
In normal queue cleanup path, hctx is released after request queue
is freed, see blk_mq_release().
However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because
of hw queues shrinking. This way is easy to cause use-after-free,
because: one implicit
Patch "blk-mq: free hw queue's resource in hctx's release handler"
should make this issue quite difficult to trigger. However, it can't
kill the issue completely because the pre-condition of that patch is to
hold the request queue's refcount before calling block layer API, and
there is still a small window between blk_cleanup_queue() and removing
the ns f
Just like aio/io_uring, we need to grab two refcounts for queuing one
request: one is for submission, another is for completion.
If the request isn't queued from the plug code path, the refcount grabbed
in generic_make_request() serves for submission. In theory, this
refcount should have been rel
In normal queue cleanup path, hctx is released after request queue
is freed, see blk_mq_release().
However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because
of hw queues shrinking. This can easily cause a use-after-free,
because one implicit rule is that it is safe to call almost
Stanley,
> If UFS device responds an unknown request response code, we can not
> know what it was via logs because the code is replaced by "DID_ERROR
> << 16" before log printing.
>
> Fix this to provide precise request response code information for
> easie
> On 4/15/19 5:23 AM, Stanley Chu wrote:
> > If UFS device responds an unknown request response code, we can not
> > know what it was via logs because the code is replaced by "DID_ERROR
> > << 16" before log printing.
> >
> > Fix this to prov
On 4/15/19 5:23 AM, Stanley Chu wrote:
> If UFS device responds an unknown request response code,
> we can not know what it was via logs because the code
> is replaced by "DID_ERROR << 16" before log printing.
>
> Fix this to provide precise request response code
If the UFS device responds with an unknown request response code,
we cannot know what it was from the logs because the code
is replaced by "DID_ERROR << 16" before log printing.
Fix this to provide precise request response code information
for easier issue breakdown.
Signed-off-by: Stanley C
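A tiny standalone sketch of the change described above (constants and names illustrative, not the ufshcd code): in the default case the raw response code is logged before it is collapsed into DID_ERROR, so the log still records what the device actually returned.

#include <stdint.h>
#include <stdio.h>

#define DID_OK      0x00
#define DID_ERROR   0x07

static int rsp_code_to_host_byte(uint8_t code)
{
    switch (code) {
    case 0x00:          /* target success */
        return DID_OK << 16;
    default:
        /* Log the unknown code itself, not just DID_ERROR. */
        fprintf(stderr, "unknown response code 0x%02x\n", (unsigned)code);
        return DID_ERROR << 16;
    }
}

int main(void)
{
    return rsp_code_to_host_byte(0x2a) == (DID_ERROR << 16) ? 0 : 1;
}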
On Fri, Apr 12, 2019 at 01:06:07PM +0200, Hannes Reinecke wrote:
> On 4/12/19 5:30 AM, Ming Lei wrote:
> > In normal queue cleanup path, hctx is released after request queue
> > is freed, see blk_mq_release().
> >
> > However, in __blk_mq_update_nr_hw_queues(), hctx may
On 4/12/19 5:30 AM, Ming Lei wrote:
In normal queue cleanup path, hctx is released after request queue
is freed, see blk_mq_release().
However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because
of hw queues shrinking. This can easily cause a use-after-free,
because one implicit
On 4/12/19 5:30 AM, Ming Lei wrote:
Just like aio/io_uring, we need to grab two refcounts for queuing one
request: one is for submission, another is for completion.
If the request isn't queued from the plug code path, the refcount grabbed
in generic_make_request() serves for submission. In t
Looks good,
Reviewed-by: Johannes Thumshirn
--
Johannes Thumshirn  SUSE Labs Filesystems
jthumsh...@suse.de  +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürn
In normal queue cleanup path, hctx is released after request queue
is freed, see blk_mq_release().
However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because
of hw queues shrinking. This can easily cause a use-after-free,
because one implicit rule is that it is safe to call almost
Just like aio/io_uring, we need to grab two refcounts for queuing one
request: one is for submission, another is for completion.
If the request isn't queued from the plug code path, the refcount grabbed
in generic_make_request() serves for submission. In theory, this
refcount should have been rel
On Fri, Apr 05, 2019 at 05:26:24PM +0800, Dongli Zhang wrote:
> Hi Ming,
>
> On 04/04/2019 04:43 PM, Ming Lei wrote:
> > Just like aio/io_uring, we need to grab 2 refcount for queuing one
> > request, one is for submission, another is for completion.
> >
> > If t
Hi Ming,
On 04/04/2019 04:43 PM, Ming Lei wrote:
> Just like aio/io_uring, we need to grab 2 refcount for queuing one
> request, one is for submission, another is for completion.
>
> If the request isn't queued from plug code path, the refcount grabbed
> in generic_make_
On Thu, 2019-04-04 at 16:43 +0800, Ming Lei wrote:
> Just like aio/io_uring, we need to grab 2 refcount for queuing one
> request, one is for submission, another is for completion.
>
> If the request isn't queued from plug code path, the refcount grabbed
> in generic_make_
On Thu, Apr 4, 2019 at 11:58 PM Bart Van Assche wrote:
>
> On Thu, 2019-04-04 at 16:43 +0800, Ming Lei wrote:
> > Just like aio/io_uring, we need to grab 2 refcount for queuing one
> > request, one is for submission, another is for completion.
> >
> > If the reque
On Thu, 2019-04-04 at 16:43 +0800, Ming Lei wrote:
> Just like aio/io_uring, we need to grab 2 refcount for queuing one
> request, one is for submission, another is for completion.
>
> If the request isn't queued from plug code path, the refcount grabbed
> in generic_make_
From: Chad Dupuis
If we cannot allocate an ELS middlepath request, simply fail instead
of trying to delay and then reallocate. This delay logic is causing
soft lockup messages:
NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [kworker/2:1:7639]
Modules linked in: xt_CHECKSUM
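A minimal standalone sketch of the behaviour change described above (names illustrative, not bnx2fc's actual code): if the ELS request cannot be allocated, the function fails immediately and lets the caller decide when to retry, instead of spinning with a delay in the submitting context, which is what produced the soft-lockup warnings.

#include <stdio.h>
#include <stdlib.h>

struct els_req {
    int xid;
};

/* Stand-in allocator; returns NULL when the request pool is exhausted. */
static struct els_req *alloc_els_req(void)
{
    return NULL;
}

static int initiate_els(void)
{
    struct els_req *req = alloc_els_req();

    if (!req) {
        /* Fail fast: no delay-and-retry loop in this context. */
        fprintf(stderr, "ELS request allocation failed\n");
        return -1;
    }

    /* ... build and send the ELS frame ... */
    free(req);
    return 0;
}

int main(void)
{
    return initiate_els() ? 1 : 0;
}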
Hannes,
> Use the request tag for logging instead of the scsi command serial
> number.
Applied to 5.1/scsi-queue, thanks!
--
Martin K. Petersen Oracle Linux Engineering
On Tue, Mar 12, 2019 at 09:08:12AM +0100, Hannes Reinecke wrote:
> From: Hannes Reinecke
>
> Use the request tag for logging instead of the scsi command serial
> number.
>
> Signed-off-by: Hannes Reinecke
Looks fine. Although I'd just remove these debug printks
From: Hannes Reinecke
Use the request tag for logging instead of the scsi command serial
number.
Signed-off-by: Hannes Reinecke
---
arch/ia64/hp/sim/simscsi.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/ia64/hp/sim/simscsi.c b/arch/ia64/hp/sim/simscsi.c
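As a sketch of what the logging change above amounts to (kernel-style and illustrative; the message text is made up), the command is identified by its block-layer request tag, which every command has, instead of the serial number that is being removed:

static void simscsi_log_cmd(struct scsi_cmnd *cmd)
{
    scmd_printk(KERN_INFO, cmd, "opcode %#x, request tag %d\n",
                cmd->cmnd[0], cmd->request->tag);
}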
From: Chad Dupuis
If we cannot allocate an ELS middlepath request, simply fail instead
of trying to delay and then reallocate. This delay logic is causing
soft lockup messages:
NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [kworker/2:1:7639]
Modules linked in: xt_CHECKSUM
On Tue, 2019-02-26 at 15:56 +0100, Hannes Reinecke wrote:
> Use the request tag for logging instead of the scsi command serial
> number.
Reviewed-by: Bart Van Assche
From: Hannes Reinecke
Use the request tag for logging instead of the scsi command serial
number.
Signed-off-by: Hannes Reinecke
---
drivers/scsi/mvumi.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c
index dbe753fba486
From: Hannes Reinecke
Use the request tag for logging instead of the scsi command serial
number.
Signed-off-by: Hannes Reinecke
---
drivers/scsi/mvumi.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c
index dbe753fba486
From: Quinn Tran
The current code hard-codes the marker request to use request
and response queue 0. This patch makes use of the
qpair as the path to access the request/response queues.
It allows the marker to be placed on any hardware queue.
Signed-off-by: Quinn Tran
Signed-off-by: Himanshu Madhani
From: Quinn Tran
The current code hard-codes the marker request to use request
and response queue 0. This patch makes use of the
qpair as the path to access the request/response queues.
It allows the marker to be placed on any hardware queue.
Signed-off-by: Quinn Tran
Signed-off-by: Himanshu Madhani
rd way
>> to
>> interact with the ufs driver.
>>
>> During bring up/debug without link to device:
>>
>> - My proposal is to create a debugFS interface:
>>uic-cmd/
>>├── dme-get
>>├── dme-set
>>├── dme-enable
>>├
al is to create a debugFS interface:
>uic-cmd/
>├── dme-get
>├── dme-set
>├── dme-enable
>├── dme-reset
>└── ...
>test-feature/
>├── PA_tf
>└── ...
>error-dump/
>├── UECPA
>├── UECDL
>└── ...
>...
>
>
interact with the ufs driver.
During bring up/debug without link to device:
- My proposal is to create a debugFS interface:
uic-cmd/
├── dme-get
├── dme-set
├── dme-enable
├── dme-reset
└── ...
test-feature/
├── PA_tf
└── ...
error-dump/
├── UECPA
├── UECDL
└── ...
...
I kindly request the ecosystem feedback.
Thanks,
Pedro
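To make the proposed layout above more tangible, here is a kernel-style sketch of how such a debugfs tree could be created. debugfs_create_dir()/debugfs_create_file() and DEFINE_SHOW_ATTRIBUTE() are the standard kernel helpers; the UFS-specific names and the register read are hypothetical, and this is only one possible shape, not an agreed interface.

#include <linux/debugfs.h>
#include <linux/seq_file.h>

struct ufs_hba;     /* opaque here */

static int uecpa_show(struct seq_file *s, void *unused)
{
    /* Read and print the UECPA error register (hypothetical content). */
    seq_puts(s, "0x0\n");
    return 0;
}
DEFINE_SHOW_ATTRIBUTE(uecpa);

static void ufs_debugfs_init(struct ufs_hba *hba)
{
    struct dentry *root = debugfs_create_dir("ufshcd", NULL);
    struct dentry *err_dir = debugfs_create_dir("error-dump", root);

    debugfs_create_file("UECPA", 0444, err_dir, hba, &uecpa_fops);
    /* uic-cmd/ and test-feature/ entries would be added the same way. */
}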