On 12/27/18 12:33 AM, James Smart wrote:
An hba-wide lock is taken in the nvme io completion routine. The lock
covers null'ing of the nrport pointer in the cmd structure.
The nrport member isn't necessary. After extracting the pointer from
the command, the pointer was dereferenced to get the fc
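A minimal sketch of the idea, with invented names (the illustrative_* identifiers are not the driver's actual symbols): the completion handler reads the remote port through the node pointer the command already carries, so neither the cached nrport field nor the adapter-wide lock protecting it is needed.

/* Sketch only: invented types; illustrates dropping the cached pointer. */
struct illustrative_remote_port {
	int port_state;
};

struct illustrative_node {
	struct illustrative_remote_port *remoteport;
};

struct illustrative_io_req {
	struct illustrative_node *ndlp;	/* set at submit, valid at completion */
};

static void illustrative_io_cmpl(struct illustrative_io_req *io)
{
	/* No hba-wide lock, no cached nrport to clear: go through the node. */
	struct illustrative_remote_port *rport = io->ndlp->remoteport;

	(void)rport;	/* ... completion bookkeeping would use rport ... */
}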
On 12/27/18 12:33 AM, James Smart wrote:
lpfc_nvme_prep_io_cmd() checks for a null pnode, but the caller
lpfc_nvme_fcp_io_submit() has already ensured it is non-null.
Remove the pnode null check.

Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc_nvme.c | 2 +-
1 fil
On 12/27/18 12:33 AM, James Smart wrote:
Currently, both NVME and SCSI get their IO buffers from separate
pools. XRI's are associated 1:1 with IO buffers, so XRI's are also
split between protocols.
Eliminate the independent pools and use a single pool. Each buffer
structure now has a common sect
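A rough sketch of the direction, with invented structure and field names (not the driver's actual definitions): one buffer type carries the fields both protocols need, plus a protocol-specific tail, so a single free pool and a single XRI range can serve SCSI and NVME.

/* Sketch only: invented names illustrating a shared SCSI/NVME IO buffer. */
#include <linux/list.h>
#include <linux/types.h>

struct scsi_cmnd;
struct nvmefc_fcp_req;

struct illustrative_io_buf {
	/* common section used by both protocols */
	struct list_head list;		/* free-pool linkage */
	u16 xri;			/* exchange resource indicator, 1:1 */
	dma_addr_t dma_handle;		/* DMA address of the payload SGL */

	/* protocol-specific tail */
	union {
		struct scsi_cmnd *scsi_cmd;		/* SCSI back-pointer */
		struct nvmefc_fcp_req *nvme_req;	/* NVME back-pointer */
	};
};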
On 12/27/18 12:33 AM, James Smart wrote:
There is an extra queue and msix vector for expresslane. Now that
the driver will be doing queues per cpu, this oddball queue is no
longer needed. Expresslane will utilize the normal per-cpu queues.
Updated debugfs sli4 queue output to go along with the ch
On 12/27/18 12:33 AM, James Smart wrote:
Currently, nvme and fcp each have their own concept of
io_channels, which is a combination of a wq/cq pair and an associated
msix vector. Different cpus would share an io_channel.
The driver is now moving to per-cpu wq/cq pairs and msix vectors.
The driver will still us
On 12/27/18 12:33 AM, James Smart wrote:
Once the IO buffer allocations were made shared, there was a single
XRI buffer list shared by all hardware queues. A single list isn't
great for performance when shared across the per-cpu hardware queues.
Create a separate XRI IO buffer get/put list for ea
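A minimal sketch, with invented names, of the per-queue split: each hardware queue keeps its own locked get and put lists, so allocation and free on the same cpu never contend with other queues.

/* Sketch only: invented types showing per-hardware-queue buffer lists. */
#include <linux/list.h>
#include <linux/spinlock.h>

struct illustrative_io_buf {
	struct list_head list;
};

struct illustrative_hdwq {
	spinlock_t get_lock;		/* protects get_list */
	struct list_head get_list;	/* buffers ready to allocate */
	spinlock_t put_lock;		/* protects put_list */
	struct list_head put_list;	/* buffers freed on this queue */
};

static struct illustrative_io_buf *
illustrative_get_buf(struct illustrative_hdwq *q)
{
	struct illustrative_io_buf *buf;
	unsigned long flags;

	spin_lock_irqsave(&q->get_lock, flags);
	if (list_empty(&q->get_list)) {
		/* refill the get list from the put list before giving up */
		spin_lock(&q->put_lock);
		list_splice_init(&q->put_list, &q->get_list);
		spin_unlock(&q->put_lock);
	}
	buf = list_first_entry_or_null(&q->get_list,
				       struct illustrative_io_buf, list);
	if (buf)
		list_del(&buf->list);
	spin_unlock_irqrestore(&q->get_lock, flags);
	return buf;
}

static void illustrative_put_buf(struct illustrative_hdwq *q,
				 struct illustrative_io_buf *buf)
{
	unsigned long flags;

	spin_lock_irqsave(&q->put_lock, flags);
	list_add_tail(&buf->list, &q->put_list);
	spin_unlock_irqrestore(&q->put_lock, flags);
}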
On 12/27/18 12:33 AM, James Smart wrote:
Both NVME and SCSI aborts are now processed off the CQ workqueue and
do not generate events for the slowpath any more.
Remove the unused event code.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc.h | 1 -
On 12/27/18 12:33 AM, James Smart wrote:
A scsi host lock is taken on every io completion to check whether
someone is waiting on the io completion. The lock doesn't have to be
taken on all ios, only those that have been marked as aborted.
Rework to avoid the lock on non-aborted ios.
Signed-off-
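A hedged sketch of the reworked completion check (the flag and field names are invented, not the driver's): test an aborted/waiting flag first and only take the host lock for commands that actually need the wait-completion handshake.

/* Sketch only: invented flag and structure names. */
#include <linux/completion.h>
#include <linux/spinlock.h>
#include <scsi/scsi_host.h>

#define ILLUSTRATIVE_IO_ABORT_WAIT	0x1	/* someone waits on this cmd */

struct illustrative_io_buf {
	unsigned int flags;
	struct completion *waitq;	/* set only by the abort path */
};

static void illustrative_io_done(struct Scsi_Host *shost,
				 struct illustrative_io_buf *buf)
{
	unsigned long flags;

	/* Fast path: never aborted, so no lock and no waiter check. */
	if (likely(!(buf->flags & ILLUSTRATIVE_IO_ABORT_WAIT)))
		return;

	/* Slow path: take the host lock only for aborted commands. */
	spin_lock_irqsave(shost->host_lock, flags);
	if (buf->waitq)
		complete(buf->waitq);
	spin_unlock_irqrestore(shost->host_lock, flags);
}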
On 12/27/18 12:33 AM, James Smart wrote:
Similar to the io execution path that reports cpu context
information, the debugfs routines for cpu information need to
be aligned with the new hardware queue implementation.
Convert the debugfs and nvme cpucheck statistics to report
information per Hardware Queu
On 12/27/18 12:33 AM, James Smart wrote:
Many io statistics were being sampled and saved using adapter-based
data structures. This was creating a lot of contention and cache
thrashing in the I/O path.
Move the statistics to the hardware queue data structures.
Given the per queue data structures, us
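A small sketch (names invented) of the statistics move: counters live in the per-hardware-queue structure and are bumped without atomics or adapter-wide locks, then folded together when reported.

/* Sketch only: invented names showing per-queue I/O counters. */
#include <linux/types.h>

struct illustrative_hdwq_stat {
	u64 io_cmpls;		/* completions seen on this queue */
	u64 input_requests;
	u64 output_requests;
};

struct illustrative_hdwq {
	struct illustrative_hdwq_stat stat;	/* touched only by this queue */
};

/* Hot path: plain increment, no atomic, no adapter-wide lock. */
static inline void illustrative_count_cmpl(struct illustrative_hdwq *q)
{
	q->stat.io_cmpls++;
}

/* Reporting path: fold the per-queue counters into one total. */
static u64 illustrative_total_cmpls(struct illustrative_hdwq *qs, int nr_hdwq)
{
	u64 total = 0;
	int i;

	for (i = 0; i < nr_hdwq; i++)
		total += qs[i].stat.io_cmpls;
	return total;
}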
On 12/27/18 12:33 AM, James Smart wrote:
SLI4 nvme functions are passing the SLI3 ring number when posting
wqe to hardware. This should be indicating the hardware queue to
use, not the ring number.
Replace ring number with the hardware queue that should be used.
Note: SCSI avoided this issue as
On 12/27/18 12:33 AM, James Smart wrote:
Now that the lower half has much better per-cpu parallelization
using the hardware queues, the SCSI MQ support needs to be tied
into it.
This involves the following mods:
- Rather than selecting SCSI MQ support at compile time, detect
support at driver
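One concrete piece of such a tie-in, sketched with real block-layer helpers behind an invented wrapper, and assuming a kernel of that era where scsi_cmnd still carries a request back-pointer: the hardware-queue index blk-mq chose for a request can be recovered from its unique tag and used to pick the driver's wq/cq pair.

/* Sketch only: illustrative wrapper around real blk-mq helpers. */
#include <linux/blk-mq.h>
#include <scsi/scsi_cmnd.h>

/* Return the blk-mq hardware-queue index the command was queued on. */
static u16 illustrative_cmd_to_hwq(struct scsi_cmnd *cmnd)
{
	u32 tag = blk_mq_unique_tag(cmnd->request);

	return blk_mq_unique_tag_to_hwq(tag);
}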
On 12/27/18 12:33 AM, James Smart wrote:
The XRI get/put lists were partitioned per hardware queue. However,
the adapter rarely had sufficient resources to give a large number
of resources per queue. As such, it became common for a cpu to
encounter a lack of XRI resource and request the upper io
On 12/27/18 12:33 AM, James Smart wrote:
Default behavior is to use the information from the upper io
stacks to select the hardware queue to use for io submission,
which typically has good cpu affinity.
However, the driver, when used on some variants of the upstream
kernel, has found queuing inf
On 12/27/18 12:33 AM, James Smart wrote:
The desired affinity for the hardware queue behavior is for
hdwq 0 to be affinitized with cpu 0, hdwq 1 to cpu 1, and so on.
The implementation so far does not do this if the number of
cpus is greater than the number of hardware queues (e.g. hardware
gre
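A trivial sketch of the intended mapping (function name invented): when there are more cpus than hardware queues, wrap round-robin so cpu N still lands on hdwq N for the first num_hdwq cpus.

/* Sketch only: cpu-to-hardware-queue index when cpus may exceed queues. */
static inline unsigned int illustrative_cpu_to_hdwq(unsigned int cpu,
						    unsigned int num_hdwq)
{
	/* cpu 0 -> hdwq 0, cpu 1 -> hdwq 1, ..., wrapping past num_hdwq */
	return num_hdwq ? cpu % num_hdwq : 0;
}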
On 12/27/18 12:33 AM, James Smart wrote:
So far, msix vector allocation assumed it would be 1:1 with
hardware queues. However, there are several reasons why fewer
MSIX vectors may be allocated than hardware queues such as the
platform being out of vectors or adapter limits being less than
cpu cou
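A hedged sketch using the real pci_alloc_irq_vectors() API (the surrounding names are invented): request one vector per hardware queue, accept fewer, and remember the granted count so later setup can share interrupt resources among queues.

/* Sketch only: illustrative allocation that tolerates fewer vectors. */
#include <linux/pci.h>

static int illustrative_alloc_vectors(struct pci_dev *pdev,
				      unsigned int num_hdwq,
				      unsigned int *granted)
{
	int vecs;

	/* Request one MSI-X vector per hardware queue, accept as few as 1. */
	vecs = pci_alloc_irq_vectors(pdev, 1, num_hdwq,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	if (vecs < 0)
		return vecs;

	/* If vecs < num_hdwq, several hardware queues must share a vector. */
	*granted = vecs;
	return 0;
}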
On 12/27/18 12:33 AM, James Smart wrote:
Review of the eq coalescing logic showed the code was a bit
fragmented. Sometimes it would save/set via an interrupt max
value, while in others it would do so via a usdelay. There were
also two places changing eq delay, one place that issued mailbox
comma
On 12/27/18 12:33 AM, James Smart wrote:
When driving high iop counts, auto_imax coalescing kicks in and drives
the performance to extremely small iops levels.
There are two issues:
1) auto_imax is enabled by default. The auto algorithm, when iops
gets high, divides the iops by the hdwq count
On 12/27/18 12:33 AM, James Smart wrote:
The current driver uses the older IRQ API for msix allocation.
Change the driver to utilize pci_alloc_irq_vectors when allocating IRQ
vectors.
Make lpfc_cpu_affinity_check use pci_irq_get_affinity to
determine how the kernel mapped all the IRQs.
Remove msix_entr
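A sketch built on the real pci_irq_get_affinity() helper (the loop body is invented): for each granted vector, read the affinity mask the core assigned and note which cpus are serviced by that vector.

/* Sketch only: read the kernel-assigned affinity of each IRQ vector. */
#include <linux/cpumask.h>
#include <linux/pci.h>

static void illustrative_affinity_check(struct pci_dev *pdev, int nr_vectors)
{
	const struct cpumask *mask;
	int vec, cpu;

	for (vec = 0; vec < nr_vectors; vec++) {
		mask = pci_irq_get_affinity(pdev, vec);
		if (!mask)
			continue;
		for_each_cpu(cpu, mask) {
			/* record: this cpu's interrupts arrive on vector vec */
		}
	}
}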
On 12/27/18 12:33 AM, James Smart wrote:
The work done to date utilized the number of present cpus when
sizing per-cpu structures. Structures should have been sized based
on the max possible cpu count.
Convert the driver over to possible cpu count for sizing allocation.
Signed-off-by: Dick Kenn
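A small sketch of the sizing change using real kernel helpers (the array and structure names are invented): allocate per-cpu bookkeeping for every possible cpu, not just those present at probe time, so later hot-added cpus have a slot.

/* Sketch only: size per-cpu bookkeeping by possible, not present, cpus. */
#include <linux/cpumask.h>
#include <linux/slab.h>

struct illustrative_cpu_info {
	unsigned int hdwq;	/* hardware queue this cpu maps to */
};

static struct illustrative_cpu_info *illustrative_alloc_cpu_map(void)
{
	/* num_possible_cpus() >= num_present_cpus(); covers cpu hotplug. */
	return kcalloc(num_possible_cpus(),
		       sizeof(struct illustrative_cpu_info), GFP_KERNEL);
}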
On 12/27/18 12:33 AM, James Smart wrote:
Now that performance mods don't split resources by protocol and
enable both protocols by default, there's no reason not to enable
concurrent SCSI and NVME fc4 support.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc_a
On 12/27/18 12:33 AM, James Smart wrote:
The conversion to enable SCSI and NVME fc4 support ran into an
issue with NPIV support. With NVME, NPIV is not currently supported,
but with SCSI it was. The driver reverted to its lowest setting,
meaning NPIV with SCSI was not allowed.
Convert the NPIV ch
On 12/27/18 12:33 AM, James Smart wrote:
When the transport calls into the lpfc target to release an io job
structure, which corresponds to an exchange, and if the driver was
waiting for an exchange in order to post a previously received command
to the transport, the driver immediately takes the
On 12/27/18 12:33 AM, James Smart wrote:
Various null pointer dereference and general protection fault panics
occur when there is a link bounce under load. There are a large number
of "error" message 6413 indicating "bad release".
The issues resolve to list corruptions due to missing or inconsis
On 12/27/18 12:33 AM, James Smart wrote:
Update lpfc version to 12.2.0.0
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc_version.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/lpfc/lpfc_version.h b/drivers/scsi/lpfc/lpfc_
On 11/29/18 6:20 PM, Keith Busch wrote:
On Thu, Nov 29, 2018 at 06:11:59PM +0100, Christoph Hellwig wrote:
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a82830f39933..d0ef540711c7 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -647,7 +647,7 @@ EXPORT_SYMBOL(blk_mq_complete_request);
On 12/21/18 4:29 PM, James Bottomley wrote:
[scsi list cc added]
On Fri, 2018-12-21 at 08:54 +0100, Greg Kroah-Hartman wrote:
We are trying to get rid of BUS_ATTR() and the usage of that in the
fcoe driver can be trivially converted to use BUS_ATTR_WO(), so use
that instead.
At the same time re
Finn Thain wrote:
On powerpc, setting CONFIG_NVRAM=n builds a kernel with no NVRAM support.
Setting CONFIG_NVRAM=m enables the /dev/nvram misc device module without
enabling NVRAM support in drivers. Setting CONFIG_NVRAM=y enables the
misc device (built-in) and also enables NVRAM support in
On 21/12/2018 21:08, Marc Gonzalez wrote:
> I think I've checked every low-level thingamajig:
> clocks, regulators, power domains, gdsc, voltage spec
I'm printing all but a few writel's but I'm not seeing anything when the
regulators are being set up... Something to investigate.
https://pastebin
On 12/28/2018 1:05 AM, Hannes Reinecke wrote:
That should rather be in the previous patch, no?
I'll double check. If so, and I repost, I'll move it.
-- james
On 12/28/2018 1:10 AM, Hannes Reinecke wrote:
Have you looked at using embedded xri buffers?
Now that you have a common xri buffer structure it should be possible to
switch to embedded xri buffers, and rely on blk-mq sbitmap tag
allocation to manage the xri buffers.
Alternatively one could _ide
On 12/28/2018 1:16 AM, Hannes Reinecke wrote:
On 12/27/18 12:33 AM, James Smart wrote:
A scsi host lock is taken on every io completion to check whether
someone is waiting on the io completion. The lock doesn't have to be
taken on all ios, only those that have been marked as aborted.
Rework to
On 12/28/2018 1:23 AM, Hannes Reinecke wrote:
As indicated previously, once we would be using embedded xris none of
this would be necessary ...
See comments on prior patch...
-- james
On 12/28/2018 1:53 AM, Hannes Reinecke wrote:
Have you considered making 'LPFC_EQ_DELAY_MSECS' configurable?
It looks to me as if it would introduce a completion latency; having it
configurable would allow us to check and possibly modify this.
It could be configurable if desired.
It shouldn't
On 12/28/2018 4:30 AM, Hannes Reinecke wrote:
Doesn't this obsolete patch 16?
no - it just changes the way vectors are obtained.
-- james
On Fri, 28 Dec 2018, LEROY Christophe wrote:
> Finn Thain wrote:
>
> > On powerpc, setting CONFIG_NVRAM=n builds a kernel with no NVRAM support.
> > Setting CONFIG_NVRAM=m enables the /dev/nvram misc device module without
> > enabling NVRAM support in drivers. Setting CONFIG_NVRAM=y enables t
The pull request you sent on Mon, 24 Dec 2018 09:19:53 -0800:
> git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi.git scsi-misc
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/938edb8a31b976c9a92eb0cd4ff481e93f76c1f1
Thank you!
--
Deet-doot-dot, I am a bot.
htt
Hi Finn,
On 29.12.2018 at 14:06, Finn Thain wrote:
On Fri, 28 Dec 2018, LEROY Christophe wrote:
diff --git a/drivers/scsi/atari_scsi.c b/drivers/scsi/atari_scsi.c
index 89f5154c40b6..99e5729d910d 100644
--- a/drivers/scsi/atari_scsi.c
+++ b/drivers/scsi/atari_scsi.c
@@ -755,9 +755,10 @@ static
On Sat, 29 Dec 2018, Michael Schmitz wrote:
>
> IS_BUILTIN(CONFIG_NVRAM) is probably what Christophe really meant to suggest.
>
> Or (really going out on a limb here):
>
> IS_BUILTIN(CONFIG_NVRAM) ||
> ( IS_MODULE(CONFIG_ATARI_SCSI) && IS_ENABLED(CONFIG_NVRAM) )
>
> Not that I'd advocate that,
Hi Finn,
On 29.12.2018 at 15:34, Finn Thain wrote:
On Sat, 29 Dec 2018, Michael Schmitz wrote:
IS_BUILTIN(CONFIG_NVRAM) is probably what Christophe really meant to suggest.
Or (really going out on a limb here):
IS_BUILTIN(CONFIG_NVRAM) ||
( IS_MODULE(CONFIG_ATARI_SCSI) && IS_ENABLED(CONFIG
Hi Finn,
On 26.12.2018 at 13:37, Finn Thain wrote:
On powerpc, setting CONFIG_NVRAM=n builds a kernel with no NVRAM support.
Setting CONFIG_NVRAM=m enables the /dev/nvram misc device module without
enabling NVRAM support in drivers. Setting CONFIG_NVRAM=y enables the
misc device (built-in) and
On Thu, Dec 27, 2018 at 04:40:55PM +0300, Dan Carpenter wrote:
> On Tue, Dec 25, 2018 at 11:12:20PM +0100, Tom Psyborg wrote:
> > there was discussion about this just some days ago. CC 4-5 lists is
> > more than enough
> >
>
> I don't know who you were discussing this with...
>
> You should CC t
On Fri, 28 Dec 2018, Darrick J. Wong wrote:
> On Thu, Dec 27, 2018 at 04:40:55PM +0300, Dan Carpenter wrote:
> > On Tue, Dec 25, 2018 at 11:12:20PM +0100, Tom Psyborg wrote:
> > > there was discussion about this just some days ago. CC 4-5 lists is
> > > more than enough
> > >
> >
> > I don't kn