Hi Meelis,
> On Jan 24, 2018, at 2:18 PM, Meelis Roos wrote:
>
>>> Hello, I decided to widen the coverage of my kernel testbed and put some
>>> FC cards into servers. This one is a PCI-X QLA2340 in HP Proliant DL 380
>>> G4 (first 64-bit generation of Proliants). I got a UBSAN warning from
>
--
On Wed, Jan 24, 2018 at 01:36:00PM -0800, James Bottomley wrote:
> On Wed, 2018-01-24 at 11:20 -0800, Mike Kravetz wrote:
> > On 01/24/2018 11:05 AM, James Bottomley wrote:
> > >
> > > I've got two community style topics, which should probably be
> > > discussed
> > > in the plenary
> > >
> > > 1
Increase cmd_per_lun to allow more I/Os in progress per device,
particularly for NVMe devices. The Hyper-V host side can handle the
higher count with no issues.
Signed-off-by: Michael Kelley
---
drivers/scsi/storvsc_drv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scs
During link bounce testing in a point-to-point topology, the
host may enter a soft lockup on the lpfc_worker thread:
Call Trace:
lpfc_work_done+0x1f3/0x1390 [lpfc]
lpfc_do_work+0x16f/0x180 [lpfc]
kthread+0xc7/0xe0
ret_from_fork+0x3f/0x70
The driver was simultaneously settin
A stress test repeatedly resetting the adapter while performing
io would eventually report I/O failures and missing nvme namespaces.
The driver was setting the nvmefc_fcp_req->private pointer to NULL
during the IO completion routine before upcalling done().
If the transport was also running an abo
In a test that is doing large numbers of cable swaps on the target,
the nvme controllers wouldn't reconnect.
During the cable swaps, the target's n_port_id would change. This
information was passed to the nvme-fc transport, in the new remoteport
registration. However, the nvme-fc transport didn't u
I/O conditions on the nvme target may have the driver submitting
to a full hardware wq. The hardware wq is a shared resource among
all nvme controllers. When the driver hit a full wq, it failed the
io posting back to the nvme-fc transport, which then escalated it
into errors.
Correct by maintainin
The lpfc driver does not discover a target when the topology
changes from switched-fabric to direct-connect. The target
rejects the PRLI from the initiator in direct-connect as the
driver is using the old S_ID from the switched topology.
The driver was inappropriately clearing the VP bit to regist
> > Hello, I decided to widen the coverage of my kernel testbed and put some
> > FC cards into servers. This one is a PCI-X QLA2340 in HP Proliant DL 380
> > G4 (first 64-bit generation of Proliants). I got a UBSAN warning from
> > qla2xxx before probing for the firmware.
>
> Would it be possib
During SCSI error handling escalation to host reset, the SCSI io
routines were moved off the txcmplq, but the individual io's
ON_CMPLQ flag wasn't cleared. Thus, a background thread saw the
io and attempted to access it as if on the txcmplq.
Clear the flag upon removal.
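The fix described above can be sketched as a minimal, self-contained C fragment; the flag name and structure are stand-ins for the driver's own, not the actual lpfc code:

```c
#define ON_CMPLQ (1u << 0)  /* stand-in for the driver's ON_CMPLQ flag */

struct io_buf {
    unsigned int flags;
};

/* Removal sketch per the fix above: clearing the flag together with the
 * list removal keeps background threads from treating the io as still
 * being on the txcmplq. All names here are illustrative. */
static void remove_from_txcmplq(struct io_buf *io)
{
    /* list_del(&io->list) would happen here in the real driver */
    io->flags &= ~ON_CMPLQ;
}
```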
Signed-off-by: Dick Kenne
When using the special option to suppress the response iu, ensure
the adapter fully supports the feature by checking feature flags
from the adapter.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc_hw4.h | 3 +++
drivers/scsi/lpfc/lpfc_init.c | 13
Ensure nvme localports/targetports are torn down before
dismantling the adapter sli interface on driver detachment.
This avoids tearing down the sli interface while the nvme
transport may still be making callbacks to abort I/O.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc_init.c | 14 +++
Update copyrights in files modified for the 11.4.0.7 release.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc.h | 2 +-
drivers/scsi/lpfc/lpfc_attr.c | 2 +-
drivers/scsi/lpfc/lpfc_crtn.h | 2 +-
drivers/scsi/lpfc/lpfc_els.c | 2 +-
drivers/scsi/lpfc/lp
The driver was inappropriately pulling in the nvme host's
nvme.h header. What it really needed was the standard
header.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc_nvmet.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/lpf
Currently, write underruns (mismatch of amount transferred vs scsi
status and its residual) detected by the adapter are not being
flagged as an error. It's expected the target controls the data
transfer and would appropriately set the RSP values. Only read
underruns are treated as errors.
Revise t
Revise the NVME PRLI to indicate CONF support.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc_els.c | 3 ++-
drivers/scsi/lpfc/lpfc_hw4.h | 6 +++---
drivers/scsi/lpfc/lpfc_nportdisc.c | 3 ---
3 files changed, 5 insertions(+), 7 deletions(-)
diff
The driver ignored checks on whether the link should be
kept administratively down after a link bounce. Correct the
checks.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc_attr.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/drivers/scsi/lpfc/lpfc_at
Update the driver version to 11.4.0.7
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc_version.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/lpfc/lpfc_version.h b/drivers/scsi/lpfc/lpfc_version.h
index c232bf0e8998..6f4092cb9
Increased CQ and WQ sizes for SCSI FCP, matching those used
for NVMe development.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc.h | 1 +
drivers/scsi/lpfc/lpfc_hw4.h | 3 +++
drivers/scsi/lpfc/lpfc_init.c | 30 ++
drivers/s
When nvme target deferred receive logic waits for exchange
resources, the corresponding receive buffer is not replenished
with the hardware. This can result in a lack of asynchronous
receive buffer resources in the hardware, resulting in a
"2885 Port Status Event: ... error 1=0x52004a01 ..." messag
Make the attribute writeable.
Remove the ramp-up logic as it's unnecessary; simply set the depth.
Add a debug message if the depth changed, possibly reducing the limit,
while the outstanding count has yet to catch up with it.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc_a
Existing code was using the wrong field for the completion status
when comparing whether to increment abort statistics.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
drivers/scsi/lpfc/lpfc_nvmet.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/l
The driver controls when the hardware sends completions that
communicate consumption of elements from the WQ. This is done by
setting a WQEC bit on a WQE.
The current driver sets it on every Nth WQE posting. However, the
driver isn't clearing the bit if the WQE is reused. Thus, if the
queue depth
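The set-without-clear pattern described above can be shown in a small sketch; the bit name, interval, and structure are assumptions for illustration, not the driver's actual definitions:

```c
#include <stdbool.h>

#define WQEC_BIT      (1u << 0)  /* stand-in for the hardware's WQEC flag */
#define WQEC_INTERVAL 8          /* "every Nth WQE"; the real interval is an assumption */

struct wqe {
    unsigned int flags;
};

/* Prepare a WQE for posting, requesting a consumption completion on every
 * Nth post. The fix amounts to the else branch: explicitly clearing the
 * bit on a reused WQE instead of assuming it starts out clear. */
static bool prep_wqe(struct wqe *w, unsigned long post_count)
{
    if ((post_count % WQEC_INTERVAL) == 0)
        w->flags |= WQEC_BIT;
    else
        w->flags &= ~WQEC_BIT;  /* without this, a reused WQE keeps the bit set */
    return (w->flags & WQEC_BIT) != 0;
}
```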
This patch set provides a number of fixes for the driver.
The patches were cut against Martin's 4.16/scsi-queue tree.
There are no outside dependencies, and the patches are expected
to be pulled via Martin's tree.
James Smart (19):
lpfc: Fix frequency of Release WQE CQEs
lpfc: Increase CQ and WQ sizes
Updated/corrected two email addresses ...
> -Original Message-
> From: Michael Kelley (EOSG)
> Sent: Wednesday, January 24, 2018 2:14 PM
> To: KY Srinivasan ; Stephen Hemminger
> ;
> martin.peter...@oracle.com; lo...@microsoft.com; jbottom...@odin.com;
> de...@linuxdriverproject.org; linu
Update the algorithm in storvsc_do_io to look for a channel
starting with the current CPU + 1 and wrap around (within the
current NUMA node). This spreads VMbus interrupts more evenly
across CPUs. Previous code always started with first CPU in
the current NUMA node, skewing the interrupt load to th
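The selection policy described above can be sketched in a few lines; the node's CPUs are modeled as a plain array of CPU numbers, and all names here are illustrative rather than the driver's:

```c
/* Start at the current CPU + 1 and wrap around within the current NUMA
 * node, rather than always starting at the node's first CPU. */
static int pick_channel_cpu(const int *node_cpus, int ncpus, int cur_cpu)
{
    for (int i = 0; i < ncpus; i++)
        if (node_cpus[i] == cur_cpu)
            return node_cpus[(i + 1) % ncpus];  /* wrap within the node */
    return node_cpus[0];  /* current CPU not in this node: fall back to first */
}
```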
On Wed, 2018-01-24 at 19:26 +, Bart Van Assche wrote:
> On Wed, 2018-01-24 at 11:05 -0800, James Bottomley wrote:
> >
> > 2. Handling Internal Conflict
> >
> > My observation here is that actually most conflict is generated by
> > the review process (I know, if we increase reviews as I propos
On Wed, 2018-01-24 at 11:20 -0800, Mike Kravetz wrote:
> On 01/24/2018 11:05 AM, James Bottomley wrote:
> >
> > I've got two community style topics, which should probably be
> > discussed
> > in the plenary
> >
> > 1. Patch Submission Process
> >
> > Today we don't have a uniform patch submissio
On Mon, 2017-09-18 at 13:49 +0300, Meelis Roos wrote:
> Hello, I decided to widen the coverage of my kernel testbed and put some
> FC cards into servers. This one is a PCI-X QLA2340 in HP Proliant DL 380
> G4 (first 64-bit generation of Proliants). I got a UBSAN warning from
> qla2xxx before pro
> Hello again.
And again...
>
> > > On Sep 18, 2017, at 3:49 AM, Meelis Roos wrote:
> > >
> > > Hello, I decided to widen the coverage of my kernel testbed and put some
> > > FC cards into servers. This one is a PCI-X QLA2340 in HP Proliant DL 380
> > > G4 (first 64-bit generation of Proliant
On Wed, 2018-01-24 at 11:05 -0800, James Bottomley wrote:
> 2. Handling Internal Conflict
>
> My observation here is that actually most conflict is generated by the
> review process (I know, if we increase reviews as I propose in 1. we'll
> increase conflict on the lists on the basis of this obser
On 01/24/2018 11:05 AM, James Bottomley wrote:
> I've got two community style topics, which should probably be discussed
> in the plenary
>
> 1. Patch Submission Process
>
> Today we don't have a uniform patch submission process across Storage,
> Filesystems and MM. The question is should we (or
I've got two community style topics, which should probably be discussed
in the plenary
1. Patch Submission Process
Today we don't have a uniform patch submission process across Storage,
Filesystems and MM. The question is should we (or at least should we
adhere to some minimal standards). The s
On Wed, 2018-01-24 at 08:07 -0800, Chad Dupuis wrote:
> When a request times out we set the io_req flag BNX2FC_FLAG_IO_COMPL
> so
> that if a subsequent completion comes in on that task ID we will
> ignore
> it. The issue is that in the check for this flag there is a missing
> return so we will co
When a request times out we set the io_req flag BNX2FC_FLAG_IO_COMPL so
that if a subsequent completion comes in on that task ID we will ignore
it. The issue is that in the check for this flag there is a missing
return so we will continue to process a request which may have already
been returned t
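The missing-return bug described above fits a short sketch; the flag and function names are stand-ins (only BNX2FC_FLAG_IO_COMPL comes from the report), not the actual bnx2fc code:

```c
#define FLAG_IO_COMPL (1u << 0)  /* stand-in for BNX2FC_FLAG_IO_COMPL */

struct io_req {
    unsigned int flags;
    int completions_processed;
};

/* Completion-path sketch: once the timeout handler has marked the request
 * complete, a late completion on the same task ID must be ignored. The fix
 * is the early return; without it, processing fell through. */
static void process_completion(struct io_req *req)
{
    if (req->flags & FLAG_IO_COMPL)
        return;  /* the missing return: drop the stale completion here */
    req->completions_processed++;
}
```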
From: Colin Ian King
The pointer ln is assigned a value that is never read; it is re-assigned
a new value in the list_for_each loop, hence the initialization is
redundant and can be removed.
Cleans up clang warning:
drivers/scsi/csiostor/csio_lnode.c:117:21: warning: Value stored to 'ln'
during i
In ata_eh_reset, a SATA disk is reset at most three times. Some
drivers behind libsas end up calling sas_ata_hard_reset. When the
device is gone, sas_ata_hard_reset returns -ENODEV, but the code
still tries to reset the offline device three times. This process
lasts a long time
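The retry behavior described above can be sketched as follows; hard_reset() stands in for sas_ata_hard_reset(), and the point of the fix is to stop retrying once the device is gone instead of burning all three attempts on an offline device:

```c
#include <errno.h>

/* Stand-in for sas_ata_hard_reset(): fails with -ENODEV once the
 * device has disappeared. */
static int hard_reset(int device_gone, int *attempts)
{
    (*attempts)++;
    return device_gone ? -ENODEV : 0;
}

static int reset_disk(int device_gone, int *attempts)
{
    int rc = 0;

    for (int tries = 0; tries < 3; tries++) {
        rc = hard_reset(device_gone, attempts);
        if (rc == -ENODEV)
            break;  /* device is offline: further resets are pointless */
        if (rc == 0)
            break;  /* reset succeeded */
    }
    return rc;
}
```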
Looks good,
Reviewed-by: Johannes Thumshirn
--
Johannes Thumshirn Storage
jthumsh...@suse.de+49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG N
Not a real RAID level, but some HBAs support JBOD in addition to
the 'classical' RAID levels.
Signed-off-by: Hannes Reinecke
---
drivers/scsi/raid_class.c | 1 +
include/linux/raid_class.h | 1 +
2 files changed, 2 insertions(+)
diff --git a/drivers/scsi/raid_class.c b/drivers/scsi/raid_class.
Hi all,
as we're trying to get rid of the remaining request_fn drivers here's
a patchset to move the DAC960 driver to the SCSI stack.
As per request from hch I've split up the driver into two new SCSI
drivers called 'myrb' and 'myrs'.
The 'myrb' driver only supports the earlier (V1) firmware inte