I'm planning to introduce a new block-layer-specific status code ASAP,
so I'd prefer not to add new errno special cases.
I'll port your patches to the new code and will send them out with
my series in a few days, though.
On Tue, Apr 04, 2017 at 12:23:19PM +0530, Mahesh Rajashekhara wrote:
> +#if defined(CONFIG_ARM64)
> +static inline void *
> +aac_pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
> + dma_addr_t *dma_handle) {
> + return dma_alloc_coherent(hwdev == NULL ? NULL : &hwdev->dev,
> + size, dma_handle, GFP_KERNEL);
> +}
There were pci_alloc_consistent() failures on the ARM64 platform.
Use dma_alloc_coherent() with the GFP_KERNEL flag for DMA memory allocations.
Signed-off-by: Mahesh Rajashekhara
---
drivers/scsi/aacraid/aachba.c | 13 +
drivers/scsi/aacraid/aacraid.h | 22 ++
drivers/scsi
On Mon, Apr 03, 2017 at 09:14:02AM -0400, Cathy Avery wrote:
> On 04/03/2017 08:17 AM, Christoph Hellwig wrote:
> > > if (host->transportt == fc_transport_template) {
> > > + struct fc_rport_identifiers ids = {
> > > + .roles = FC_PORT_ROLE_FCP_TARGET,
> > > +
Does this one work better?
---
From e5a4178cb810be581b6d9b8f48f13b12e88eae74 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Thu, 12 Jan 2017 11:17:29 +0100
Subject: csiostor: switch to pci_alloc_irq_vectors
And get automatic MSI-X affinity for free.
Signed-off-by: Christoph Hellwig
--
On 04/04/17 06:53, Mauricio Faria de Oliveira wrote:
> Hi Junichi,
>
> On 03/28/2017 11:29 PM, Junichi Nomura wrote:
>> Since commit 895427bd012c ("scsi: lpfc: NVME Initiator: Base modifications"),
>> "rmmod lpfc" started causing a panic or corruption due to a double free.
>
> Thanks for the report
On Mon, Apr 3, 2017 at 4:12 PM, Logan Gunthorpe wrote:
>
>
> On 03/04/17 04:47 PM, Dan Williams wrote:
>> I wouldn't necessarily conflate supporting pfn_t in the scatterlist
>> with the stalled struct-page-less DMA effort. A pfn_t_to_page()
>> conversion will still work and be required. However you'
On 03/04/17 04:47 PM, Dan Williams wrote:
> I wouldn't necessarily conflate supporting pfn_t in the scatterlist
> with the stalled struct-page-less DMA effort. A pfn_t_to_page()
> conversion will still work and be required. However you're right, the
> minute we use pfn_t for this we're into the rea
On Mon, Apr 3, 2017 at 3:10 PM, Logan Gunthorpe wrote:
>
>
> On 03/04/17 03:44 PM, Dan Williams wrote:
>> On Mon, Apr 3, 2017 at 2:20 PM, Logan Gunthorpe wrote:
>>> Hi Christoph,
>>>
>>> What are your thoughts on an approach like the following untested
>>> draft patch.
>>>
>>> The patch (if flesh
On 03/04/17 03:44 PM, Dan Williams wrote:
> On Mon, Apr 3, 2017 at 2:20 PM, Logan Gunthorpe wrote:
>> Hi Christoph,
>>
>> What are your thoughts on an approach like the following untested
>> draft patch.
>>
>> The patch (if fleshed out) makes it so iomem can be used in an sgl
>> and WARN_ONs wil
Hi Junichi,
On 03/28/2017 11:29 PM, Junichi Nomura wrote:
Since commit 895427bd012c ("scsi: lpfc: NVME Initiator: Base modifications"),
"rmmod lpfc" started causing a panic or corruption due to a double free.
Thanks for the report. Can you please check whether the patch just sent
([PATCH] lpfc:
commit 895427bd012c ("scsi: lpfc: NVME Initiator: Base modifications")
binds the CQs and WQs ring pointer (sets it to same address on both).
lpfc_create_wq_cq():
...
rc = lpfc_cq_create(phba, cq, eq, <...>)
...
rc = lpfc_wq_create(phba, wq, cq, qtype);
On Mon, Apr 3, 2017 at 2:20 PM, Logan Gunthorpe wrote:
> Hi Christoph,
>
> What are your thoughts on an approach like the following untested
> draft patch.
>
> The patch (if fleshed out) makes it so iomem can be used in an sgl
> and WARN_ONs will occur in places where drivers attempt to access
> i
Hi Christoph,
What are your thoughts on an approach like the following untested
draft patch.
The patch (if fleshed out) makes it so iomem can be used in an sgl
and WARN_ONs will occur in places where drivers attempt to access
iomem directly through the sgl.
I'd also probably create a p2pmem_allo
On Mon, Apr 3, 2017 at 6:30 AM, Kyle Fortin wrote:
>
> for (i = 0; i < q->max; i++)
> kfree(q->pool[i]);
> - kfree(q->pool);
> + if (q->is_pool_vmalloc)
you could do something like:
if (is_vmalloc_addr(q->pool))
vfree(...);
else
kfree(..);
And then r
https://bugzilla.kernel.org/show_bug.cgi?id=195233
Bug ID: 195233
Summary: sd: discard_granularity set smaller than
minimum_io_size when LBPRZ=1
Product: IO/Storage
Version: 2.5
Kernel Version: 4.4.0
Hardware: All
On 04/03/17 01:13, Stephen Rothwell wrote:
> Hi all,
>
> Changes since 20170331:
>
on i386:
when SCSI_LPFC=y and
CONFIG_NVME_CORE=m
CONFIG_BLK_DEV_NVME=m
CONFIG_BLK_DEV_NVME_SCSI=y
CONFIG_NVME_FABRICS=m
CONFIG_NVME_FC=m
CONFIG_NVME_TARGET=m
drivers/built-in.o: In function `lpfc_nvme_create_loc
On 04/03/2017 12:20 AM, Shie-rei Huang wrote:
> After a PGR command is processed in the kernel, is it possible for the
> user mode to be notified with the command so that the user mode has a
> chance to do its part of PGR processing. Below is one use case of it.
> Suppose two TCMU servers are set u
On Mon, 3 Apr 2017, 9:47am, Jens Axboe wrote:
> On 04/03/2017 10:41 AM, Arun Easi wrote:
> > On Mon, 3 Apr 2017, 8:20am, Bart Van Assche wrote:
> >
> >> On Mon, 2017-04-03 at 09:29 +0200, Hannes Reinecke wrote:
> >>> On 04/03/2017 08:37 AM, Arun Easi wrote:
> If the above is true, then for a
On 04/03/2017 10:41 AM, Arun Easi wrote:
> On Mon, 3 Apr 2017, 8:20am, Bart Van Assche wrote:
>
>> On Mon, 2017-04-03 at 09:29 +0200, Hannes Reinecke wrote:
>>> On 04/03/2017 08:37 AM, Arun Easi wrote:
If the above is true, then for an LLD to get a tag# within its max-tasks
range, it has
On Mon, 3 Apr 2017, 8:20am, Bart Van Assche wrote:
> On Mon, 2017-04-03 at 09:29 +0200, Hannes Reinecke wrote:
> > On 04/03/2017 08:37 AM, Arun Easi wrote:
> > > If the above is true, then for an LLD to get a tag# within its max-tasks
> > > range, it has to report max-tasks / number-of-hw-queues in
On Mon, 3 Apr 2017, 12:29am, Hannes Reinecke wrote:
> On 04/03/2017 08:37 AM, Arun Easi wrote:
> > Hi Folks,
> >
> > I would like to seek your input on a few topics on SCSI / block
> > multi-queue.
> >
> > 1. Tag# generation.
> >
> > The context is with SCSI MQ on. My question is, what should
https://bugzilla.kernel.org/show_bug.cgi?id=191471
Timur Tabi (ti...@codeaurora.org) changed:
What | Removed | Added
CC   |         | ti...@codeaurora.org
On Mon, 2017-04-03 at 09:29 +0200, Hannes Reinecke wrote:
> On 04/03/2017 08:37 AM, Arun Easi wrote:
> > If the above is true, then for an LLD to get a tag# within its max-tasks
> > range, it has to report max-tasks / number-of-hw-queues in can_queue, and
> > in the I/O path, use the tag and hwq# t
On Fri, Mar 31, 2017 at 12:25:27PM +0530, Christoph Hellwig wrote:
> On Fri, Jan 20, 2017 at 07:27:02PM -0500, Martin K. Petersen wrote:
> > > "Christoph" == Christoph Hellwig writes:
> >
> > Christoph> And get automatic MSI-X affinity for free.
> >
> > Chelsio folks: Please review and test!
>
sas_domain_release_transport has been unused since at least v3.13; remove it.
Signed-off-by: Johannes Thumshirn
---
drivers/scsi/libsas/sas_init.c | 7 ---
include/scsi/libsas.h | 1 -
2 files changed, 8 deletions(-)
diff --git a/drivers/scsi/libsas/sas_init.c b/drivers/scsi/libsas/sas_i
On Apr 3, 2017, at 9:41 AM, Johannes Thumshirn wrote:
>
> On Mon, Apr 03, 2017 at 06:30:21AM -0700, Kyle Fortin wrote:
>> iscsiadm session login can fail with the following error:
>>
>> iscsiadm: Could not login to [iface: default, target: iqn.1986-03.com...
>> iscsiadm: initiator reported error
On Mon, Apr 03, 2017 at 06:30:21AM -0700, Kyle Fortin wrote:
> iscsiadm session login can fail with the following error:
>
> iscsiadm: Could not login to [iface: default, target: iqn.1986-03.com...
> iscsiadm: initiator reported error (9 - internal error)
>
> When /etc/iscsi/iscsid.conf sets node
iscsiadm session login can fail with the following error:
iscsiadm: Could not login to [iface: default, target: iqn.1986-03.com...
iscsiadm: initiator reported error (9 - internal error)
When /etc/iscsi/iscsid.conf sets node.session.cmds_max = 4096, it results
in 64K-sized kmallocs per session.
On 04/03/2017 08:17 AM, Christoph Hellwig wrote:
if (host->transportt == fc_transport_template) {
+ struct fc_rport_identifiers ids = {
+ .roles = FC_PORT_ROLE_FCP_TARGET,
+ };
I don't think storvsc ever acts as FCP target.
In order to
> if (host->transportt == fc_transport_template) {
> + struct fc_rport_identifiers ids = {
> + .roles = FC_PORT_ROLE_FCP_TARGET,
> + };
I don't think storvsc ever acts as FCP target.
It is quite easy to get a URE after a power failure, along with a scary
message. The URE happens due to an internal drive CRC mismatch caused by a
partial sector update. Most people interpret such a message as "my drive is
dying", which is a reasonable assumption if your dmesg is full of complaints
from disks and read(2)
EILSEQ is returned due to an internal checksum error on the disk or fabric;
let's add a special message to distinguish it from other errors. Also dump
the original numerical error code.
Signed-off-by: Dmitry Monakhov
---
block/blk-core.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/block/blk
Included in the current storvsc driver for Hyper-V is the ability
to access LUNs on an FC fabric via a virtualized Fibre Channel
adapter exposed by the Hyper-V host.
the FC transport to allow host and port names to be published under
/sys/class/fc_host/hostX. Current cus
On 04/03/2017 08:37 AM, Arun Easi wrote:
> Hi Folks,
>
> I would like to seek your input on a few topics on SCSI / block
> multi-queue.
>
> 1. Tag# generation.
>
> The context is with SCSI MQ on. My question is, what should an LLD do to
> get request tag values in the range 0 through can_queue