On Sun, 2017-05-07 at 11:22 +0200, h...@lst.de wrote:
> On Tue, May 02, 2017 at 08:33:15PM -0700, Nicholas A. Bellinger wrote:
> > The larger target/iblock conversion patch looks like post v4.12 material
> > at this point, so to avoid breakage wrt to existing LBPRZ behavior, I'll
> > plan to push t
Hi Bryant,
Given we're almost out of time for -rc1, I'd like to avoid
having to rebase the handful of patches that are atop the -v3 that was
applied to target-pending/for-next over the weekend...
So if you'd be so kind, please post an incremental patch atop -v3, and
I'll apply that instead.
Hi Martin,
Quoting "Martin K. Petersen":
Gustavo A.,
Properly update the position of the arguments in function call.
Applied to 4.12/scsi-fixes, thank you!
Awesome, glad to help. :)
--
Gustavo A. R. Silva
Colin,
> The 2nd check to see if request_size is less than zero is redundant
> because the first check takes error exit path on this condition. So,
> since it is redundant, remove it.
Applied to 4.12/scsi-fixes.
--
Martin K. Petersen Oracle Linux Engineering
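A minimal standalone sketch of the pattern Colin describes above, using a hypothetical validate_request() helper rather than the driver's code: once the first negative-size check has taken the error exit, a second identical check is unreachable and can simply be deleted.

#include <stdio.h>

/* Hypothetical validation helper illustrating the redundant check. */
static int validate_request(long request_size)
{
    if (request_size < 0)   /* first check already takes the error exit */
        return -1;

    /*
     * A second "if (request_size < 0)" here could never fire, because
     * the branch above has already returned; the fix is to drop it.
     */
    return 0;
}

int main(void)
{
    printf("%d\n", validate_request(-5)); /* -1: rejected by the first check */
    printf("%d\n", validate_request(42)); /* 0: accepted */
    return 0;
}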
Colin,
> I believe there is a typo on the wq destroy of els_wq, currently the
> driver is checking if els_cq is not null and I think this should be a
> check on els_wq instead.
Applied to 4.12/scsi-fixes. Thanks!
--
Martin K. Petersen Oracle Linux Engineering
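A hedged sketch of the copy-and-paste mistake behind that typo, with a made-up struct queue in place of the driver's completion/work queues: the NULL check guards one object while a different one gets destroyed, and the fix is to test the object actually being torn down.

#include <stdlib.h>

/* Hypothetical queue objects standing in for the driver's els_cq/els_wq. */
struct queue { int id; };

static void destroy_queue(struct queue *q)
{
    free(q);
}

static void teardown(struct queue *els_cq, struct queue *els_wq)
{
    if (els_cq)
        destroy_queue(els_cq);

    /*
     * Buggy form: "if (els_cq) destroy_queue(els_wq);" tests the
     * completion queue but destroys the work queue, so a NULL els_wq
     * can slip through (or a valid one be skipped).  The fix checks
     * the same object it destroys:
     */
    if (els_wq)
        destroy_queue(els_wq);
}

int main(void)
{
    struct queue *cq = malloc(sizeof(*cq));

    teardown(cq, NULL);    /* must not touch the missing work queue */
    return 0;
}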
Guenter,
> The driver now uses IRQ_POLL and needs to select it to avoid the following
> build error.
Applied to 4.12/scsi-fixes, thank you!
--
Martin K. Petersen Oracle Linux Engineering
Kees,
> Using memcpy() from a string that is shorter than the length copied
> means the destination buffer is being filled with arbitrary data from
> the kernel rodata segment. Instead, use strncpy() which will fill the
> trailing bytes with zeros.
Applied to 4.12/scsi-fixes, thanks!
--
Martin
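A small self-contained illustration of the problem Kees describes: copying a fixed length with memcpy() from a string shorter than that length reads past the string, while strncpy() stops at the NUL and zero-fills the remaining bytes.

#include <stdio.h>
#include <string.h>

int main(void)
{
    static const char src[] = "v1.0";    /* 5 bytes including the NUL */
    char dst[16];

    /*
     * Buggy form (not executed here): memcpy(dst, src, sizeof(dst))
     * reads 11 bytes past the end of src, so dst ends up holding
     * whatever read-only data happens to follow the string.
     */

    /* strncpy() copies "v1.0" and pads the trailing bytes with '\0'. */
    strncpy(dst, src, sizeof(dst));

    for (size_t i = 0; i < sizeof(dst); i++)
        printf("%02x%c", (unsigned char)dst[i],
               i + 1 == sizeof(dst) ? '\n' : ' ');
    return 0;
}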
Christoph,
> Any chance to get a sneak preview of that work?
I have been on the road since LSF/MM and just got back home. I'll make
it a priority.
--
Martin K. Petersen Oracle Linux Engineering
Sebastian,
> Martin, do you see any chance to get this merged? Chad replied to the
> list that he is going to test it on 2017-04-10, didn't respond to the
> ping 10 days later. The series stalled last time in the same way.
I am very reluctant to merge something when a driver has an active
mainta
Christoph,
> Normally we'd just pass the scsi_sense_hdr structure in from the caller
> if we care about sense data. Is this something you considered?
>
> Otherwise this looks fine to me.
I agree with Christoph that passing the sense header would be more
consistent with the rest of the SCSI code.
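A rough sketch of the calling convention being suggested, with a hypothetical send_command() helper and a cut-down sense_hdr struct rather than the real SCSI interfaces: the caller owns the sense-header storage and passes a pointer down, so it can examine the decoded sense data after a failure, and callers that don't care pass NULL.

#include <stdio.h>

/* Hypothetical stand-in for the kernel's sense header structure. */
struct sense_hdr {
    unsigned char sense_key;
    unsigned char asc;
    unsigned char ascq;
};

/* Hypothetical command helper: fills *shdr only if the caller provided one. */
static int send_command(int opcode, struct sense_hdr *shdr)
{
    (void)opcode;               /* opcode unused in this sketch */
    if (shdr) {
        shdr->sense_key = 0x05; /* e.g. ILLEGAL REQUEST */
        shdr->asc = 0x24;
        shdr->ascq = 0x00;
    }
    return -1;                  /* pretend the command failed */
}

int main(void)
{
    struct sense_hdr shdr;

    if (send_command(0x12, &shdr) < 0)
        printf("failed: key %#x asc %#x ascq %#x\n",
               shdr.sense_key, shdr.asc, shdr.ascq);

    send_command(0x12, NULL);   /* callers that don't care pass NULL */
    return 0;
}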
Dan,
> We store sc_cmd->cmnd[0] which is an unsigned char in io_log->op so
> this should also be unsigned char. The other thing is that this is
> displayed in the debugfs:
Applied to 4.12/scsi-fixes, thanks!
--
Martin K. Petersen Oracle Linux Engineering
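A two-line demonstration (generic C, not the driver's code) of why the signedness matters once the stored opcode is printed: opcodes above 0x7f come out negative if kept in a plain char on platforms where char is signed.

#include <stdio.h>

int main(void)
{
    unsigned char opcode = 0x8a;     /* an opcode above 0x7f */
    char as_char = opcode;           /* plain char may be signed */
    unsigned char as_uchar = opcode;

    /* Where char is signed this prints "-118 vs 138". */
    printf("%d vs %u\n", as_char, as_uchar);
    return 0;
}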
Dan,
> There is a double lock bug here so this will deadlock instead of
> unlocking.
Applied to 4.12/scsi-fixes.
--
Martin K. Petersen Oracle Linux Engineering
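A self-contained illustration of the double-lock pattern Dan is pointing at, using a pthread mutex instead of the driver's lock: acquiring the same non-recursive lock twice on one path hangs, so the second acquisition was almost certainly meant to be an unlock.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void buggy_path(void)
{
    pthread_mutex_lock(&lock);
    /* ... error handling ... */
    pthread_mutex_lock(&lock);    /* BUG: should be _unlock(); hangs here */
}

static void fixed_path(void)
{
    pthread_mutex_lock(&lock);
    /* ... error handling ... */
    pthread_mutex_unlock(&lock);  /* releases the lock as intended */
}

int main(void)
{
    (void)buggy_path;    /* not called: it would deadlock */
    fixed_path();
    printf("fixed path completed\n");
    return 0;
}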
Gustavo A.,
> Properly update the position of the arguments in function call.
Applied to 4.12/scsi-fixes, thank you!
--
Martin K. Petersen Oracle Linux Engineering
Bart,
> This patch avoids that when building with W=1 the compiler complains
> that __scsi_init_queue() has not been declared. See also commit
> d48777a633d6 ("scsi: remove __scsi_alloc_queue").
Applied to 4.12/scsi-fixes. Thanks!
--
Martin K. Petersen Oracle Linux Engineering
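For reference, a tiny example of the warning class that patch silences: with -Wmissing-prototypes (enabled by W=1 builds) a non-static function defined without a prior declaration is flagged, and adding the prototype to a shared header makes it go away. The function name here is made up.

/* Build with:  gcc -Wall -Wmissing-prototypes -c example.c */

/*
 * Without this declaration (normally living in a header), gcc warns:
 *   "no previous prototype for 'setup_queue'"
 * which is the same class of W=1 warning __scsi_init_queue() triggered.
 */
int setup_queue(int depth);

int setup_queue(int depth)
{
    return depth > 0 ? 0 : -1;
}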
Christoph,
> The open-osd domain doesn't exist anymore, and mails to the list lead
> to really annoying bounces that repeat every day.
>
> Also the primarydata address for Benny bounces, and while I have a new
> one for him he doesn't seem to be maintaining the OSD code any more.
>
> Which begs
Zhou,
> When a scsi_device is unplugged from the scsi controller, if the
> scsi_device is still being used by the application layer, it won't be
> released until users release it. In this case, scsi_device_remove just
> sets the scsi_device's state to SDEV_DEL. But if you plug the disk just
> before the old
James,
> Fix is to reset the sli-3 function before sending the mailbox command,
> thus synchronizing the function/driver on mailbox location.
Applied to 4.12/scsi-fixes.
--
Martin K. Petersen Oracle Linux Engineering
Hannes,
> When the FCoE sending side becomes congested libfc tries to reduce the
> queue depth on the host; however due to the built-in lag before
> attempting to ramp down the queue depth _again_ the message log is
> flooded with messages
>
> libfc: queue full, reducing can_queue to 512
>
> With
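A hedged userspace sketch of one way to stop that kind of log flood (illustrative only, not libfc's actual change): honour the hold-off window and emit the message only when the depth really changes, so repeated congestion events inside the window stay silent.

#include <stdio.h>

static int can_queue = 1024;
static int last_change = -100;    /* "time" of the last ramp-down */

/* Hypothetical congestion handler: ramp down at most once per 10 ticks. */
static void queue_full(int tick)
{
    if (tick - last_change < 10)
        return;                   /* inside the hold-off window: no log */

    can_queue /= 2;
    if (can_queue < 1)
        can_queue = 1;
    last_change = tick;
    printf("tick %d: queue full, reducing can_queue to %d\n",
           tick, can_queue);
}

int main(void)
{
    for (int tick = 0; tick < 25; tick++)
        queue_full(tick);    /* logs at ticks 0, 10 and 20, not 25 times */
    return 0;
}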
The driver is sending a response to the actual scsi op that was
aborted by an abort task TM, while LIO is sending a response to
the abort task TM.
ibmvscsis_tgt does not send the response to the client until
release_cmd time. The reason for this was that if we did it
at queue_status time, then
On Mon 2017-05-08 16:40:11, David Woodhouse wrote:
> On Mon, 2017-05-08 at 13:50 +0200, Boris Brezillon wrote:
> > On Mon, 08 May 2017 11:13:10 +0100
> > David Woodhouse wrote:
> >
> > >
> > > On Mon, 2017-05-08 at 11:09 +0200, Hans de Goede wrote:
> > > >
> > > > You're forgetting that the SSD
Hi,
The patch set has been sent twice. Please ignore the later copy.
I had sent the original patch set on Saturday, but I did not receive a
confirmation, nor did any mailing list archive (http://marc.info/?l=linux-scsi)
pick them up (should have waited). The next day I res
Hello,
On Mon, May 08, 2017 at 08:56:15PM +0200, Pavel Machek wrote:
> Well... the SMART counter tells us that the device was not shut down
> correctly. Do we have reason to believe that it is _not_ telling us
> truth? It is more than one device.
It also finished the power-off command successfully.
On Mon 2017-05-08 13:43:03, Tejun Heo wrote:
> Hello,
>
> On Mon, May 08, 2017 at 06:43:22PM +0200, Pavel Machek wrote:
> > What I was trying to point out was that storage people try to treat
> > SSDs as HDDs... and SSDs are very different. Harddrives mostly survive
> > powerfails (with emergency
> Well, you are right.. and I'm responsible.
>
> What I was trying to point out was that storage people try to treat
> SSDs as HDDs... and SSDs are very different. Harddrives mostly survive
> powerfails (with emergency parking), while it is very, very difficult
> to make SSD survive random p
Hello,
On Mon, May 08, 2017 at 06:43:22PM +0200, Pavel Machek wrote:
> What I was trying to point out was that storage people try to treat
> SSDs as HDDs... and SSDs are very different. Harddrives mostly survive
> powerfails (with emergency parking), while it is very, very difficult
> to make SSD
On Mon 2017-05-08 13:50:05, Boris Brezillon wrote:
> On Mon, 08 May 2017 11:13:10 +0100
> David Woodhouse wrote:
>
> > On Mon, 2017-05-08 at 11:09 +0200, Hans de Goede wrote:
> > > You're forgetting that the SSD itself (this thread is about SSDs) also has
> > > a major software component which is
On Mon, 2017-05-08 at 13:50 +0200, Boris Brezillon wrote:
> On Mon, 08 May 2017 11:13:10 +0100
> David Woodhouse wrote:
>
> >
> > On Mon, 2017-05-08 at 11:09 +0200, Hans de Goede wrote:
> > >
> > > You're forgetting that the SSD itself (this thread is about SSDs) also has
> > > a major software
> On May 5, 2017, at 6:31 PM, Guenter Roeck wrote:
>
> The driver now uses IRQ_POLL and needs to select it to avoid the following
> build error.
>
> ERROR: ".irq_poll_complete" [drivers/scsi/cxlflash/cxlflash.ko] undefined!
> ERROR: ".irq_poll_sched" [drivers/scsi/cxlflash/cxlflash.ko] undefined
Hi!
> > 'clean marker' is a good idea... empty pages have plenty of space.
>
> Well... you lose that space permanently. Although I suppose you could
> do things differently and erase a block immediately prior to using it.
> But in that case why ever write the cleanmarker? Just maintain a set of
>
Boris,
On 08.05.2017 at 13:48, Boris Brezillon wrote:
>>> How do you handle the issue during regular write? Always ignore last
>>> successfully written block?
>
> I guess UBIFS can know what was written last, because of the log-based
> approach + the seqnum stored along with FS nodes, but I'm
This should be CC'd to qlogic-storage-upstr...@qlogic.com as well.
regards,
dan carpenter
On Sun, May 07, 2017 at 10:30:20PM +0100, Colin King wrote:
> From: Colin Ian King
>
> iscsi_lookup_endpoint can potentially return null and in 9 out of
> the 10 calls to this function a null return is che
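A generic sketch of the defensive pattern such a patch adds at the remaining call site (hypothetical lookup function, not the iSCSI code): any lookup that can return NULL has to be checked before the result is dereferenced.

#include <stdio.h>
#include <stdlib.h>

struct endpoint { unsigned long id; };

/* Hypothetical lookup: returns NULL when no endpoint matches the handle. */
static struct endpoint *lookup_endpoint(unsigned long handle)
{
    return handle == 1 ? malloc(sizeof(struct endpoint)) : NULL;
}

static int use_endpoint(unsigned long handle)
{
    struct endpoint *ep = lookup_endpoint(handle);

    if (!ep)        /* the check the one remaining call site lacked */
        return -1;

    ep->id = handle;
    printf("endpoint %lu in use\n", ep->id);
    free(ep);
    return 0;
}

int main(void)
{
    use_endpoint(1);    /* found */
    use_endpoint(7);    /* not found: handled instead of crashing */
    return 0;
}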
On Mon, 8 May 2017 13:48:07 +0200
Boris Brezillon wrote:
> On Mon, 8 May 2017 13:06:17 +0200
> Richard Weinberger wrote:
>
> > On Mon, May 8, 2017 at 12:49 PM, Pavel Machek wrote:
> > > Aha, nice, so it looks like ubifs is a step back here.
> > >
> > > 'clean marker' is a good idea... empty
On Mon, 08 May 2017 11:13:10 +0100
David Woodhouse wrote:
> On Mon, 2017-05-08 at 11:09 +0200, Hans de Goede wrote:
> > You're forgetting that the SSD itself (this thread is about SSDs) also has
> > a major software component which is doing housekeeping all the time, so even
> > if the main CPU g
On Mon, 8 May 2017 13:06:17 +0200
Richard Weinberger wrote:
> On Mon, May 8, 2017 at 12:49 PM, Pavel Machek wrote:
> > Aha, nice, so it looks like ubifs is a step back here.
> >
> > 'clean marker' is a good idea... empty pages have plenty of space.
>
> If UBI (not UBIFS) faces an empty block,
Reorder 'fail_free_irq' and 'fail_unmap_regs' in order to correctly free
resources in case of error.
Signed-off-by: Christophe JAILLET
---
drivers/scsi/qlogicpti.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/qlogicpti.c b/drivers/scsi/qlogicpti.c
index
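A compact illustration of the cleanup-label ordering such a fix restores (generic resources, not the qlogicpti code): unwind labels have to appear in reverse acquisition order so that each failure point releases exactly what has been acquired so far.

#include <stdio.h>
#include <stdlib.h>

/* Generic probe(): acquire two resources, unwind in reverse order on error. */
static int probe(int fail_late)
{
    void *regs, *irq;

    regs = malloc(16);              /* step 1: "map registers" */
    if (!regs)
        return -1;

    irq = malloc(16);               /* step 2: "request IRQ" */
    if (!irq)
        goto fail_unmap_regs;

    if (fail_late)                  /* a later setup step fails */
        goto fail_free_irq;

    printf("probe ok\n");
    free(irq);                      /* a real probe keeps both resources; */
    free(regs);                     /* freed here only to keep the sketch leak-free */
    return 0;

    /*
     * If fail_free_irq and fail_unmap_regs were swapped, a jump to
     * fail_unmap_regs would fall through and also release an IRQ
     * that was never requested (or skip the unmap entirely).
     */
fail_free_irq:
    free(irq);
fail_unmap_regs:
    free(regs);
    return -1;
}

int main(void)
{
    probe(0);    /* success */
    probe(1);    /* late failure: frees the IRQ, then unmaps the registers */
    return 0;
}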
On Mon, 2017-05-08 at 12:49 +0200, Pavel Machek wrote:
> On Mon 2017-05-08 10:34:08, David Woodhouse wrote:
> >
> > On Mon, 2017-05-08 at 11:28 +0200, Pavel Machek wrote:
> > >
> > >
> > > Are you sure you have it right in JFFS2? Do you journal block erases?
> > > Apparently, that was pretty muc
On Mon, May 8, 2017 at 12:49 PM, Pavel Machek wrote:
> Aha, nice, so it looks like ubifs is a step back here.
>
> 'clean marker' is a good idea... empty pages have plenty of space.
If UBI (not UBIFS) faces an empty block, it also re-erases it.
The EC header is used as a clean marker.
> How do you
On Mon 2017-05-08 10:34:08, David Woodhouse wrote:
> On Mon, 2017-05-08 at 11:28 +0200, Pavel Machek wrote:
> >
> > Are you sure you have it right in JFFS2? Do you journal block erases?
> > Apparently, that was pretty much non-issue on older flashes.
>
> It isn't necessary in JFFS2. It is a *pure
On Mon, 2017-05-08 at 11:06 +0200, Ricard Wanderlof wrote:
>
> My point is really this: say that the problem is in fact not that the erase
> is cut short due to the power fail, but that the software issues a second
> command before the first erase command has completed, for instance, or
> some o
On Mon, 2017-05-08 at 11:09 +0200, Hans de Goede wrote:
> You're forgetting that the SSD itself (this thread is about SSDs) also has
> a major software component which is doing housekeeping all the time, so even
> if the main CPU gets reset the SSD's controller may still happily be erasing
> blocks
Pavel,
On Mon, May 8, 2017 at 11:28 AM, Pavel Machek wrote:
> Are you sure you have it right in JFFS2? Do you journal block erases?
> Apparently, that was pretty much non-issue on older flashes.
This is what the website says, yes. Do you have hardware where you can
trigger it?
If so, I'd love to
On Mon, 2017-05-08 at 11:28 +0200, Pavel Machek wrote:
>
> Are you sure you have it right in JFFS2? Do you journal block erases?
> Apparently, that was pretty much non-issue on older flashes.
It isn't necessary in JFFS2. It is a *purely* log-structured file
system (which is why it doesn't scale w
On Mon 2017-05-08 08:21:34, David Woodhouse wrote:
> On Sun, 2017-05-07 at 22:40 +0200, Pavel Machek wrote:
> > > > NOTE: unclean SSD power-offs are dangerous and may brick the device in
> > > > the worst case, or otherwise harm it (reduce longevity, damage flash
> > > > blocks). It is also not im
Hi,
On 08-05-17 11:06, Ricard Wanderlof wrote:
On Mon, 8 May 2017, David Woodhouse wrote:
Our empirical testing trumps your "can never happen" theory :)
I'm sure it does. But what is the explanation then? Has anyone analyzed
what is going on using
On Mon, 8 May 2017, David Woodhouse wrote:
> > On Mon, 8 May 2017, David Woodhouse wrote:
> > > Our empirical testing trumps your "can never happen" theory :)
> >
> > I'm sure it does. But what is the explanation then? Has anyone analyzed
> > what is going on using an oscilloscope to verify rela
On Mon, 2017-05-08 at 10:36 +0200, Ricard Wanderlof wrote:
> On Mon, 8 May 2017, David Woodhouse wrote:
> > Our empirical testing trumps your "can never happen" theory :)
>
> I'm sure it does. But what is the explanation then? Has anyone analyzed
> what is going on using an oscilloscope to verify
On Thu, Apr 27, 2017 at 04:25:03PM +0200, Hannes Reinecke wrote:
> When the FCoE sending side becomes congested libfc tries to
> reduce the queue depth on the host; however due to the built-in
> lag before attempting to ramp down the queue depth _again_ the
> message log is flooded with messages
>
On Mon, 8 May 2017, David Woodhouse wrote:
> > I've got a problem with the underlying mechanism. How long does it take to
> > erase a NAND block? A couple of milliseconds. That means that for an erase
> > to be "weak" due to a power fail, the host CPU must issue an erase command,
> > and then t
On Mon, 2017-05-08 at 09:38 +0200, Ricard Wanderlof wrote:
> On Mon, 8 May 2017, David Woodhouse wrote:
>
> >
> > >
> > > [Issue is, if you powerdown during erase, you get "weakly erased"
> > > page, which will contain expected 0xff's, but you'll get bitflips
> > > there quickly. Similar issue e
Update the driver version to 50834
Signed-off-by: Raghava Aditya Renukunta
---
drivers/scsi/aacraid/aacraid.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h
index 58ccd2a..0995265 100644
--- a/drivers/scsi/aacra
On Mon, 8 May 2017, David Woodhouse wrote:
> > [Issue is, if you powerdown during erase, you get "weakly erased"
> > page, which will contain expected 0xff's, but you'll get bitflips
> > there quickly. Similar issue exists for writes. It is solveable in
> > software, just hard and slow... and we
On Tue, May 02, 2017 at 10:45:03AM -0700, Bart Van Assche wrote:
> This patch avoids that when building with W=1 the compiler
> complains that __scsi_init_queue() has not been declared.
> See also commit d48777a633d6 ("scsi: remove __scsi_alloc_queue").
>
> Signed-off-by: Bart Van Assche
> Cc: Ch
On Sun, 2017-05-07 at 22:40 +0200, Pavel Machek wrote:
> > > NOTE: unclean SSD power-offs are dangerous and may brick the device in
> > > the worst case, or otherwise harm it (reduce longevity, damage flash
> > > blocks). It is also not impossible to get data corruption.
>
> > I get that the incre
On Thu, Apr 27, 2017 at 03:08:26PM -0700, jsmart2...@gmail.com wrote:
> From: James Smart
>
> To select the appropriate shost template, the driver is issuing
> a mailbox command to retrieve the wwn. Turns out the sending of
> the command precedes the reset of the function. On SLI-4 adapters,
> t