On 22:13, Boaz Harrosh wrote:
> None of the scsi calls need any locks. The scsi LLDs never
> see these threads since commands are queued through the block
> layer.
That's what everybody believes, but nobody seems to know for sure.
Therefore I did what Andi suggested: make a zero-semantics change
On 20:29, Andi Kleen wrote:
> > Sure, I can do that if James likes the idea. Since not all case
> > statements need the BKL, we could add it only to those for which it
> > isn't clear that it is unnecessary.
> >
> > And this would actually improve something.
>
> I still think it would be a good
On 19:59, Andi Kleen wrote:
> But perhaps for such a long ioctl handler it would be better to move
> the lock/unlock_kernel()s into the individual case ...: statements;
> then it could be eliminated step by step.
Sure, I can do that if James likes the idea. Since not all case
statements need the BKL, it is added only to those for which it isn't
clear that it is unnecessary. This is a first step towards eliminating
the BKL in the scsi code.
Signed-off-by: Andre Noll <[EMAIL PROTECTED]>
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index f1871ea..3063307 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -48,6 +48,7 @@ static int sg_version_num = 30534; /* 2 digits for each com
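The push-down Andi suggested would look roughly like the sketch below:
take lock_kernel() inside only those case statements that have not yet
been shown to be safe without the BKL, so the remaining uses can be
audited and removed one by one. This is a minimal illustration with a
hypothetical handler and made-up command names, not the actual sg.c
change:

#include <linux/errno.h>
#include <linux/smp_lock.h>	/* lock_kernel()/unlock_kernel() */
#include <asm/uaccess.h>	/* put_user() */

/* Hypothetical ioctl commands, for illustration only. */
#define FOO_GET_VERSION	0
#define FOO_RESET	1

static int foo_ioctl(unsigned int cmd, unsigned long arg)
{
	switch (cmd) {
	case FOO_GET_VERSION:
		/* Clearly safe without the BKL: only copies out a constant. */
		return put_user(42, (int __user *)arg);
	case FOO_RESET:
		/*
		 * Not obviously safe without the BKL, so keep taking it
		 * here until this case has been audited individually.
		 */
		lock_kernel();
		/* ... reset work that might still rely on the BKL ... */
		unlock_kernel();
		return 0;
	default:
		return -ENOTTY;
	}
}

Each case proven BKL-free can then drop its lock_kernel()/unlock_kernel()
pair independently, which is the step-by-step elimination mentioned above.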
On 12:05, Mingming Cao wrote:
> > > BTW: Are ext3 filesystem sizes greater than 8T now officially
> > > supported?
> >
> > I think so, but I don't know how much 16TB testing developers and
> > distros are doing - perhaps the linux-ext4 denizens can tell us?
> > -
>
> IBM has done some testing (db
On 10:36, Jens Axboe wrote:
> - Edit .config and set CONFIG_DEBUG_INFO=y (near the bottom)
> - make oldconfig
> - rm block/cfq-iosched.o
> - make block/cfq-iosched.o
> - gdb block/cfq-iosched.o
>
> (gdb) l *cfq_dispatch_insert+0x28
>
> and see what that says. Should not take you more than a minute.
On 10:02, Jens Axboe wrote:
> Do you still have the vmlinux? It'd be interesting to see what
>
> $ gdb vmlinux
> (gdb) l *cfq_dispatch_insert+0x28
>
> says,
The vmlinux in the kernel dir is dated March 5 and my bug report
was Feb 28. So I'm afraid it's gone. I tried the gdb command anyway
but i
On 19:46, Jens Axboe wrote:
> On Wed, Feb 28 2007, Andre Noll wrote:
> > On 16:18, Andre Noll wrote:
> >
> > > With 2.6.21-rc2 I am unable to reproduce this BUG message. However,
> > > writing to both raid systems at the same time via lvm still locks up
On 20:39, Andrew Morton wrote:
> On Wed, 28 Feb 2007 16:37:22 +0100 Andre Noll <[EMAIL PROTECTED]> wrote:
>
> > On 16:18, Andre Noll wrote:
> >
> > > With 2.6.21-rc2 I am unable to reproduce this BUG message. However,
> > > writing to both raid systems
On 16:18, Andre Noll wrote:
> With 2.6.21-rc2 I am unable to reproduce this BUG message. However,
> writing to both raid systems at the same time via lvm still locks up
> the system within minutes.
Screenshot of the resulting kernel panic:
http://systemlinux.org/~maan/shots/ker
On 10:51, Andrew Vasquez wrote:
> On Tue, 27 Feb 2007, Andre Noll wrote:
> > [ 68.532665] BUG: at kernel/lockdep.c:1860 trace_hardirqs_on()
>
> Ok, since 2.6.20, there's been a patch added to qla2xxx which drops the
> spin_unlock_irq() call while attempting to ramp up the queue
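For context, the check behind "BUG: at kernel/lockdep.c:1860
trace_hardirqs_on()" fires when code unconditionally re-enables interrupts
on a path that may run with them disabled, e.g. in hard-irq context. Below
is a minimal sketch of that pattern and the conventional fix, with made-up
names; it is not the actual qla2xxx code:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);	/* stand-in for the driver's lock */

/* Problematic if this can be entered from hard-irq context. */
static void demo_ramp_up(void)
{
	spin_lock_irq(&demo_lock);	/* unconditionally disables IRQs */
	/* ... adjust queue settings ... */
	spin_unlock_irq(&demo_lock);	/* unconditionally re-enables IRQs;
					 * in irq context lockdep flags this */
}

/* Conventional fix: save and restore the caller's interrupt state. */
static void demo_ramp_up_fixed(void)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	/* ... adjust queue settings ... */
	spin_unlock_irqrestore(&demo_lock, flags);
}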
On 11:11, Andre Noll wrote:
> On 10:26, Andrew Vasquez wrote:
> > You are loading some stale firmware that's left over on the card --
> > I'm not even sure what 4.00.70 is, as the latest release firmware is
> > 4.00.27.
>
> That's the firmware which came with the card.
On 10:26, Andrew Vasquez wrote:
> You are loading some stale firmware that's left over on the card --
> I'm not even sure what 4.00.70 is, as the latest release firmware is
> 4.00.27.
That's the firmware which came with the card. Anyway, I just upgraded
the firmware, but the bug remains. The backtrace
Hi
On linux-2.6.20.1, we're seeing hard lockups with 2 raid systems
connected to a qla2xxx card and used as a single volume via lvm.
The system seems to lock up only if data gets written to both raid
systems at the same time.
On a standard kernel nothing makes it to the log; the system just
freezes.