On Fri, Apr 30, 2010 at 4:42 AM, Pete French wrote:
>
> I've copied in the original poster of the problem to see how he is
> doing, but as far as I am concerned the problem has gone away. Certainly
> the things I was doing before to trigger it no longer do so. Of course
> in the normal state of things...
Alexander Motin wrote:
> I'm glad to hear it. But a gmirror rebuild by itself may not be enough
> of a test. It uses very few requests at the same time. You need to reach
> the "Queue full" state, so you should make at least 150 concurrent write
> requests to the mirror at the same time.
Am going to hammer it for a bit with a number of...
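One rough way to generate that kind of load is sketched below; it assumes
/dev/mirror/gm0 is a scratch mirror whose contents may be destroyed, and
the request counts are only a starting point:

    # CAUTION: writes raw zeroes over the mirror - scratch volumes only.
    # Start 200 background dd writers at staggered offsets, then wait.
    for i in $(jot 200); do
        dd if=/dev/zero of=/dev/mirror/gm0 bs=64k count=256 \
            oseek=$((i * 512)) 2>/dev/null &
    done
    wait

Whether 200 single-request writers actually keep 150+ requests outstanding
at once depends on how quickly each dd turns its requests around, so scale
the numbers up if "Queue full" is not reached.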
...and my other test machine just completed a gmirror rebuild as well, with
no problems. So initially it does look very much like it is fixed. Thanks
Alexander! If I have any more problems I will let you know.
-pete.
Alexander Motin wrote:
> Seems like I've found the reason. Attached patch fixes problem for me.
Interesting - one of my machines has finished a gmirror resync. The first
time I tried this it did lock up, but with media read errors (which may be
genuine on these old drives). But this time it has finished, and without
the lockup.
Alexander Motin wrote:
> Seems like I've found the reason. Attached patch fixes problem for me.
Thanks, am trying this now.
-pete.
Pete French wrote:
>> I have some 29160N locally and I'll try to reproduce this.
>
> I would suggest you try gmirror across two drives - that is how
> both myself and the original poster first noticed the issue.
Thanks. First step successful - I can steadily reproduce the problem on
CURRENT. raidtest with 200 I/O streams over a gmirror of two disks on the
same channel triggers the issue in seconds. Any I/O on the channel dies
after both disks report a "Queue full" error at the same time. The rest of
the system works fine. If...
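For picking two disks that actually share a channel, and for inspecting the
tagged-openings (device queue depth) state that "Queue full" handling acts
on, camcontrol can help; da0 below is a placeholder device name:

    # Disks listed on the same scbusN share a channel:
    camcontrol devlist
    # Show the current number of tagged openings for a disk, e.g. da0:
    camcontrol tags da0 -v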
Alexander Motin wrote:
> I have some 29160N locally and I'll try to reproduce this.
I would suggest you try gmirror across two drives - that is how
both myself and the original poster first noticed the issue.
cheers,
-pete.
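For anyone recreating that setup, a minimal sketch follows; gm0, da0 and
da1 are placeholder names, and the label step overwrites gmirror metadata
on both disks, so use scratch drives:

    # Build a two-disk mirror for testing (disks on the same channel).
    gmirror load                   # load geom_mirror.ko
    gmirror label -v gm0 da0 da1   # create /dev/mirror/gm0
    gmirror status                 # watch the synchronization state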
> RELENG_8 csup'd with date=2010.02.14.00.00 works perfectly for days.
>
> RELENG_8 csup'd with date=2010.02.15.00.00 dead-locks the disk I/O
> subsystem. Network still operational but anything needing disk hangs.
> Power-cycle required.
An additional point (and thanks to Andy for doing all the work...
Hi, firstly:
RELENG_8 csup'd with date=2010.02.14.00.00 works perfectly for days.
RELENG_8 csup'd with date=2010.02.15.00.00 dead-locks the disk I/O
subsystem. Network still operational but anything needing disk hangs.
Power-cycle required.
kernel config is GENERIC with KDB, DDB and BREAK_TO_DEBUGGER...
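For reference, the date-pinned checkouts Andy describes can be reproduced
with a supfile along these lines (host and base/prefix paths are
placeholders; swap the date line between the two values above to bisect):

    *default host=cvsup.freebsd.org
    *default base=/var/db
    *default prefix=/usr
    *default release=cvs tag=RELENG_8
    *default date=2010.02.14.00.00
    *default delete use-rel-suffix compress
    src-all

Then run csup on it (e.g. "csup -L 2 /path/to/supfile"), rebuild world and
kernel, and retest.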