And one more workaround:
On 7.2, ifconfig reports these MTUs:
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
gif0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1280
Apache is running on an em0 alias address.
From a remote machine I try to download a 10KB file while watching tcpdump.
No kernel panic. But 7.2 does not respond to the GET request, but in ap
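Roughly, on the 7.2 box you can watch the exchange while the remote side
fetches a small file; the interface, address and file name below are
placeholders, not the real ones:

  # watch the tunnel side of the exchange
  tcpdump -ni gif0 'tcp port 80' &
  # from the remote machine, fetch a small file from the alias address
  fetch -o /dev/null http://192.0.2.10/test-10k.html

If the GET arrives but the large reply segments never show up, the
1500 -> 1280 MTU mismatch (or filtered ICMP "packet too big" messages)
is the likely suspect.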
Hi Marcel,
On Tue, Apr 27, 2010 at 09:46:08PM -0700, Marcel Moolenaar wrote:
>
> On Apr 27, 2010, at 12:47 PM, Paul Schenkeveld wrote:
>
> >puc0: port
> > 0xe500-0xe51f,0xe520-0xe52f,0xe530-0xe537,0xe538-0xe53f,0xe540-0xe547,0xe548-0xe54f
> > irq 10 at device 14.0 on pci0
> *snip*
> > The
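(Aside: the BARs behind that probe can be double-checked against raw
PCI config space, e.g.:

  # list devices with BARs and vendor strings, then pick out puc0
  pciconf -lbv | grep -B1 -A6 puc0

The grep context counts are just a guess at how many BAR lines the
card reports.)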
Pete French wrote:
>> Thanks. First step successful - I can steadily reproduce the problem on
>> CURRENT. raidtest with 200 I/O streams over a gmirror of two disks on the
>> same channel triggers the issue in seconds. Any I/O on the channel dies
>> after both disks report a "Queue full" error at the same time. The rest of s
> Thanks. First step successful - I can steadily reproduce the problem on
> CURRENT. raidtest with 200 I/O streams over a gmirror of two disks on the
> same channel triggers the issue in seconds. Any I/O on the channel dies
> after both disks report a "Queue full" error at the same time. The rest
> of the system works fine. If
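For anyone wanting to reproduce this, the setup is roughly as follows
(the disk names are placeholders; raidtest is in ports as
sysutils/raidtest):

  # mirror two disks that sit on the same SCSI channel
  gmirror load
  gmirror label -v -b round-robin gm0 /dev/da0 /dev/da1
  newfs /dev/mirror/gm0
  # then drive it with ~200 concurrent I/O streams via raidtest;
  # see the port's docs for the exact genfile/test invocation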
Alexander Motin wrote:
> Pete French wrote:
>>> I have some 29160N locally and I'll try to reproduce this.
>> I would suggest you try gmirror across two drives - that is how
>> both myself and the original poster first noticed the issue.
>
> Thanks. First step successful - I can steadily reproduce
> Seems like I've found the reason. The attached patch fixes the problem for me.
Thanks, am trying this now
-pete.
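For anyone else who wants to test it, applying the patch is the usual
drill (the patch file name here is made up):

  cd /usr/src
  patch -p0 < /tmp/ahc-queue-full.patch
  make buildkernel KERNCONF=GENERIC
  make installkernel KERNCONF=GENERIC
  shutdown -r now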
> Seems like I've found the reason. The attached patch fixes the problem for me.
Interesting - one of my machines has finished a gmirror resync. The first
time I tried this it did lock up, but with media read errors (which may be
genuine on these old drives). But this time it has finished, and without
the
...and my other test machine just completed a gmirror rebuild as well, with no
problems. So initially it does look very much like it
is fixed. Thanks Alexander! If I have any more problems I will
let you know
-pete.
Pete French wrote:
> ...and my other test machine just completed a gmirror rebuild as well, with no
> problems. So initially it does look very much like it
> is fixed. Thanks Alexander! If I have any more problems I will
> let you know
I'm glad to hear it. But gmirror rebuild itself may not be en
> I'm glad to hear it. But gmirror rebuild itself may not be enough of a
> test. It uses very few requests at the same time. You need to reach the
> "Queue full" state, so you should have at least 150 concurrent write
> requests to the mirror running at the same time.
Am going to hammer it for a bit with a number of
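Something like this crude sh loop should do it (the mount point and
sizes are guesses):

  # spawn 200 dd writers against the mirror at once, enough to push
  # both disks into the "Queue full" state
  i=0
  while [ $i -lt 200 ]; do
      dd if=/dev/zero of=/mirror/t.$i bs=64k count=256 &
      i=$((i + 1))
  done
  wait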