>>>> FYI -- after upgrade to 9-STABLE no further crashes occurred, even with
>>>> IPv6 enabled.
>>> What sort of uptime have you seen with ipv6 enabled ?
>> Now it's 19 days and still no crash occurred.
> Has all been stable still with ipv6 enabled ?

Unfortunately, it crashed after ~30 days.
Hi,

Has all been stable still with ipv6 enabled?

On 3/14/2014 1:00 PM, Przemyslaw Frasunek wrote:
>>> FYI -- after upgrade to 9-STABLE no further crashes occurred, even with IPv6
>>> enabled.
>> What sort of uptime have you seen with ipv6 enabled ?
> Now it's 19 days and still no crash occurred.
I would sometimes get a month on a box with ~ 500 users.

>>> FYI -- after upgrade to 9-STABLE no further crashes occurred, even with IPv6
>>> enabled.
>> What sort of uptime have you seen with ipv6 enabled ?
> Now it's 19 days and still no crash occurred.
On 3/9/2014 7:33 AM, Przemyslaw Frasunek wrote:
> I've seen that Mike reported similar issues in October
> (http://lists.freebsd.org/pipermail/freebsd-stable/2013-October/075552.html).
> Did you manage to resolve it?

I worked around the crash by removing ipv6 from the kernel. The box has
been functioning without a crash since then.
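For readers wanting to reproduce that workaround: below is a minimal sketch of a
custom kernel config built without IPv6. The config name "BRAS" is hypothetical;
nooptions is the standard way to subtract an option inherited from GENERIC.

  # /usr/src/sys/amd64/conf/BRAS -- hypothetical custom kernel config
  include GENERIC
  ident   BRAS
  nooptions INET6        # build the kernel without IPv6 support

  # rebuild and install:
  # cd /usr/src && make buildkernel installkernel KERNCONF=BRAS && shutdown -r now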
16.07.2012 07:13, Bjoern A. Zeeb wrote:
> On 15. Jul 2012, at 20:54 , Mike Tancsa wrote:
>> On 7/10/2012 2:24 AM, Przemyslaw Frasunek wrote:
>>>> It seems Przemyslaw Frasunek uses proxyarp?
>>>> I have no such problems but I do not use proxyarp.
>>>> Could you get rid of it, Przemyslaw?
>>>
>>> No, I don't use proxy ARP. I have about 300 PPPoE ng interfaces and 10 VLANs
>>> with plain IP traffic. ARP table has only < 50 entries, all of them are dynamic.
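As a side note, whether proxy ARP is in use can be verified directly: published
(proxy) entries are flagged in arp(8) output, so a quick check looks like this:

  # proxy ARP entries are marked "published"; no output means none configured
  arp -an | grep published
  # total entry count, to compare against the <50 dynamic entries above
  arp -an | wc -l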
On Mon, Jul 9, 2012 at 4:12 AM, Gleb Smirnoff wrote:
> This looks very much related to a known race in ARP code.
>
> See this email and related thread:
>
> http://lists.freebsd.org/pipermail/freebsd-net/2012-March/031865.html
>
> Ryan didn't check in any patches since, and I failed to follow on th[...]
> Did you set net.isr.direct=0 (and/or direct_force)?
> If so, don't do that. Get back to default 1 for these two sysctls.

Both sysctls are set to default values.

This is my /etc/sysctl.conf:

net.inet6.ip6.redirect=0
net.inet.icmp.drop_redirect=1
net.inet6.icmp6.rediraccept=0
hw.acpi.power_butto[...]
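For reference, verifying the defaults mentioned in the quoted advice is a
one-liner (sysctl names as they existed on 8.x; later branches replaced
net.isr.direct with net.isr.dispatch):

  # both should report 1 on a stock 8.x kernel
  sysctl net.isr.direct net.isr.direct_force
  # restore the defaults if they were overridden:
  sysctl net.isr.direct=1 net.isr.direct_force=1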
> After reenabling IPv6, the crash occurred within 6 hours. This time, crashdump
> was properly saved (thanks to patch suggested by Eugene).

My PPPoE BRAS was stable for 17 days. This morning, it crashed in another way:

current process = 2762 (mpd5)
trap number = 9
panic: general protection fault
> It's way better now. I have had no crashes since disabling IPv6.

After reenabling IPv6, the crash occurred within 6 hours. This time, the
crashdump was properly saved (thanks to the patch suggested by Eugene).

As already stated by bz, the panic is definitely related to races in IPv6 code:

(kgdb) bt
#0 doadump
[...]
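For completeness, a backtrace like the one above is extracted from a saved dump
roughly as follows (paths assume the default /var/crash layout and a kernel
built with debug symbols):

  # open the most recent core against the running kernel's symbols
  kgdb /boot/kernel/kernel /var/crash/vmcore.0
  (kgdb) bt          # print the panic backtrace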
On 18. Jun 2012, at 22:51 , Adrian Chadd wrote:
> Hi,
>
> Is it possible to get you to set up a test BRAS running 9-STABLE, so
> you can provide feedback about how stable ipv4/ipv6 PPPoE is for you?
>
> It's great that you've solved it for 7.x, and I know that bz and
> others know about a variety of fun issues in the networking stack that
> may be related to t[...]

I have another LNS to deploy soon and I can enable IPv6 and use RELENG9.
I have in the past been able to [...]
>> Thanks a lot guys. For now, I disabled IPv6 on this BRAS. Let's see if it's
>> going to help.
> Hi,
> Any changes in stability ?

Hi,

It's way better now. I have had no crashes since disabling IPv6.
Hi,

On Fri, Jun 15, 2012 at 7:50 AM, Eugene Grosbein wrote:
> 15.06.2012 18:33, Przemyslaw Frasunek wrote:
>> Dear All,
>>
>> unfortunately, one of my mpd5 PPPoE access servers started panicking every
>> few hours.
>>
>> I'm running recent 8.3-STABLE (as of 23rd May) with WITNESS, INVARIANTS and
>> DEBUG_MEMGUARD compiled.
>
> I suspect this isn't related to netgraph, but to IPv6, since prelist_remove()
> is found in netinet6/nd6_rtr.c.
>
> Several times I looked into ND code and found lots of race-prone code there.
> Maybe some of it was recently fixed by bz@, but definitely not merged to stable/8.

Thanks a lot guys. For now, I disabled IPv6 on this BRAS. Let's see if it's
going to help.
> One more: does your box have a PS/2 keyboard or USB? It matters too.
> For systems having a USB keyboard there is another patch needed to obtain
> crashdumps (by Andriy Gapon):

Thanks a lot. I have a KVM connected using USB. I'll apply this patch.
Dear All,

unfortunately, one of my mpd5 PPPoE access servers started panicking every few
hours.

I'm running recent 8.3-STABLE (as of 23rd May) with WITNESS, INVARIANTS and
DEBUG_MEMGUARD compiled. Unfortunately, I'm unable to catch a crashdump. For
some reason, it is not saved on dumpdev.

The only [...]
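Since crashdumps come up repeatedly in this thread, here is the standard dump
setup for reference (a sketch only; the swap device name is an example and must
match the box):

  # /etc/rc.conf -- enable kernel crash dumps
  dumpdev="AUTO"          # dump to the configured swap device
  dumpdir="/var/crash"    # where savecore(8) writes vmcore files

  # arm the dump device by hand and recover a dump after reboot:
  dumpon /dev/ad4s1b      # example swap partition
  savecore /var/crash /dev/ad4s1b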
> T> Ah, I found where EPERM comes from. Will fix it soon.
> Sorry, I was wrong :(

I managed to track down the EPERMs to the ng_make_node() call from
ng_mkpeer() in netgraph/ng_base.c:1460.

I'll add some additional printfs inside ng_make_node().
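The failing operation can be approximated from userland with ngctl(8); this is
only a sketch (the hook names are illustrative, not the exact call mpd5 makes):

  # create a ppp node attached to ngctl's socket node via the new node's "inet" hook
  ngctl mkpeer . ppp test inet
  # on an affected box, node creation is what starts failing with
  # "Operation not permitted"; listing nodes fails at the same time
  ngctl list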
On 11.04.2011 21:37, Przemyslaw Frasunek wrote:
>> You should really consider moving to amd64 if your hardware supports it.
>> For routing with mpd and serving lots of users there is no reason to stay
>> with i386.
>
> Yes, hardware is pretty new and standard - I use Intel SR1630GP platforms on [...]
> Do you use i386 or amd64? I use amd64, it has much more KVA space
> (kernel virtual area).

I use i386. I suppose I'll need to raise KVA_PAGES to 512 and then slightly
raise vm.kmem_size. What are the safe values for i386?

I wonder why I got a crash without a dump and not the "kmem_map too small" panic.
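For reference, the two knobs being discussed live in different places:
KVA_PAGES is a kernel config option, while vm.kmem_size is a loader tunable.
A sketch with the values proposed above (illustrative, not tested recommendations):

  # kernel config: KVA_PAGES=512 gives i386 2GB of kernel virtual address
  # space instead of the default 1GB (KVA_PAGES=256)
  options KVA_PAGES=512

  # /boot/loader.conf: then allow the kernel memory map to grow
  vm.kmem_size="768M"     # example value; must still fit inside the enlarged KVA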
> Increase sysctl kern.ipc.maxsockbuf.
> I was forced to raise it up to 80MB (sic!) as 8MB was not enough for me.

Yay, things are getting worse. Increasing maxsockbuf caused a crash after 2-3
hours. There was no crashdump and the last thing in the log was:

Apr 11 12:32:40 lsm-gw kernel: ad4: FAILURE - out [...]
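For the record, the tuning quoted above is a single sysctl; 80MB is 83886080
bytes (shown only as a sketch; as reported above, on this workload it made
things worse):

  # /etc/sysctl.conf -- raise the socket buffer ceiling to 80MB
  kern.ipc.maxsockbuf=83886080

  # or apply at runtime:
  sysctl kern.ipc.maxsockbuf=83886080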
> IMO, any kind of memory allocation code (malloc, uma, the netgraph item
> allocator) never returns EPERM; they return ENOMEM or ENOBUFS.
>
> So, there is a bug somewhere else.

I think so, but to me it still looks like a resource shortage. As I wrote
before, when EPERM starts appearing, I'm unable to [...]
> Use command "vmstat -z | egrep 'ITEM|NetGraph'" and check the FAILURES column.
> If you see non-zero values there, you need to increase the netgraph memory
> limits net.graph.maxdata and net.graph.maxalloc using /boot/loader.conf.

Unfortunately, increasing net.graph.maxdata & net.graph.maxalloc didn't
solve [...]
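Putting the quoted recipe together, the check and the tuning look roughly like
this (the limit values are illustrative only):

  # look for allocation failures in the netgraph UMA zones
  vmstat -z | egrep 'ITEM|NetGraph'

  # /boot/loader.conf -- raise the netgraph item limits if FAILURES is non-zero
  net.graph.maxdata=65536      # example value
  net.graph.maxalloc=65536     # example value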
> Use command "vmstat -z | egrep 'ITEM|NetGraph'" and check the FAILURES column.
> If you see non-zero values there, you need to increase the netgraph memory
> limits net.graph.maxdata and net.graph.maxalloc using /boot/loader.conf.

Thanks, indeed it helped a lot. I also noticed that other zones have
non-zero [...]
On 10.04.2011 16:00, Przemyslaw Frasunek wrote:
> Eventually I found that this issue is related to mbuf exhaustion. In
> periods when sendto() fails with EPERM, the "requests for mbufs denied"
> counter is increasing and "ngctl list" also fails.

Use command "vmstat -z | egrep 'ITEM|NetGraph'" and [...]
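The "requests for mbufs denied" counter referred to above comes from
netstat(1), so the exhaustion can be watched like this:

  # a growing "denied" count means mbuf allocation failures
  netstat -m | grep denied
  # current usage and limits for mbuf clusters:
  netstat -m | grep 'mbuf clusters'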
On 08.04.2011 22:13, Przemyslaw Frasunek wrote:
> I'm still looking for help in investigating this issue. The problem appears
> on two 7.4 boxes, while 7.3 boxes are working OK. Ktrace shows that indeed
> some of the sendto() calls on the netgraph control socket are failing with EPERM:
[...]

Eventually I found that this issue is related to mbuf exhaustion. In
periods when sendto() fails with EPERM, the "requests for mbufs denied"
counter is increasing and "ngctl list" also fails.
[...]
> Mar 31 13:48:06 lsm-gw mpd: [B-150] Bundle: Interface ng149 created
> Mar 31 13:48:06 lsm-gw mpd: [B-150] can't create ppp node at ".:"->"b150":
> Operation not permitted
[...]

I'm still looking for help in investigating this issue. The problem appears on
two 7.4 boxes, while 7.3 boxes are working OK.