>> I've seen that Mike reported similar issues in October
>> (http://lists.freebsd.org/pipermail/freebsd-stable/2013-October/075552.html).
>> Did you manage to resolve it?
> I worked around the crash by removing ipv6 from the kernel. The box has
> been functioning without a crash since then.
>> FYI -- after upgrade to 9-STABLE no further crashes occurred, even with IPv6
>> enabled.
> What sort of uptime have you seen with IPv6 enabled?
It's now 19 days and still no crash has occurred.
FYI -- after upgrade to 9-STABLE no further crashes occurred, even with IPv6
enabled.
>>> What sort of uptime have you seen with IPv6 enabled?
>> It's now 19 days and still no crash has occurred.
> Has everything still been stable with IPv6 enabled?
Unfortunately, it crashed after ~30 days.
Dear All,
unfortunately, one of my mpd5 PPPoE access servers has started panicking every
few hours.
I'm running a recent 8.3-STABLE (as of 23rd May) with WITNESS, INVARIANTS and
DEBUG_MEMGUARD compiled in. Unfortunately, I'm unable to catch a crashdump; for
some reason, it is not saved to the dumpdev.
The only
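(For reference, the standard crashdump plumbing looks like this; the values
below are the usual defaults, shown only to illustrate what needs to be in
place for dumps to be written, not taken from this box's actual config:)

# /etc/rc.conf
dumpdev="AUTO"         # or an explicit swap device
dumpdir="/var/crash"   # where savecore(8) writes the dump on next boot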
> One more: does your box have a PS/2 keyboard or USB? It matters too.
> For systems having USB keyboard there is another patch needed to obtain
> crashdumps (by Andriy Gapon):
Thanks a lot. I have a KVM connected over USB. I'll apply this patch.
> I suspect this isn't related to netgraph, but to IPv6 since prelist_remove()
> is found in netinet6/nd6_rtr.c.
>
> Several times I have looked into the ND code and found lots of race-prone
> code there. Maybe some of it was recently fixed by bz@, but it was definitely
> not merged to stable/8.
Thanks a lot, guys. For now, I disabled IPv6 on this BRAS. Let's see if it's
going to help.
>> Thanks a lot, guys. For now, I disabled IPv6 on this BRAS. Let's see if it's
>> going to help.
> Hi,
> Any changes in stability?
Hi,
It's way better now. I've had no crashes since IPv6 was disabled.
> It's way better now. I've had no crashes since IPv6 was disabled.
After re-enabling IPv6, the crash occurred within 6 hours. This time, the
crashdump was properly saved (thanks to the patch suggested by Eugene).
As already stated by bz, the panic is definitely related to races in the IPv6
code:
(kgdb) bt
#0 doadump
> After re-enabling IPv6, the crash occurred within 6 hours. This time, the
> crashdump was properly saved (thanks to the patch suggested by Eugene).
My PPPoE BRAS was stable for 17 days. This morning, it crashed in a different way:
current process = 2762 (mpd5)
trap number = 9
panic: general protection fault
> Did you set net.isr.direct=0 (and/or direct_force)?
> If so, don't do that. Get back to default 1 for these two sysctls.
Both sysctls are set to default values.
This is my /etc/sysctl.conf:
net.inet6.ip6.redirect=0
net.inet.icmp.drop_redirect=1
net.inet6.icmp6.rediraccept=0
hw.acpi.power_butto
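(To double-check Eugene's point, the two netisr values can be queried at
runtime; this one-liner is mine, not from the original mail:)

sysctl net.isr.direct net.isr.direct_force   # both should read 1 (the default)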
> It seems Przemyslaw Frasunek uses proxy ARP?
> I have no such problems, but I do not use proxy ARP.
> Could you get rid of it, Przemyslaw?
No, I don't use proxy ARP. I have about 300 PPPoE ng interfaces and 10 VLANs
with plain IP traffic. The ARP table has fewer than 50 entries, all of
Hello,
I'm using mpd 5.5 on three PPPoE routers, each servicing about 300 concurrent
PPPoE sessions. The routers are based on the Intel SR1630GP hardware platform
and run FreeBSD 7.3-RELEASE.
I'm experiencing stability issues related to Netgraph. None of the above
routers can survive more than 20-30 days of uptime
> In this dump, can we look for where the 0x74 came from? Can you look at
> ng_name_hash[hash]?
(kgdb) print hash
No symbol "hash" in current context.
(kgdb) info all
eax            0xff9a       -102
ecx            0xe7ce6895   -405903211
edx            0xff9a       -102
ebx
> (kgdb) print *ng_name_hash[116].lh_first
It looks like this one is corrupted:
(kgdb) print *ng_name_hash[116].lh_first.nd_nodes.le_next.nd_nodes.le_next.nd_nodes.le_next.nd_nodes.le_next
$19 = {nd_name = "ng258", '\0' , nd_type = 0xc61871a0,
nd_flags = 0, nd_refs = 1, nd_numhooks = 0, nd_priv
> And in this one, can you please show *hook->hk_peer ?
(kgdb) print *hook->hk_peer
$2 = {
hk_name = "\b\000\000\000
\000\000\000\004\000\000\000\001\000\000\000ŐRí\003\003ö\0248cmd4\000\000\000",
hk_private = 0x0, hk_flags = 0, hk_refs = 0,
hk_type = 0, hk_peer = 0x0, hk_node = 0x0, hk_hooks
> On 14.01.2011 18:46, Mike Tancsa wrote:
> I also have very loaded mpd/PPPoE servers that panic all the time:
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/153255
> http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/153671
I've just got yet another panic on an mpd5 box after about 30 days of uptime.
Hello,
I have upgraded one of my mpd5-based PPPoE access servers from 7.3-RELEASE to
7.4-RELEASE. Just after the upgrade, I started getting the following errors:
Mar 31 13:48:06 lsm-gw mpd: [B-150] Bundle: Interface ng149 created
Mar 31 13:48:06 lsm-gw mpd: [B-150] can't create ppp node at ".:"->"b150":
Operation not permitted
> I believe this netgraph/mpd stability problem is solved by glebius' patches.
> I have been testing them in the lab and in production for many weeks, and
> they just eliminated my panics altogether. Those patches have been committed
> to HEAD and RELENG_8.
Any chance of backporting them to RELENG_7?
[...]
> Mar 31 13:48:06 lsm-gw mpd: [B-150] Bundle: Interface ng149 created
> Mar 31 13:48:06 lsm-gw mpd: [B-150] can't create ppp node at ".:"->"b150":
> Operation not permitted
[...]
I'm still looking for help in investigating this issue. The problem appears on
two 7.4 boxes, while the 7.3 boxes are working OK.
On 08.04.2011 22:13, Przemyslaw Frasunek wrote:
> I'm still looking for help in investigating this issue. The problem appears
> on two 7.4 boxes, while the 7.3 boxes are working OK. Ktrace shows that
> indeed some of the sendto() calls on the netgraph control socket are failing
> Use the command "vmstat -z|egrep 'ITEM|NetGraph'" and check the FAILURES
> column. If you see non-zero values there, you need to increase the netgraph
> memory limits net.graph.maxdata and net.graph.maxalloc using /boot/loader.conf.
Thanks, indeed it helped a lot. I also noticed that other zones have
non-zero failure counts
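(A sketch of the tuning Eugene describes; the limits are real loader tunables,
but the values below are illustrative only and would need sizing to the actual
session count:)

# check for allocation failures first:
vmstat -z | egrep 'ITEM|NetGraph'
# /boot/loader.conf -- illustrative values:
net.graph.maxdata=65536
net.graph.maxalloc=65536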
> Use the command "vmstat -z|egrep 'ITEM|NetGraph'" and check the FAILURES
> column. If you see non-zero values there, you need to increase the netgraph
> memory limits net.graph.maxdata and net.graph.maxalloc using /boot/loader.conf.
Unfortunately, increasing net.graph.maxdata & net.graph.maxalloc didn't
solve the problem.
> IMO, no kind of memory allocation code (malloc, uma, the netgraph item
> allocator) ever returns EPERM; they return ENOMEM or ENOBUFS.
>
> So, there is a bug somewhere else.
I think so, but to me it still looks like a resource shortage. As I wrote
before, when EPERM starts appearing, I'm unable to
> Increase the sysctl kern.ipc.maxsockbuf.
> I was forced to raise it up to 80 MB (sic!) as 8 MB was not enough for me.
Yay, things are getting worse. Increasing maxsockbuf caused a crash after 2-3
hours. There was no crashdump, and the last thing in the log was:
Apr 11 12:32:40 lsm-gw kernel: ad4: FAILURE - out
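(For the record, the change being tested here is a one-liner; 83886080 bytes
is the 80 MB figure Eugene mentioned:)

# /etc/sysctl.conf
kern.ipc.maxsockbuf=83886080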
> Do you use i386 or amd64? I use amd64, it has much more KVA space
> (kernel virtual area).
I use i386. I suppose I'll need to raise KVA_PAGES to 512 and then slightly
raise vm.kmem_size. What are the safe values for i386?
I wonder why I got a crash without a dump rather than the "kmem_map too small"
panic.
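(A sketch of the i386 tuning being considered; KVA_PAGES=512 doubles the kernel
address space from 1 GB to 2 GB, and the kmem_size value below is only a guess,
not a tested recommendation:)

# i386 kernel config:
options KVA_PAGES=512
# /boot/loader.conf -- must stay comfortably below the KVA size:
vm.kmem_size="1G"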
> T> Ah, I found where EPERM comes from. Will fix it soon.
> Sorry, I was wrong :(
I managed to track the EPERMs down to the ng_make_node() call from
ng_mkpeer() in netgraph/ng_base.c:1460.
I'll add some additional printfs inside ng_make_node().
The following reply was made to PR kern/122290; it has been noted by GNATS.
From: Przemyslaw Frasunek
To: bug-followup@freebsd.org
Cc:
Subject: Re: kern/122290: [netgraph] [panic] Netgraph related "kmem_map too
small" panics
Date: Tue, 10 May 2011 22:30:31 +0200
It seems to be f
Hello,
we are experiencing interesting behaviour with dummynet enabled on IPv6
interfaces.
When the following rules are added:
add pipe 24 ip from any to any in recv vlan1
add pipe 25 ip from any to any out xmit vlan1
all ICMPv6 packets passing on vlan1 are being damaged:
10:55:53.180801 IP6 fe80:
Dear all,
We are running FreeBSD 9.2-RELEASE-p3 on a few PPPoE access servers, each
servicing about 1000 customers. Each server exchanges customers' /32 (for IPv4)
and /64 (for IPv6) routes using OSPF via BIRD.
A few times a month, we experience routing table corruption, which causes
spurious
>> Has anyone seen this before?
> Fixed in 9 by r257389 (so you should try either stable or 10.x).
Thanks a lot! I'll schedule an upgrade.