I believe the card internally does some sort of EtherChannel. This
may explain your findings.
--
Best regards,
Adrian Minta
On 02/11/12 21:18, Michael Sierchio wrote:
I'm trying to use mpd5 to build an L2TP server. It generally works as
expected, except I cannot figure out how to push the route to an
attached network to the PPP client. If I manually add a route on the
client (to the ppp0 interface), things work as expected.
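For reference, a minimal mpd.conf sketch of the server-side piece (labels, pool, and addresses are illustrative, not from this thread). Note that plain PPP/IPCP has no standard mechanism for pushing routes to a client, which is why the manual route on the client is what works; "set iface route" only installs a route on the server when the bundle comes up:

l2tp_server:
        set ippool add pool1 10.0.0.50 10.0.0.99
        create bundle template B
        # server-side route through the tunnel, installed when the bundle comes up
        set iface route 192.168.100.0/24
        set ipcp ranges 10.0.0.1/32 ippool pool1
        create link template L l2tp
        set link action bundle B
        set link enable incoming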
On 02/02/12 19:16, Коньков Евгений wrote:
Hello, Adrian.
You wrote on 2 February 2012, 18:09:33:
AM> A multiqueue network card may help, like a dual-port Intel igb E1G42ET.
Actually, it does not. Intel has hardware separation into interrupt queues.
So having only PPTP traffic on the interface causes ne...
A multiqueue network card may help, like a dual-port Intel igb E1G42ET.
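For anyone trying this, a quick way to confirm the queues are really in use on FreeBSD (standard commands; the loader tunable is an assumption based on the 8.x-era igb driver):

# vmstat -i | grep igb
With MSI-X active, one interrupt line per queue should show up; per-device
settings and statistics live under the dev.igb.0 sysctl tree. The queue count
itself is a loader tunable:

# /boot/loader.conf
hw.igb.num_queues=0    # 0 lets the driver size the queues to the CPU count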
http://www.pfsense.org/
... and ...
http://doc.pfsense.org/index.php/Multi-WAN_2.0
--
Best regards,
Adrian Minta, MA3173-RIPE, www.minta.ro
Broadcom bce NICs are trash. I see the same packet loss on Linux as well.
The best solution is to add or replace them with another brand.
A deeper debug of a failure looks like this:
# grep "L28-6225" /var/log/mpd.log
Jul 5 13:06:39 lns mpd: [L28-6225] L2TP: Incoming call #70 via control
connection 0x80b7bfc10 accepted
Jul 5 13:06:39 lns mpd: [L28-6225] Link: OPEN event
Jul 5 13:06:39 lns mpd: [L28-6225] LCP: Open event
Jul 5 ...
> You should study why existing connections break:
> do clients disconnect themselves, or does the server disconnect them?
> You'll need to turn on detailed logging; read mpd's documentation.
>
> Also, there are system-wide queues for NETGRAPH messages that can overflow,
> and that's a bad thing. Check them out with ...
How big should this be for my server (8 cores)?
--
Best regards,
Adrian Minta, MA3173-RIPE
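Putting the two suggestions above into concrete commands (a sketch; check the mpd log type names against your mpd5 version's documentation):

In mpd.conf's startup section, or at the mpd console, verbose logging:
log +l2tp +lcp +ipcp +link +events

The netgraph item queues live in UMA zones, so non-zero FAIL counts here hint at overflow:
# vmstat -z | grep -i netgraph

The zones are sized by loader tunables, read-only at runtime:
# sysctl net.graph.maxalloc net.graph.maxdata

On an 8-core box pushing thousands of sessions the defaults may well be too small; raising them in /boot/loader.conf and rebooting would be the usual approach.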
>It seems enough. But are you sure your L2TP client will wait
>for an overloaded daemon to complete the connection? The change will
>proportionally increase the responsiveness of mpd - it does not have enough
>CPU horsepower to process requests in a timely manner.
>
>Eugene Grosbein
Actually something else is happening.
I ...
On 07/03/2011 10:56 PM, Eugene Grosbein wrote:
There is an internal queue of messages in mpd-5.5 with length 8129.
Messages are generated from various events and enqueued there, then
processed.
Mpd uses the GRED algorithm to prevent overload: it accepts all new L2TP
connections when the queue has ...
lns mpd: Daemon overloaded, ignoring request.
Jul 3 21:21:24 lns mpd: Daemon overloaded, ignoring request.
Jul 3 21:21:24 lns mpd: Daemon overloaded, ignoring request.
Does anybody know where this limit is set in mpd5?
--
Best regards,
Adrian Minta, MA3173-RIPE
> !? 6300 ?!
> when I wrote netgraph and Archie wrote mpd, I think we were thinking in
> terms of a few tens of sessions.
> Of course, others have done a lot of work on both since then...
>
>
I'm a Linux guy and I'm impressed. You did an excellent job!
--
Best regards,
Adrian Minta
Hi,
Without FLOWTABLE the system is stable and I was able to increase the
number of L2TP sessions. A major improvement came when I replaced the
network card with a multiqueue model (igb). The limit is now around 6300
active sessions. If I try to go over this limit, mpd5 starts to lose
old sessions.
Good news!
Last night I removed the FLOWTABLE option and since then the server has been stable.
No crash whatsoever, and I was able to increase the number of tunnels.
> On 6/27/2011 4:50 PM, Adrian Minta wrote:
>> Thanks to Vlad Galu I was able to acquire a full crashinfo and kernel
>> dump ...
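For anyone reproducing this, a sketch of removing the option, assuming a custom kernel config derived from GENERIC (the config name is illustrative):

# /usr/src/sys/amd64/conf/LNS
include GENERIC
ident   LNS
nooptions FLOWTABLE    # drop the per-CPU flow cache implicated in the freezes

# cd /usr/src && make buildkernel KERNCONF=LNS && make installkernel KERNCONF=LNS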
Thanks to Vlad Galu I was able to acquire a full crashinfo and kernel dump
after a system freeze. I put all the files at:
http://pluto.stsisp.ro/fbsd/
I hope this will help somebody in finding the race condition.
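For reference, this is the standard dumpon/savecore/crashinfo machinery (paths are the defaults):

# /etc/rc.conf
dumpdev="AUTO"    # let rc(8) pick a swap device for kernel dumps

After the panic and reboot, savecore(8) drops vmcore.N into /var/crash, and crashinfo(8) turns it into a readable core.txt.N summary:
# crashinfo -d /var/crash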
After recompiling with "*default release=cvs tag=RELENG_8" and with polling
disabled, the system still crashes at around 4200 sessions. The server has a
Xeon E5520 CPU and 4 GB of RAM. Here is the crash on the screen:
http://img232.imageshack.us/img232/6751/crashm.png
$ uname -a
FreeBSD lns 8.2-STABLE FreeB...
Oops, I spoke too soon ...
The system is stable without hyperthreading. With hyperthreading activated
it freezes again.
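A possible workaround until the underlying race is found, using the long-standing tunable that keeps HT siblings out of the scheduler:

# /boot/loader.conf
machdep.hyperthreading_allowed=0    # do not schedule threads on HT logical CPUs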
> On 6/23/2011 1:37 PM, Adrian Minta wrote:
>>> *default release=cvs tag=RELENG_8
>>> Then follow the steps to install the kernel and world
>>>
>>> http://www.freebsd.org/doc/handbook/makeworld.html
>>
>> Thank you !
>> My server is stable ...
> *default release=cvs tag=RELENG_8
>
>
> csup -g -L2 -h cvsup10.freebsd.org /tmp/stable-supfile
>
> This will pull down all the source for RELENG_8, the most up-to-date
> source tree, and put it in /usr/src from the cvsup mirror
> cvsup10.freebsd.org. Where you see references to cvsup (the clie...
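A minimal supfile consistent with the quoted instructions (the mirror and tag are as quoted; base/prefix are the common defaults):

# /tmp/stable-supfile
*default host=cvsup10.freebsd.org
*default base=/var/db
*default prefix=/usr
*default release=cvs tag=RELENG_8
*default delete use-rel-suffix compress
src-all

With -h cvsup10.freebsd.org on the csup command line, the host line above is overridden anyway.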
> On 6/23/2011 8:55 AM, Adrian Minta wrote:
>> Hello,
>> I am testing a RAS solution and I am experiencing some crashes when the L2TP
>> tunnels grow above 3500. IPv6 is disabled on the box. With IPv6
>> enabled the limit is around 1700 (half). Does anyone have a sugge...
kern.ipc.maxsockbuf=12800
net.graph.maxdgram=1024
net.graph.recvspace=1024
--
Best regards,
Adrian Minta, MA3173-RIPE
tel. +4.0212.022.660 +4.0726.110.369
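For scale, hedged examples of larger values sometimes used on multi-thousand-session boxes (numbers are illustrative, not from this thread; these net.graph.* entries are runtime sysctls, distinct from the loader-only net.graph.maxalloc/maxdata):

# /etc/sysctl.conf
kern.ipc.maxsockbuf=16777216
net.graph.maxdgram=8388608
net.graph.recvspace=8388608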