05.07.2011 17:10, Adrian Minta wrote:
CC'ing Alexander Motin, perhaps he knows why this happens
in the case of a very high volume and rate of incoming connections.
> A deeper debug of a failure looks like this:
>
> # grep "L28-6225" /var/log/mpd.log
> Jul 5 13:06:39 lns mpd: [L28-6225] L2TP: Incoming call #70 via control
> connection 0x80b7bfc10 accepted
A deeper debug of a failure looks like this:
# grep "L28-6225" /var/log/mpd.log
Jul 5 13:06:39 lns mpd: [L28-6225] L2TP: Incoming call #70 via control
connection 0x80b7bfc10 accepted
Jul 5 13:06:39 lns mpd: [L28-6225] Link: OPEN event
Jul 5 13:06:39 lns mpd: [L28-6225] LCP: Open event
Jul 5 ...
> You should study why existing connections break:
> do clients disconnect themselves, or does the server disconnect them?
> You'll need to turn on detailed logs and read mpd's documentation.
>
> Also, there are system-wide queues for NETGRAPH messages that can overflow,
> and that is a bad thing. Check them out with
On Mon, Jul 04, 2011 at 08:16:19PM +0300, Adrian Minta wrote:
> >It seems enough. But are you sure your L2TP client will wait
> >for the overloaded daemon to complete the connection? The change will
> >proportionally increase the responsiveness of mpd: it does not have
> >enough CPU horsepower to process requests in a timely manner.
>
> What do you have net.graph.threads set to? With the load avg so high,
> perhaps you are just running into processing limits with so many
> connections? amotin would know.
>
> ---Mike
>
No, I didn't touch it.
lns# sysctl net.graph.threads
net.graph.threads: 8
How big should this be for this kind of load?
What do you have net.graph.threads set to? With the load avg so high,
perhaps you are just running into processing limits with so many
connections? amotin would know.
---Mike
On 7/4/2011 1:16 PM, Adrian Minta wrote:
>> It seems enough. But are you sure your L2TP client will wait
>
>It seems enough. But are you sure your L2TP client will wait
>for the overloaded daemon to complete the connection? The change will
>proportionally increase the responsiveness of mpd: it does not have
>enough CPU horsepower to process requests in a timely manner.
>
>Eugene Grosbein
Actually, something else is happening.
I ...
04.07.2011 15:30, Adrian Minta wrote:
> If I understand correctly, in order to increase the connection rate I need
> to replace 60 with 600 and 10 with 100, like this:
>
>#define SETOVERLOAD(q) do { \
> int t = (q);
On 07/03/2011 10:56 PM, Eugene Grosbein wrote:
There is an internal queue of messages in mpd-5.5 with length 8129.
Messages are generated based on various events and enqueued there, then
processed.
Mpd uses a GRED algorithm to prevent overload: it accepts all new L2TP connections
while the queue holds fewer than 10 messages, ignores them all when it holds more
than 60, and drops a growing fraction of them in between.
On Sun, Jul 3, 2011 at 2:15 PM, Adrian Minta wrote:
> After looking in the mpd log file I found out that this message appears
> when calls are dropped:
> Jul 3 21:21:21 lns mpd: Daemon overloaded, ignoring request.
> Jul 3 21:21:22 lns mpd: Daemon overloaded, ignoring request.
> Jul 3 21:21:23 lns mpd: Daemon overloaded, ignoring request.
Maybe the number of threads?
On Sun, Jul 3, 2011 at 10:15 PM, Adrian Minta wrote:
> After looking in the mpd log file I found out that this message appears
> when calls are dropped:
> Jul 3 21:21:21 lns mpd: Daemon overloaded, ignoring request.
> Jul 3 21:21:22 lns mpd: Daemon overloaded, ignoring request.
>> After looking in the mpd log file I found out that this message appears
>> when calls are dropped:
>> Jul 3 21:21:21 lns mpd: Daemon overloaded, ignoring request.
>> Jul 3 21:21:22 lns mpd: Daemon overloaded, ignoring request.
>> Jul 3 21:21:23 lns mpd: Daemon overloaded, ignoring request.
>>
04.07.2011 02:15, Adrian Minta wrote:
> After looking in the mpd log file I found out that this message appears
> when calls are dropped:
> Jul 3 21:21:21 lns mpd: Daemon overloaded, ignoring request.
> Jul 3 21:21:22 lns mpd: Daemon overloaded, ignoring request.
> Jul 3 21:21:23 lns mpd: Daemon overloaded, ignoring request.
After looking in the mpd log file I found out that this message appears
when calls are dropped:
Jul 3 21:21:21 lns mpd: Daemon overloaded, ignoring request.
Jul 3 21:21:22 lns mpd: Daemon overloaded, ignoring request.
Jul 3 21:21:23 lns mpd: Daemon overloaded, ignoring request.
Jul 3 21:21:23 lns mpd: Daemon overloaded, ignoring request.
> !? 6300 ?!
> When I wrote netgraph and Archie wrote mpd, I think we were thinking in
> terms of a few tens of sessions.
> Of course, others have done a lot of work on both since then...
>
>
I'm a Linux guy and I'm impressed. You did an excellent job!
--
Best regards,
Adrian Minta, MA3173-RIP
On 7/2/11 12:15 PM, Adrian Minta wrote:
Hi,
Without FLOWTABLE the system is stable and I was able to increase the
number of L2TP sessions. A major improvement came when I replaced the
network card with a multiqueue model (igb). The limit is now around 6300
active sessions. If I try to go over this limit, mpd5 starts to lose old sessions.
Hi,
Without FLOWTABLE the system is stable and I was able to increase the
number of L2TP sessions. A major improvement came when I replaced the
network card with a multiqueue model (igb). The limit is now around 6300
active sessions. If I try to go over this limit, mpd5 starts to lose
old sessions.
On Tue, Jun 28, 2011 at 11:53 PM, Bjoern A. Zeeb
<bzeeb-li...@lists.zabbadoz.net> wrote:
>
> > Perhaps it would be best to document what those particular workloads are.
> Apparently, systems with small and seldom-changing routing tables are good
> candidates. However, the distinction is not immediately obvious by skimming
> through the list archives.
> Perhaps it would be best to document what those particular workloads are.
> Apparently, systems with small and seldom-changing routing tables are good
> candidates. However, the distinction is not immediately obvious by skimming
> through the list archives.
Start reading here:
http://confere
On Tue, Jun 28, 2011 at 10:44 PM, Bjoern A. Zeeb
<bzeeb-li...@lists.zabbadoz.net> wrote:
> On Jun 28, 2011, at 8:27 PM, Christian Kratzer wrote:
>
> > Hi,
> >
> > On Tue, 28 Jun 2011, Pawel Tyll wrote:
> >> Hi Adrian,
> >>
> >>> Good news!
> >>> Last night I removed the FLOWTABLE option and since then the server is stable.
On Jun 28, 2011, at 8:27 PM, Christian Kratzer wrote:
> Hi,
>
> On Tue, 28 Jun 2011, Pawel Tyll wrote:
>> Hi Adrian,
>>
>>> Good news!
>>> Last night I removed the FLOWTABLE option and since then the server is stable.
>>> No crash whatsoever, and I was able to increase the number of tunnels.
>> Yeah, FLOWTABLE still needs work, good news on the stability.
Hi,
On Tue, 28 Jun 2011, Pawel Tyll wrote:
Hi Adrian,
Good news!
Last night I removed the FLOWTABLE option and since then the server is stable.
No crash whatsoever, and I was able to increase the number of tunnels.
Yeah, FLOWTABLE still needs work, good news on the stability. Could
you perhaps drop us all a note in two weeks if things keep working?
On 6/28/2011 3:38 PM, Adrian Minta wrote:
> Good news!
> Last night I removed the FLOWTABLE option and since then the server is stable.
> No crash whatsoever, and I was able to increase the number of tunnels.
That's great! If you are getting close to the CPU maxing out, consider
getting rid of snmpd.
Hi Adrian,
> Good news!
> Last night I removed the FLOWTABLE option and since then the server is stable.
> No crash whatsoever, and I was able to increase the number of tunnels.
Yeah, FLOWTABLE still needs work, good news on the stability. Could
you perhaps drop us all a note in two weeks if things keep working?
Good news!
Last night I removed the FLOWTABLE option and since then the server is stable.
No crash whatsoever, and I was able to increase the number of tunnels.
> On 6/27/2011 4:50 PM, Adrian Minta wrote:
>> Thanks to Vlad Galu I was able to acquire a full crashinfo and kernel
>> dump
>> after a system freeze.
On 6/27/2011 4:50 PM, Adrian Minta wrote:
> Thanks to Vlad Galu I was able to acquire a full crashinfo and kernel dump
> after a system freeze. I put all the files at:
> http://pluto.stsisp.ro/fbsd/
>
> I hope this will help somebody in finding the race condition.
Don't know about the race, but on ...
Thanks to Vlad Galu I was able to acquire a full crashinfo and kernel dump
after a system freeze. I put all the files at:
http://pluto.stsisp.ro/fbsd/
I hope this will help somebody in finding the race condition.
On 6/25/2011 5:28 PM, Adrian Minta wrote:
>
> options ALTQ
> options ALTQ_CBQ
> options ALTQ_RED
> options ALTQ_RIO
> options ALTQ_HFSC
> options ALTQ_PRIQ
> # options ALTQ_NOPCC
>
> device pf
> device pflog
> device pfsync
pf and altq add quite a bit of networking overhead. If you are not using
them, consider removing them from the kernel.
On Sat, Jun 25, 2011 at 11:28 PM, Adrian Minta wrote:
> After recompilation with "*default release=cvs tag=RELENG_8" and polling
> disabled, the system still crashes around 4200 sessions. The server has a
> Xeon E5520 CPU and 4 GB of RAM. Here is the crash on the screen:
> http://img232.imageshack.us/img232/6751/crashm.png
After recompilation with "*default release=cvs tag=RELENG_8" and polling
disabled, the system still crashes around 4200 sessions. The server has a
Xeon E5520 CPU and 4 GB of RAM. Here is the crash on the screen:
http://img232.imageshack.us/img232/6751/crashm.png
$ uname -a
FreeBSD lns 8.2-STABLE FreeBSD ...
rozhuk...@gmail.com
>> Cc: 'Adrian Minta'; freebsd-net@freebsd.org
>> Subject: Re: FreeBSD 8.2 and MPD5 stability issues
>>
>> On 6/23/2011 4:17 PM, rozhuk...@gmail.com wrote:
>>>
>>> Try:
>>> net.inet.ip.fastforwarding = 0
> Sent: Friday, June 24, 2011 5:40 AM
> To: rozhuk...@gmail.com
> Cc: 'Adrian Minta'; freebsd-net@freebsd.org
> Subject: Re: FreeBSD 8.2 and MPD5 stability issues
>
> On 6/23/2011 4:17 PM, rozhuk...@gmail.com wrote:
> >
> > Try:
> > net.inet.ip.fastforwarding = 0
On 6/23/2011 4:17 PM, rozhuk...@gmail.com wrote:
>
> Try:
> net.inet.ip.fastforwarding = 0
> net.isr.bindthreads = 1
> net.isr.direct = 0
> net.isr.direct_force = 0
If net.isr.direct is disabled, does setting net.isr.bindthreads do
anything? Also, why disable fastforwarding?
---Mike
4, 2011 4:56 AM
> To: Adrian Minta
> Cc: freebsd-net@freebsd.org
> Subject: Re: FreeBSD 8.2 and MPD5 stability issues
>
> On 6/23/2011 3:18 PM, Adrian Minta wrote:
> > Oops, I spoke too soon ...
> > The system is stable without hyperthreading. With hyperthreading
> > activated it freezes again.
On 6/23/2011 3:18 PM, Adrian Minta wrote:
> Oops, I spoke too soon ...
> The system is stable without hyperthreading. With hyperthreading activated
> it freezes again.
>
I also run with
devd_enable="NO"
in /etc/rc.conf
and
in /etc/sysctl.conf
kern.random.sys.harvest.ethernet=0
Does it actually help?
Oops, I spoke too soon ...
The system is stable without hyperthreading. With hyperthreading activated
it freezes again.
On 6/23/2011 1:37 PM, Adrian Minta wrote:
>> *default release=cvs tag=RELENG_8
>> Then follow the steps to install the kernel and world
>>
>> http://www.freebsd.org/doc/handbook/makeworld.html
>
> Thank you!
> My server is stable now with 3572 sessions.
> The issue now seems to be the rate of new connections.
> On 6/23/2011 1:37 PM, Adrian Minta wrote:
>>> *default release=cvs tag=RELENG_8
>>> Then follow the steps to install the kernel and world
>>>
>>> http://www.freebsd.org/doc/handbook/makeworld.html
>>
>> Thank you!
>> My server is stable now with 3572 sessions.
>> The issue now seems to be the rate of new connections.
> *default release=cvs tag=RELENG_8
>
>
> csup -g -L2 -h cvsup10.freebsd.org /tmp/stable-supfile
>
> This will pull down all the source for RELENG_8, the most up-to-date
> source tree, and put it in /usr/src from the cvsup mirror
> cvsup10.freebsd.org. Where you see references to cvsup (the client
> program), you can use csup from the base system instead.
On 6/23/2011 11:56 AM, Adrian Minta wrote:
>
> Sorry if I'm missing something, but I just started using FreeBSD; I mostly
> use Linux.
> The server uses the AMD64 flavor and has 4 GB of RAM.
> Isn't RELENG_8 the prerelease for 8.0? I have 8.2-RELEASE, isn't that newer?
>
Hi,
RELENG_8 is the present stable branch (8-STABLE), which carries the fixes
made after 8.2-RELEASE.
On Jun 23, 2011, at 12:55 PM, Adrian Minta wrote:
> Hello,
> I am testing a RAS solution and I experience some crashes when the L2TP
> tunnels grow above 3500.
I guess you mean sessions, not tunnels.
> The IPv6 is disabled on the box. With IPv6 enabled the limit is around 1700
> (half). Does anyone have a suggestion what I should try next?
> On 6/23/2011 8:55 AM, Adrian Minta wrote:
>> Hello,
>> I am testing a RAS solution and I experience some crashes when the L2TP
>> tunnels grow above 3500. The IPv6 is disabled on the box. With IPv6
>> enabled the limit is around 1700 (half). Does anyone have a suggestion
>> what I should try next?
On 6/23/2011 8:55 AM, Adrian Minta wrote:
> Hello,
> I am testing a RAS solution and I experience some crashes when the L2TP
> tunnels grow above 3500. The IPv6 is disabled on the box. With IPv6
> enabled the limit is around 1700 (half). Does anyone have a suggestion
> what I should try next?
There ...
Hello,
I am testing a RAS solution and I experience some crashes when the L2TP
tunnels grow above 3500. The IPv6 is disabled on the box. With IPv6
enabled the limit is around 1700 (half). Does anyone have a suggestion
what I should try next?
total traffic: ~100 Mbps
load averages: ~1.8
Kern ...