>Seems to me that spending money on a real packetshaper would be a
>better investment than donating to compromise on the free stuff (not
>that I'd want to discourage anyone from contributing to FreeBSD
>generally).
>Your problem is that at high traffic levels you need to reduce traffic
>flows, not
Barney Cordoba wrote:
Your problem is that at high traffic levels you need to reduce traffic
flows, not just delay it as dummynet does.
Dummynet does not "just adds delay".
The entire point of traffic
shaping is to smooth out your traffic flows; not to make it so choppy
that you have packets
--- On Tue, 10/20/09, rihad wrote:
> From: rihad
> Subject: Re: dummynet dropping too many packets
> To: freebsd-net@freebsd.org
> Date: Tuesday, October 20, 2009, 11:41 AM
> I'm so happy today: finally running a
> "ifp->if_snd.ifq_drv_maxlen = 4096;" an
I'm so happy today: finally running a "ifp->if_snd.ifq_drv_maxlen =
4096;" and HZ=4000 kernel with 4100+ online users @500+ mbps, and, most
importantly, with absolutely 0 drops since boot time! ;-) Even if drops
do come in, I'll know where to look first. I'd like to express my
gratitude to Robe
Oleg Bulyzhin wrote:
One more idea to check:
What happens if you rearrange your rules to shape 'in' packets?
i.e. use 'in recv bce0' instead of 'out recv bce0 xmit bce1'.
Wow, shape incoming packets? That's a good one - the packets could still
buffer up waiting to be output. I'm not sure this
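For reference, a minimal sketch of the two rule forms being compared; the pipe and table numbers are taken from fragments quoted elsewhere in the thread and are only illustrative:

  # current form: classify on output (packets received on bce0, sent on bce1)
  ipfw add 1060 pipe tablearg ip from any to table(0) out recv bce0 xmit bce1
  # Oleg's suggestion: classify the same flows on input instead
  ipfw add 1060 pipe tablearg ip from any to table(0) in recv bce0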
On Fri, Oct 09, 2009 at 07:35:01PM +0500, rihad wrote:
> Oleg Bulyzhin wrote:
> > On Wed, Oct 07, 2009 at 03:52:56PM +0500, rihad wrote:
> >
> >> You probably have some special sources of documentation ;-) According to
> >> man ipfw, both "netgraph/ngtee" and "pipe" decide the fate of the packet
rihad wrote:
I've just split both table(0) and table(2) in two, and the output drops
were brought down to 20-80, occasionally up to 150 (in systat -ip). Now
there are around 1700 entries in each of tables 0 and 2, and exactly
1500 entries in each of tables 10 and 20.
01060 pipe tablearg ip from any to table
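A rough sketch of the split described above, assuming the table argument carries the pipe number; table numbers 0 and 10 follow the message, everything else (addresses, pipe numbers) is purely illustrative:

  # the same shaping rule applied to each half of the split table
  ipfw add 1060 pipe tablearg ip from any to table(0)  out recv bce0 xmit bce1
  ipfw add 1061 pipe tablearg ip from any to table(10) out recv bce0 xmit bce1
  # entries carry the pipe number as the table argument
  ipfw table 10 add 192.0.2.17/32 2017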
rihad wrote:
rihad wrote:
Peter Jeremy wrote:
Since the problem only appears to manifest when table(0) exceeds 2000
entries, have you considered splitting (at least temporarily) that
table (and possibly table(2)) into two (eg table(0) and table(4))?
This would help rule out an (unlikely) probl
rihad wrote:
Peter Jeremy wrote:
Since the problem only appears to manifest when table(0) exceeds 2000
entries, have you considered splitting (at least temporarily) that
table (and possibly table(2)) into two (eg table(0) and table(4))?
This would help rule out an (unlikely) problem with table
Peter Jeremy wrote:
Since the problem only appears to manifest when table(0) exceeds 2000
entries, have you considered splitting (at least temporarily) that
table (and possibly table(2)) into two (eg table(0) and table(4))?
This would help rule out an (unlikely) problem with table sizes.
It was
Peter Jeremy wrote:
On 2009-Oct-04 18:47:23 +0500, rihad wrote:
Hi, we have around 500-600 mbit/s traffic flowing through a 7.1R
Dell PowerEdge w/ 2 GigE bce cards. There are currently around 4
thousand ISP users online limited by dummynet pipes of various
speeds. According to netstat -s output
rihad wrote:
Julian Elischer wrote:
rihad wrote:
The change definitely helped! There are now more than 3200 users
online, 460-500 mbps net traffic load, and normally 10-60 (up to 150
once or twice) consistent drops per second as opposed to several
hundred up to 1000-1500 packets dropped pe
On 2009-Oct-04 18:47:23 +0500, rihad wrote:
>Hi, we have around 500-600 mbit/s traffic flowing through a 7.1R Dell
>PowerEdge w/ 2 GigE bce cards. There are currently around 4 thousand ISP
>users online limited by dummynet pipes of various speeds. According to
>netstat -s output around 500-1000
Robert Watson wrote:
On Sat, 17 Oct 2009, rihad wrote:
P.S.: BTW, there's a small admin-type inconsistency in FreeBSD 7.1:
/etc/rc.firewall gets executed before values set by /etc/sysctl.conf
are in effect, so "queue 2000" isn't allowed in ipfw pipe rules (as
net.inet.ip.dummynet.pipe_slot_l
rihad wrote:
Just rebooted with the "ifp->if_snd.ifq_drv_maxlen = 1024;" kernel, all
ok so far. There's currently only 1000 or so entries in the ipfw table
and around 350-400 net mbps load, so I'll wait a few hours for the
numbers to grow to >2000 and 460-480 respectively and see if the drops
Julian Elischer wrote:
rihad wrote:
The change definitely helped! There are now more than 3200 users
online, 460-500 mbps net traffic load, and normally 10-60 (up to 150
once or twice) consistent drops per second as opposed to several
hundred up to 1000-1500 packets dropped per second befor
On Sat, 17 Oct 2009, rihad wrote:
Just rebooted with the "ifp->if_snd.ifq_drv_maxlen = 1024;" kernel, all ok
so far. There's currently only 1000 or so entries in the ipfw table and
around 350-400 net mbps load, so I'll wait a few hours for the numbers to
grow to >2000 and 460-480 respectively
On Sat, 17 Oct 2009, rihad wrote:
P.S.: BTW, there's a small admin-type inconsistency in FreeBSD 7.1:
/etc/rc.firewall gets executed before values set by /etc/sysctl.conf are in
effect, so "queue 2000" isn't allowed in ipfw pipe rules (as
net.inet.ip.dummynet.pipe_slot_limit is only 100 by de
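One way around that ordering problem, sketched under the assumption that the limit only needs to be raised before the pipes are configured (the bandwidth below is illustrative):

  # at the top of the firewall/pipe setup script, before any "queue 2000" pipes:
  sysctl net.inet.ip.dummynet.pipe_slot_limit=2000
  ipfw pipe 1 config bw 512Kbit/s queue 2000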
rihad wrote:
The change definitely helped! There are now more than 3200 users online,
460-500 mbps net traffic load, and normally 10-60 (up to 150 once or
twice) consistent drops per second as opposed to several hundred up to
1000-1500 packets dropped per second before the rebuild. What's
i
rihad wrote:
rihad wrote:
For now we've mostly disabled dummynet and the drops have stopped,
thanks to some extra unused bandwidth we have. But this isn't a real
solution, of course, so this weekend I'm going to try the suggestion
made by Robert Watson:
> In the driver init code in if_bce,
rihad wrote:
so the rules are silently failing without any trace in the log files
- I only saw the errors at the console.
It turns out to be quite easy to fix the logging:
from /etc/syslog.conf:
# uncomment this to log all writes to /dev/console to /var/log/console.log
#console.info
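The fix being alluded to, as a sketch based on the stock comment quoted above: uncomment the console.info line, create the log file, and restart syslogd.

  # /etc/syslog.conf
  console.info                                    /var/log/console.log

  touch /var/log/console.log
  chmod 600 /var/log/console.log
  /etc/rc.d/syslogd restart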
rihad wrote:
For now we've mostly disabled dummynet and the drops have stopped,
thanks to some extra unused bandwidth we have. But this isn't a real
solution, of course, so this weekend I'm going to try the suggestion
made by Robert Watson:
> In the driver init code in if_bce, the following
Robert Watson wrote:
On Thu, 15 Oct 2009, rihad wrote:
meaning that USABLE_TX_BD is expected to be smaller than MAX_TX_BD.
What if MAX_TX_BD is itself way smaller than 1024, which I'll
eventually set ifq_drv_maxlen to? Can a driver guru please comment on
this? In a few days I'm going to try
On Thu, 15 Oct 2009, rihad wrote:
meaning that USABLE_TX_BD is expected to be smaller than MAX_TX_BD. What if
MAX_TX_BD is itself way smaller than 1024, which I'll eventually set
ifq_drv_maxlen to? Can a driver guru please comment on this? In a few days
I'm going to try it anyway, and if the
For now we've mostly disabled dummynet and the drops have stopped,
thanks to some extra unused bandwidth we have. But this isn't a real
solution, of course, so this weekend I'm going to try the suggestion
made by Robert Watson:
> In the driver init code in if_bce, the following code appears:
>
Oleg Bulyzhin wrote:
On Wed, Oct 07, 2009 at 03:52:56PM +0500, rihad wrote:
You probably have some special sources of documentation ;-) According to
man ipfw, both "netgraph/ngtee" and "pipe" decide the fate of the packet
unless one_pass=0. Or do you mean sprinkling smart skiptos here and
the
On Thu, Oct 08, 2009 at 09:18:23AM -0700, Julian Elischer wrote:
> that seems like a bug to me..
> neither tee should ever terminate a search.
Agreed. But a documented bug is a feature ;) and I'm not sure that fixing
this wouldn't break POLA.
>
> if you want to terminate it, add a specific rule to d
On Wed, 07 Oct 2009 20:02:21 +0500 rihad wrote:
>Ingo Flaschberger wrote:
>> Hi,
>>
>> can you send me the dmesg output from your network cards when they are
>> detected at booting?
>>
>Hello,
>
>bce0: mem
>0xf400-0xf5ff irq 16 at device 0.0 on pci7
>bce0: Ethernet address: 00:1d:0
Julian Elischer wrote:
You cannot do anything about it if one of the customers sends a burst
of 3000 UDP packets at their maximum speed (or maybe some combination of
customers does something which results in an aggregate burst rate like
that).
In other words you may always continue to get mome
Julian Elischer wrote:
tee & ngtee are similar with one_pass=0 and different with one_pass=1
that seems like a bug to me..
neither tee should ever terminate a search.
if you want to terminate it, add a specific rule to do so.
Unfortunately I wasn't involved in writing it.
+1
ngtee shouldn'
rihad wrote:
Robert Watson wrote:
I would suggest making just the HZ -> 4000 change for now and see how
it goes.
2018 users online, 73 drops have just occurred.
p.s.: already 123 drops.
It will only get worse after some time.
Traffic load: 440-450 mbps.
top -HS:
last pid: 68314; load averag
Oleg Bulyzhin wrote:
On Wed, Oct 07, 2009 at 09:42:27PM +0500, rihad wrote:
Julian Elischer wrote:
rihad wrote:
Oleg Bulyzhin wrote:
You probably have some special sources of documentation ;-) According
to man ipfw, both "netgraph/ngtee" and "pipe" decide the fate of the
packet unless one_pa
Ian Smith wrote:
On Wed, 7 Oct 2009, rihad wrote:
> Robert Watson wrote:
>
> > I would suggest making just the HZ -> 4000 change for now and see how it
> > goes.
> >
> OK, I will try testing HZ=4000 tomorrow morning, although I'm pretty sure
> there still will be some drops.
Even if
Robert Watson wrote:
I would suggest making just the HZ -> 4000 change for now and see how
it goes.
~4000 online users, ~450-470 mbps traffic, 300-600 global drops per
second. Same ole. Not funny at all.
net.inet.ip.dummynet.io_pkt_drop: 0
net.inet.ip.intr_queue_drops: 0
net.inet.ip.fastforwa
Robert Watson wrote:
I would suggest making just the HZ -> 4000 change for now and see how
it goes.
2018 users online, 73 drops have just occurred.
p.s.: already 123 drops.
It will only get worse after some time.
Traffic load: 440-450 mbps.
top -HS:
last pid: 68314; load averages: 1.35, 1.
Robert Watson wrote:
I would suggest making just the HZ -> 4000 change for now and see how it
goes.
Been running for a few hours under these changed sysctls:
kern.clockrate: { hz = 4000, tick = 250, profhz = 4000, stathz = 129 }
net.inet.ip.dummynet.io_fast: 1
net.inet.ip.dummynet.hash_size: 5
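For reference, a sketch of how these settings are typically applied; the excerpt does not show whether the HZ change was made via the loader tunable or a kernel config option:

  # /boot/loader.conf (alternatively "options HZ=4000" in the kernel config)
  kern.hz=4000
  # /etc/sysctl.conf
  net.inet.ip.dummynet.io_fast=1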
On Wed, 7 Oct 2009, rihad wrote:
> Robert Watson wrote:
>
> > I would suggest making just the HZ -> 4000 change for now and see how it
> > goes.
> >
> OK, I will try testing HZ=4000 tomorrow morning, although I'm pretty sure
> there still will be some drops.
Even if there are, I'd like t
On Wed, Oct 07, 2009 at 09:42:27PM +0500, rihad wrote:
> Julian Elischer wrote:
> > rihad wrote:
> >> Oleg Bulyzhin wrote:
> >
> >> You probably have some special sources of documentation ;-) According
> >> to man ipfw, both "netgraph/ngtee" and "pipe" decide the fate of the
> >> packet unless o
rihad wrote:
Robert Watson wrote:
I would suggest making just the HZ -> 4000 change for now and see how
it goes.
OK, I will try testing HZ=4000 tomorrow morning, although I'm pretty
sure there still will be some drops.
Can someone please say how to increase the "ifnet transmit queue sizes
Julian Elischer wrote:
rihad wrote:
Oleg Bulyzhin wrote:
You probably have some special sources of documentation ;-) According
to man ipfw, both "netgraph/ngtee" and "pipe" decide the fate of the
packet unless one_pass=0. Or do you mean sprinkling smart skiptos here
and there? ;-)
ngtee
rihad wrote:
Oleg Bulyzhin wrote:
You probably have some special sources of documentation ;-) According to
man ipfw, both "netgraph/ngtee" and "pipe" decide the fate of the packet
unless one_pass=0. Or do you mean sprinkling smart skiptos here and
there? ;-)
ngtee should not have any aff
Ingo Flaschberger wrote:
Dear Rihad,
The bge network card seems to have small tx/rx rings?
If I understood src/sys/dev/bge/if_bgereg.h correctly, the
ring size is 512 descriptors, while Intel-based cards
(em) have up to 4096 descriptors.
We have bce, not bge.
I'm gonna try HZ=4000 tomorrow and
Dear Rihad,
The bge network card seems to have small tx/rx rings?
If I understood src/sys/dev/bge/if_bgereg.h correctly, the
ring size is 512 descriptors, while Intel-based cards
(em) have up to 4096 descriptors.
Kind regards,
Ingo Flaschberger
Dear Rihad,
can you also send me a lspci and lspci -v ?
Sorry, this is FreeBSD, not Linux ;-)
You can find lspci in ports.
Kind regards,
Ingo Flaschberger
Ingo Flaschberger wrote:
Hi,
can you send me the dmesg output from your network cards when they are
detected at booting?
Hello,
bce0: mem
0xf400-0xf5ff irq 16 at device 0.0 on pci7
bce0: Ethernet address: 00:1d:09:xx:xx:xx
bce0: [ITHREAD]
bce0: ASIC (0x57081020); Rev (B2); Bus (PCI
Robert Watson wrote:
In the driver init code in if_bce, the following code appears:
ifp->if_snd.ifq_drv_maxlen = USABLE_TX_BD;
IFQ_SET_MAXLEN(&ifp->if_snd, ifp->if_snd.ifq_drv_maxlen);
IFQ_SET_READY(&ifp->if_snd);
Which evaluates to an architecture-specific value due to v
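The change tested later in the thread amounts to replacing the USABLE_TX_BD assignment in the if_bce init code quoted above with a larger constant, roughly as follows (4096 is the value rihad eventually settled on):

  /* enlarge the software send queue instead of sizing it to the TX ring */
  ifp->if_snd.ifq_drv_maxlen = 4096;      /* was: USABLE_TX_BD */
  IFQ_SET_MAXLEN(&ifp->if_snd, ifp->if_snd.ifq_drv_maxlen);
  IFQ_SET_READY(&ifp->if_snd);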
Hi,
can you send me the dmesg output from your network cards when they are
detected at booting?
can you also send me a lspci and lspci -v ?
Kind regards,
ingo flaschberger
Robert Watson wrote:
I would suggest making just the HZ -> 4000 change for now and see how it
goes.
OK, I will try testing HZ=4000 tomorrow morning, although I'm pretty
sure there still will be some drops.
Can someone please say how to increase the "ifnet transmit queue sizes"?
Unfortuna
On Wed, 7 Oct 2009, rihad wrote:
Suggestions like increasing timer resolution are intended to spread out the
injection of packets by dummynet to attempt to reduce the peaks of
burstiness that occur when multiple queues inject packets in a burst that
exceeds the queue depth supported by combin
Robert Watson wrote:
Suggestions like increasing timer resolution are intended to spread out
the injection of packets by dummynet to attempt to reduce the peaks of
burstiness that occur when multiple queues inject packets in a burst
that exceeds the queue depth supported by combined hardware de
Oleg Bulyzhin wrote:
On Wed, Oct 07, 2009 at 03:52:56PM +0500, rihad wrote:
P.S. have you tried net.inet.ip.fastforwarding=1?
Yup, it didn't help at all. Reverting it back to 0 for now.
On Wed, 7 Oct 2009, rihad wrote:
Robert Watson wrote:
On Wed, 7 Oct 2009, rihad wrote:
snapshot of the top -SH output in the steady state? Let top run for a
few minutes and then copy/paste the first 10-20 lines into an e-mail.
Sure. Mind you: now there's only 1800 entries in each of the
Why isn't it enabled by default?
Answering myself: probably because of this:
The IP fastforwarding path does not generate ICMP redirect or source
quench messages.
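For completeness, the knob under discussion, with the caveat just quoted; a one-line change:

  # try it live:
  sysctl net.inet.ip.fastforwarding=1
  # or persist it in /etc/sysctl.conf:
  net.inet.ip.fastforwarding=1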
Oleg Bulyzhin wrote:
On Wed, Oct 07, 2009 at 03:52:56PM +0500, rihad wrote:
You probably have some special sources of documentation ;-) According to
man ipfw, both "netgraph/ngtee" and "pipe" decide the fate of the packet
unless one_pass=0. Or do you mean sprinkling smart skiptos here and
the
On Wed, Oct 07, 2009 at 03:52:56PM +0500, rihad wrote:
> You probably have some special sources of documentation ;-) According to
> man ipfw, both "netgraph/ngtee" and "pipe" decide the fate of the packet
> unless one_pass=0. Or do you mean sprinkling smart skiptos here and
> there? ;-)
you ca
It's frightening to me that someone is managing such a large network
with dummynet. Talk about stealing your customers' money.
We have no customers - we're a charity ISP.
Any alternatives? ALTQ?
--- On Wed, 10/7/09, rihad wrote:
> From: rihad
> Subject: Re: dummynet dropping too many packets
> To: "Oleg Bulyzhin"
> Cc: freebsd-net@freebsd.org
> Date: Wednesday, October 7, 2009, 7:23 AM
> rihad wrote:
> > Oleg Bulyzhin wrote:
> >> On W
rihad wrote:
Oleg Bulyzhin wrote:
On Wed, Oct 07, 2009 at 03:16:27PM +0500, rihad wrote:
Oleg Bulyzhin wrote:
On Wed, Oct 07, 2009 at 02:23:47PM +0500, rihad wrote:
Few questions:
1) why are you not using fastforwarding?
2) search_steps/searches ratio is not that good, are you using
'buckets
Oleg Bulyzhin wrote:
On Wed, Oct 07, 2009 at 03:16:27PM +0500, rihad wrote:
Oleg Bulyzhin wrote:
On Wed, Oct 07, 2009 at 02:23:47PM +0500, rihad wrote:
Few questions:
1) why are you not using fastforwarding?
2) search_steps/searches ratio is not that good, are you using 'buckets'
keyword in
On Wed, Oct 07, 2009 at 03:16:27PM +0500, rihad wrote:
> Oleg Bulyzhin wrote:
> > On Wed, Oct 07, 2009 at 02:23:47PM +0500, rihad wrote:
> >
> > Few questions:
> > 1) why are you not using fastforwarding?
> > 2) search_steps/searches ratio is not that good, are you using 'buckets'
> >keyword i
rihad wrote:
net.isr.direct=0
Sorry, net.isr.direct=1
I forgot to revert it back after copy'n'pasting top -SH for Mr. Robert.
top -SH:
last pid: 2528; load averages: 0.69, 0.89, 0.96
up 1+02:15:20 15:26:01
165 processes: 12 running, 137 sleeping, 16 waiting
rihad wrote:
Now the probability of drops (as monitored by netstat -s's "output
packets dropped due to no bufs, etc.") is definitely a function of
traffic load and the number of items in an ipfw table. I've just
decreased the size of the two tables from ~2600 to ~1800 each and the
drops instan
Oleg Bulyzhin wrote:
On Wed, Oct 07, 2009 at 02:23:47PM +0500, rihad wrote:
Few questions:
1) why are you not using fastforwarding?
2) search_steps/searches ratio is not that good, are you using 'buckets'
keyword in your pipe configuration?
3) you have net.inet.ip.fw.one_pass = 0, is it inten
On Wed, Oct 07, 2009 at 02:23:47PM +0500, rihad wrote:
Few questions:
1) why are you not using fastforwarding?
2) search_steps/searches ratio is not that good, are you using 'buckets'
keyword in your pipe configuration?
3) you have net.inet.ip.fw.one_pass = 0, is it intended?
--
Oleg.
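Question 2 refers to the hash table dummynet uses for its queues; a minimal sketch of a pipe with per-source-IP dynamic queues and an enlarged hash, where the bandwidth and sizes are purely illustrative:

  ipfw pipe 1 config bw 1Mbit/s queue 100 mask src-ip 0xffffffff buckets 256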
Robert Watson wrote:
On Wed, 7 Oct 2009, rihad wrote:
snapshot of the top -SH output in the steady state? Let top run for
a few minutes and then copy/paste the first 10-20 lines into an e-mail.
Sure. Mind you: now there's only 1800 entries in each of the two ipfw
tables, so any drops have
On Wed, 7 Oct 2009, rihad wrote:
snapshot of the top -SH output in the steady state? Let top run for a few
minutes and then copy/paste the first 10-20 lines into an e-mail.
Sure. Mind you: now there's only 1800 entries in each of the two ipfw
tables, so any drops have stopped. But it only t
Oleg Bulyzhin wrote:
Please show your 'sysctl net.inet.ip' output.
net.inet.ip.portrange.randomtime: 45
net.inet.ip.portrange.randomcps: 10
net.inet.ip.portrange.randomized: 1
net.inet.ip.portrange.reservedlow: 0
net.inet.ip.portrange.reservedhigh: 1023
net.inet.ip.portrange.hilast: 65535
net.i
Robert Watson wrote:
On Wed, 7 Oct 2009, rihad wrote:
rihad wrote:
I've yet to test how this direct=0 improves extensive dummynet drops.
Ooops... After a couple of minutes, suddenly:
net.inet.ip.intr_queue_drops: 1284
Bumped it up a bit.
Yes, I was going to suggest that moving to deferre
On Wed, 7 Oct 2009, rihad wrote:
rihad wrote:
I've yet to test how this direct=0 improves extensive dummynet drops.
Ooops... After a couple of minutes, suddenly:
net.inet.ip.intr_queue_drops: 1284
Bumped it up a bit.
Yes, I was going to suggest that moving to deferred dispatch has probabl
Please show your 'sysctl net.inet.ip' output.
--
Oleg.
=== Oleg Bulyzhin -- OBUL-RIPN -- OBUL-RIPE -- o...@rinet.ru ===
rihad wrote:
I've yet to test how this direct=0 improves extensive dummynet drops.
Ooops... After a couple of minutes, suddenly:
net.inet.ip.intr_queue_drops: 1284
Bumped it up a bit.
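The two knobs involved here, as a sketch; 4096 is the value suggested elsewhere in the thread:

  # defer inbound IP processing from the interrupt handler to the netisr thread
  sysctl net.isr.direct=0
  # and give the IP input queue more room to absorb bursts
  sysctl net.inet.ip.intr_queue_maxlen=4096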
Robert Watson wrote:
On Wed, 7 Oct 2009, Eugene Grosbein wrote:
On Tue, Oct 06, 2009 at 08:28:35PM +0500, rihad wrote:
I don't think net.inet.ip.intr_queue_maxlen is relevant to this
problem, as net.inet.ip.intr_queue_drops is normally zero or very
close to it at all times.
When net.isr.d
On Tue, Oct 06, 2009 at 09:30:39PM +0400, Oleg Bulyzhin wrote:
> On Tue, Oct 06, 2009 at 12:17:47PM +0200, Luigi Rizzo wrote:
>
> > io_pkt_drop only reports packets dropped to errors (missing pipes,
> > randomly forced packet drops which you don't use, no buffers and so on).
>
> You are mistaken
On Tue, Oct 06, 2009 at 12:17:47PM +0200, Luigi Rizzo wrote:
> io_pkt_drop only reports packets dropped to errors (missing pipes,
> randomly forced packet drops which you don't use, no buffers and so on).
You are mistaken here. io_pkt_drop is the total number of packets dropped by
dummynet_io().
--
Robert Watson wrote:
and on current high-performance systems the hardware tends to
take care of that already pretty well (i.e., most modern 10gbps cards).
Do you think that us switching to 10gbps cards would solve the problem
discussed? We're currently at 500-550 mbps and rising, so we might a
On Wed, 7 Oct 2009, Eugene Grosbein wrote:
On Tue, Oct 06, 2009 at 08:28:35PM +0500, rihad wrote:
I don't think net.inet.ip.intr_queue_maxlen is relevant to this problem, as
net.inet.ip.intr_queue_drops is normally zero or very close to it at all
times.
When net.isr.direct is 1, this queue
Eugene Grosbein wrote:
On Tue, Oct 06, 2009 at 08:28:35PM +0500, rihad wrote:
I don't think net.inet.ip.intr_queue_maxlen is relevant to this problem,
as net.inet.ip.intr_queue_drops is normally zero or very close to it at
all times.
When net.isr.direct is 1, this queue is used very seldom.
On Tue, Oct 06, 2009 at 08:28:35PM +0500, rihad wrote:
> I don't think net.inet.ip.intr_queue_maxlen is relevant to this problem,
> as net.inet.ip.intr_queue_drops is normally zero or very close to it at
> all times.
When net.isr.direct is 1, this queue is used very seldom.
Would you change it
Eugene Grosbein wrote:
On Tue, Oct 06, 2009 at 06:14:58PM +0500, rihad wrote:
No, generally handles much more. Please show your ipfw rule(s)
containing 'tablearg'.
01031 xx allow ip from any to any
01040 xx skipto 1100 ip from table(127) to any out
recv
On Tue, Oct 06, 2009 at 06:14:58PM +0500, rihad wrote:
> >No, generally handles much more. Please show your ipfw rule(s)
> >containing 'tablearg'.
>
> 01031 xx allow ip from any to any
> 01040 xx skipto 1100 ip from table(127) to any out
> recv bce0 xmit b
Luigi Rizzo wrote:
On Tue, Oct 06, 2009 at 02:34:32PM +0500, rihad wrote:
Luigi Rizzo wrote:
8664 output packets dropped due to no bufs, etc.
net.inet.ip.dummynet.io_pkt_drop: 111
io_pkt_drop only reports packets dropped to errors (missing pipes,
randomly forced packet drops which you don't
Eugene Grosbein wrote:
On Tue, Oct 06, 2009 at 02:21:38PM +0500, rihad wrote:
Is there some limit on the number of IP addresses in an ipfw table?
No, generally handles much more. Please show your ipfw rule(s)
containing 'tablearg'.
01031 xx allow ip from any to any
0104
On Tue, Oct 06, 2009 at 02:34:32PM +0500, rihad wrote:
> Luigi Rizzo wrote:
> >On Tue, Oct 06, 2009 at 02:21:38PM +0500, rihad wrote:
> >>rihad wrote:
> >>>Julian Elischer wrote:
> rihad wrote:
> >Luigi Rizzo wrote:
> >>2. your test with 'ipfw allow ip from any to any' does not
> >>
On Tue, Oct 06, 2009 at 02:21:38PM +0500, rihad wrote:
> Is there some limit on the number of IP addresses in an ipfw table?
No, generally handles much more. Please show your ipfw rule(s)
containing 'tablearg'.
Eugene
Luigi Rizzo wrote:
On Tue, Oct 06, 2009 at 02:21:38PM +0500, rihad wrote:
rihad wrote:
Julian Elischer wrote:
rihad wrote:
Luigi Rizzo wrote:
2. your test with 'ipfw allow ip from any to any' does not
prove that the interface queue is not saturating, because
you also remove the burstines
On Tue, Oct 06, 2009 at 02:21:38PM +0500, rihad wrote:
> rihad wrote:
> >Julian Elischer wrote:
> >>rihad wrote:
> >>>Luigi Rizzo wrote:
> 2. your test with 'ipfw allow ip from any to any' does not
> prove that the interface queue is not saturating, because
> you also remove the b
rihad wrote:
Julian Elischer wrote:
rihad wrote:
Luigi Rizzo wrote:
2. your test with 'ipfw allow ip from any to any' does not
prove that the interface queue is not saturating, because
you also remove the burstiness that dummynet introduces,
and so the queue is driven differently.
Julian Elischer wrote:
rihad wrote:
Luigi Rizzo wrote:
2. your test with 'ipfw allow ip from any to any' does not
prove that the interface queue is not saturating, because
you also remove the burstiness that dummynet introduces,
and so the queue is driven differently.
How do I inves
Eugene Grosbein wrote:
On Mon, Oct 05, 2009 at 07:30:15PM +0500, rihad wrote:
How do I investigate and fix this burstiness issue?
Please also show:
sysctl net.isr
sysctl net.inet.ip.intr_queue_maxlen
net.isr.swi_count: 65461359
net.isr.drop: 0
net.isr.queued: 32843752
net.isr.deferred: 0
net
Eugene Grosbein wrote:
Try to increase net.inet.ip.intr_queue_maxlen up to 4096.
You sure? Packets are never dropped once I add "allow ip from any to
any" before pipes, effectively turning dummynet off. Yet I've doubled it
for starters (50->100); let's see if it works in an hour or so, when i
Eugene Grosbein wrote:
On Mon, Oct 05, 2009 at 07:30:15PM +0500, rihad wrote:
How do I investigate and fix this burstiness issue?
Please also show:
sysctl net.isr
sysctl net.inet.ip.intr_queue_maxlen
net.isr.swi_count: 65461359
net.isr.drop: 0
net.isr.queued: 32843752
net.isr.deferred: 0
net
On Mon, Oct 05, 2009 at 10:49:45AM -0700, Julian Elischer wrote:
> >There is a rumour about FreeBSD's schedulers...
> >That they are not so good for 8 cores and that you may get MORE speed
> >by disabling 4 cores if it's possible for your system.
> >Or even using uniprocessor kernel.
> >Only rumour
On Mon, Oct 05, 2009 at 07:30:15PM +0500, rihad wrote:
> >>How do I investigate and fix this burstiness issue?
> >
> >Please also show:
> >
> >sysctl net.isr
> >sysctl net.inet.ip.intr_queue_maxlen
>
> net.isr.swi_count: 65461359
> net.isr.drop: 0
> net.isr.queued: 32843752
> net.isr.deferred: 0
rihad wrote:
Julian Elischer wrote:
rihad wrote:
Julian Elischer wrote:
Luigi Rizzo wrote:
Is it possible to know what sessions are losing packets?
Yes, of course, by running ipfw pipe show ;-)
There's one confusing thing, though: net.inet.ip.dummynet.io_pkt_drop
isn't increasing while aro
Julian Elischer wrote:
rihad wrote:
Julian Elischer wrote:
Luigi Rizzo wrote:
Is it possible to know what sessions are losing packets?
Yes, of course, by running ipfw pipe show ;-)
There's one confusing thing, though: net.inet.ip.dummynet.io_pkt_drop
isn't increasing while around 800-1000 p
Julian Elischer wrote:
How do I investigate and fix this burstiness issue?
higher Hz rate?
Hmm, mine is 1000. I'll try bumping it up to 2000 (via
/boot/loader.conf) but since a reboot is required I think it'll have to
wait for a while.
rihad wrote:
Julian Elischer wrote:
Luigi Rizzo wrote:
Taildrop does not really help with this. GRED does much better.
I think the first problem here is to figure out _why_ we have
the drops, as the original poster said that queues are configured
with a very large amount of buffer (and I think
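dummynet's (G)RED is enabled per pipe; a minimal sketch, with the w_q/min_th/max_th/max_p parameters chosen purely for illustration:

  ipfw pipe 1 config bw 1Mbit/s queue 100 gred 0.002/40/70/0.1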
Eugene Grosbein wrote:
On Mon, Oct 05, 2009 at 08:07:18PM +0500, rihad wrote:
What is CPU load in when the load is maximum?
It has 2 quad-cores, so I'm not sure. Here's the output of top -S:
There is a rumour about FreeBSD's schedulers...
That they are not so good for 8 cores and that you ma
Julian Elischer wrote:
rihad wrote:
Luigi Rizzo wrote:
On Mon, Oct 05, 2009 at 03:52:39PM +0500, rihad wrote:
Eugene Grosbein wrote:
On Mon, Oct 05, 2009 at 02:28:58PM +0500, rihad wrote:
Still not sure why increasing queue size as high as I want doesn't
completely eliminate drops.
The goa
rihad wrote:
Luigi Rizzo wrote:
On Mon, Oct 05, 2009 at 05:12:11PM +0500, rihad wrote:
Luigi Rizzo wrote:
On Mon, Oct 05, 2009 at 04:29:02PM +0500, rihad wrote:
Luigi Rizzo wrote:
...
you keep omitting the important info, i.e. whether individual
pipes have drops, significant queue lengths an