rihad wrote:
Luigi Rizzo wrote:
On Mon, Oct 05, 2009 at 04:29:02PM +0500, rihad wrote:
Luigi Rizzo wrote:
...
you keep omitting the important info, i.e. whether individual
pipes have drops, significant queue lengths, and so on.
Sorry. Almost every pipe has 0 in the last Drp column, but some are
above zero. I'm just not sure how this can be helpful to anyone.
because you were complaining about 'dummynet causing drops and
waste of bandwidth'.
Now, drops could be due to either
1) some saturation in the dummynet machine (memory shortage, CPU
shortage, etc.) which causes unwanted drops;
I too think the box is hitting some other global limit and dropping
packets. If not, then how come there isn't a single drop between
4 a.m. and 10 a.m., when the traffic load is 250-330 Mbit/s?
2) intentional drops introduced by dummynet because a flow exceeds
its queue size. These drops are those shown in the 'Drop'
column in 'ipfw pipe show' (they are cumulative, so you
should do an 'ipfw pipe delete; ipfw pipe 5120 config ...'
whenever you want to re-run the stats, or compute the
differences between subsequent reads, to figure out what
happens).
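A minimal sketch of that reset-and-diff workflow (pipe 5120 as in
the thread; the bw/queue values are hypothetical placeholders,
substitute the real configuration):

    # deleting the pipe zeroes its cumulative counters
    ipfw pipe 5120 delete
    ipfw pipe 5120 config bw 2Mbit/s queue 10000   # hypothetical values
    # take two snapshots some time apart and compare the Drp column
    ipfw pipe show > /tmp/drops.1
    sleep 60
    ipfw pipe show > /tmp/drops.2
    diff /tmp/drops.1 /tmp/drops.2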
If all the drops you are seeing are of type 2, then there is nothing
you can do to remove them: you set a bandwidth limit, the
client is sending faster than it should, perhaps with UDP,
so even RED/GRED won't help you, and you see the drops
once the queue starts to fill up.
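For reference, a hedged sketch of what enabling GRED on a pipe looks
like; the parameters (w_q/min_th/max_th/max_p) are illustrative only,
and as noted above it only helps flows that back off, i.e. TCP:

    # illustrative GRED parameters: weight 0.002, thresholds at
    # 20/80 slots, max drop probability 0.1 -- tune to the queue size
    ipfw pipe 5120 config bw 2Mbit/s queue 100 gred 0.002/20/80/0.1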
Examples below: see the entries in buckets 4 and 44.
Then I guess I'm left with increasing the slots and seeing how it
goes. Currently it's set to 10000 for each pipe. Thanks for your and
Eugene's efforts, I appreciate it.
If you are seeing drops that are not listed in 'pipe show',
then you need to investigate where the packets are lost;
again, it could be on the output queue of the interface
(due to the burstiness introduced by dummynet), or a shortage
of mbufs (but this did not seem to be the case from your
previous stats), or something else.
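A few standard places to look, as a sketch (all stock FreeBSD
tools; the sysctl names assume the default IP input path):

    netstat -m                            # mbuf usage, denied requests
    netstat -id                           # per-interface errors/drops
    sysctl net.inet.ip.intr_queue_drops   # drops on the IP input queue
    sysctl net.inet.ip.intr_queue_maxlen  # and its current limit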
This indeed is not a problem, proved by the fact that, like I said,
short-circuiting with an "ipfw allow ip from any to any" rule before
the dummynet pipe rules instantly eliminates all drops, and the bce0
and bce1 load evens out (bce0 is used for input, and bce1 for output).
No, it could still be a problem, because dummynet releases all the
packets that are due to be let out during a tick at once, instead of
having them arrive spread out through the tick. It also does one pipe
at a time, which means that related packets arrive at once, followed
by packets from other sessions; this may produce differences in some
cases.
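One common mitigation, as a hedged sketch: raising kern.hz shortens
the tick, so dummynet releases smaller bursts more often. It is a
loader tunable and needs a reboot; 1000-2000 is a typical range:

    # /boot/loader.conf -- raise the clock tick rate so each
    # dummynet tick carries a smaller burst of packets
    kern.hz=2000

After the reboot, 'sysctl kern.clockrate' shows the tick rate in
effect.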
It's all up to you to run measurements, possibly
without omitting potentially significant data
(e.g. sysctl -a net.inet.ip)
or making assumptions (e.g. you have configured
5000 slots per queue, but with only 50k mbufs in total
there is no chance to guarantee 5000 slots to each
queue -- all you will achieve is giving a lot of slots
to the greedy nodes, and very little to the other ones).
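For scale, taking those example numbers literally: 50,000 mbufs /
5,000 slots per queue means at most 10 queues can hold a full backlog
at the same time; with the figures quoted below (111,111 mbufs,
10,000 slots per pipe) it is still only about 11 pipes.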
Well, I've been monitoring this stuff. It has never gone above 20000
mbufs in use (111111 is the current limit).
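A minimal monitoring sketch along those lines (the one-minute
interval and the two-line summary are arbitrary choices):

    # sample the mbuf and cluster usage lines once a minute
    while :; do
        date
        netstat -m | head -2
        sleep 60
    done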
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"