Peter Jeremy wrote:
On Tue, Dec 11, 2007 at 12:31:00PM +0400, rihad wrote:
Peter Jeremy wrote:
On Tue, Dec 11, 2007 at 09:21:17AM +0400, rihad wrote:
And if I _only_ want to shape IP traffic to a given speed, without
prioritizing anything, do I still need queues? That was the whole point.
No, you don't. I'm using pipes without queues extensively to simulate
WANs without bothering with any prioritisation.
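For example, something along these lines (a sketch; the numbers are made
up and em0 just stands in for whatever interface you shape on):

# ipfw pipe 1 config bw 512Kbit/s delay 100ms
# ipfw add pipe 1 ip from any to any out via em0

The first command gives the pipe WAN-like bandwidth and latency; the
second pushes outbound traffic through it.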
Great! One fine point remains, though:
# ipfw pipe 1 config bw 128Kbit/s
will use a queue of 50 slots by default. What good are they for, if I
didn't ask for queuing in the first place?
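(If I read the manpage right, the depth can at least be set explicitly,
e.g.

# ipfw pipe 1 config bw 128Kbit/s queue 10

for a 10-slot FIFO instead of the default 50; but that only changes the
number, not the why.)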
'queue' is used in two distinct ways within the ipfw/dummynet code:
1) There's a "queue" object created with 'ipfw queue NNN config ...'
This is used to support WF2Q+ to allow a fixed bandwidth to be
unevenly shared between different traffic types.
2) There is a "queue" option on the "pipe" object that defines a FIFO
associated with the pipe.
I had assumed you were talking about the former (and my response was
related to this) but given your latest posting, and having re-read the
thread, I suspect I may have been wrong. Whilst I don't use queue
objects, I do use the queue option on my pipes.
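To make the distinction concrete, a sketch (the numbers are made up):

# ipfw pipe 1 config bw 1Mbit/s
# ipfw queue 1 config pipe 1 weight 70
# ipfw queue 2 config pipe 1 weight 30

creates two 'queue' objects that share pipe 1's bandwidth 70/30 via
WF2Q+, whereas

# ipfw pipe 2 config bw 128Kbit/s queue 20

just caps the FIFO on pipe 2 at 20 packets, with no scheduling involved.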
Yup, I'm only setting up traffic speed limits.
In your example, you have a pipe that can handle 128kbps (16kBps). If
you write a 1600-byte packet to it, then the packet will reappear
100msec later. Any further packets written to that pipe during that
time will be dropped if they can't be placed on a queue. The
practical throughput depends on the number of queue slots available
and the number of writers. I suggest you do some reading on queueing
theory for the gory details.
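The arithmetic, for what it's worth: 128Kbit/s is 128 bits per
millisecond, and a 1600-byte packet is 12800 bits, so

# echo $((1600 * 8 / 128))
100

milliseconds for the packet to drain through the pipe.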
You've just explained this quite clearly. It follows that pipe queues
are only used as a last line of defense before having to drop the
packet. All fine so far. The reason for my OP was primarily this excerpt
from man ipfw, which I seem to be misinterpreting:
Note that for slow speed links you should keep the queue size short or
your traffic might be affected by a significant queueing delay. E.g.,
50 max-sized ethernet packets (1500 bytes) mean 600Kbit or 20s of queue
on a 30Kbit/s pipe. Even worse effects can result if you get packets
from an interface with a much larger MTU, e.g. the loopback interface
with its 16KB packets.
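(Their arithmetic checks out, if I redo it:

# echo $((50 * 1500 * 8))
600000
# echo $((50 * 1500 * 8 / 30000))
20

i.e. 600Kbit of backlog, or 20 seconds at 30Kbit/s.)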
Does it look like they were talking about item 1) or 2) as you explained
them? As I only care about bandwidth limiting, and not about any packet
prioritizing, should I be concerned with what they're saying? How on
earth could increasing the queue size limit actual throughput? Isn't the
manpage saying that if I give a 128Kbit pipe an unnecessarily large
queue (say, 160Kbyte, 10 seconds' worth of data) clients will have to
wait 10 seconds before starting to get any data?
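My arithmetic, assuming decimal kilo throughout:

# echo $((160 * 8 / 128))
10

i.e. 160Kbyte is 1280Kbit, which takes 10 seconds to drain at 128Kbit/s.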