On Friday 29 July 2005 12:35, Julian Elischer wrote:
> I do this to great effect.. consider:
> two sites connected by links in which the bottleneck is 200KB/sec (1 E1?)
> When a lot of data is flowing from 1 to 2, then data from 2 to 1 is also
> slowed down, because the acks have to go through the queues on the ingress
> side of the bottleneck router.
>
> I add a dummynet entry on 1, limiting output to 190KB/sec, so that the
> queue is in dummynet and not the intermediate router, and then allow small
> ack packets to bypass that queue. As a result the data from 2 to 1 also
> flows at near capacity, and with a much lower latency. Since data flows
> tend to be large packets, I sometimes actually prioritise ALL small
> packets, allowing interactive stuff to bypass ftps etc., and sometimes I
> do it on both ends.
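A minimal sketch of the setup Julian describes, assuming FreeBSD ipfw/dummynet; the interface name em0, the rule numbers, and the 80-byte "small packet" cutoff are assumptions, not taken from his actual rules:

```shell
# Shape outbound traffic to just under the 200KB/s bottleneck so the
# queue builds locally in dummynet rather than in the remote router.
ipfw pipe 1 config bw 190KByte/s

# Let small TCP packets (pure acks, interactive traffic) bypass the pipe.
ipfw add 100 allow tcp from any to any out via em0 iplen 0-80

# Everything else outbound goes through the shaped pipe.
ipfw add 200 pipe 1 ip from any to any out via em0
```

Because the pipe is configured slightly below the real link rate, the bottleneck queue lives in dummynet where rule 100 can jump small packets past it.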
I tried to update my ipfw setup, but I could not manage to get ipfw + natd to work with stateful rules :( Although, since natd runs in userland, I may as well switch back to ppp(8) to save the extra kernel/userland transitions. I was hoping to be able to use pf + mpd, since that puts all of the packet processing into the kernel.

I did try pf + ipfw + dummynet (eww), but it appears that when the packets get reinjected back into the system after the pipes, they go through pf again, which doesn't like them (not 100% sure why.. maybe they don't match a state properly any more?).

I'll try my hand at the ng_iface ALTQ patch.

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
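For reference, the ordering usually suggested for combining natd with stateful ipfw rules is to divert before check-state, so inbound packets are translated back to internal addresses before the state lookup; a sketch only (the tun0 interface and rule numbers are assumptions, and this is not necessarily what was tried above):

```shell
# Translate first, in both directions, so state entries created on the
# way out match the already-untranslated inbound packets.
ipfw add 100 divert natd ip from any to any via tun0

# Match established dynamic-rule state before any allow/deny rules.
ipfw add 200 check-state

# Create state for outbound TCP connections.
ipfw add 300 allow tcp from any to any out via tun0 setup keep-state
```

If the divert rule comes after check-state instead, inbound replies carry the external address at state-lookup time and never match the dynamic rules, which is one common way this combination fails.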