Stuart Henderson [s...@spacehopper.org] wrote:
> 
> It may well be a problem if you're using medium/large altq buffers
> or if you raise net.inet.ip.ifq.maxlen too high..

While I don't disagree in concept (by definition, setting a large
maxlen via sysctl creates a large buffer), I think in practice most
people are simply running ethernet-to-ethernet firewalls, and the real
buffering is already happening somewhere else (on a router or bridge
that goes from a high-speed link to a low-speed link).

People who are concerned about buffer issues on their firewall
(whose interfaces typically all run at the same speed) are looking
in the wrong place.

Raising the IFQ length on a box with two interfaces that both run at
1Gbps is not going to cause a bufferbloat problem. It just gives
OpenBSD a longer queue so it can get more work done when there are
large bursts of traffic. This really isn't the same problem that gets
bandied about.
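For anyone who wants to look at or raise the limit, a minimal sketch
(the value 1024 is just an illustration, and the default and the
drops counter are from memory, so check your own system):

  # read the current IFQ length (default has been 256 on boxes I've seen)
  sysctl net.inet.ip.ifq.maxlen

  # check whether the queue has actually been overflowing
  sysctl net.inet.ip.ifq.drops

  # raise it for this boot only
  sysctl net.inet.ip.ifq.maxlen=1024

  # or make it persistent across reboots
  echo 'net.inet.ip.ifq.maxlen=1024' >> /etc/sysctl.conf

If the drops counter never moves, raising maxlen buys you nothing.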

That problem looks more like a rack of servers connected at 10Gbps,
talking to a client somewhere else at 1Gbps or 100Mbps. The 10Gbps
senders can fill up the potentially large switch or router buffers
far faster than the slow link can drain them, and then everything
waits for the buffers to empty. That is the problem in a nutshell:
fast source, slow receiver, the equipment in between takes the brunt
of the traffic, the traffic that does get delivered sees high
latency, and the cycle repeats.
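
To put rough numbers on it: a switch with 12MB of packet buffer
draining onto a 1Gbps link adds up to about 12MB * 8 / 1Gbps ~= 96ms
of queueing delay when the buffer is full; drain the same buffer at
100Mbps and you are close to a full second. (The 12MB figure is just
an illustration, not a measurement of any particular switch.)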

I'm ignoring the case where altq is used to shape for lower-speed
links; large altq queues would presumably have the same effect there.
If you are creating that fast-to-slow transition yourself with a
queueing discipline, then you want ALTQ to use small queues. I don't
recall any excessive buffering in practice with ALTQ. Isn't there
some measure of support for RED and ECN too?
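
As a minimal sketch of what I mean (pf.conf ALTQ syntax from memory;
the interface name, bandwidth and qlimit are made-up illustrations),
keeping the queue short and letting RED/ECN signal congestion early
instead of letting packets sit:

  # shape em0 down to a 10Mb uplink; qlimit 30 keeps each queue short
  altq on em0 cbq bandwidth 10Mb queue { std, ssh }
  queue std bandwidth 90% qlimit 30 cbq(default red ecn)
  queue ssh bandwidth 10% qlimit 30 cbq(red ecn)

  # assign interactive traffic to its own queue, everything else
  # falls into the default queue
  pass out on em0 proto tcp to port ssh queue ssh
  pass out on em0 queue std

With ecn enabled the queue marks rather than drops where the peer
supports it, which is exactly the opposite of piling up a deep buffer.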
