On Nov 10, 2003, at 1:39 AM, Andre Oppermann wrote:

Jonathan Mini wrote:

All in all I don't think it is worth adding this complexity.

I agree.


This is actually a small value for TCP connections that are used to
forward messages, especially on gigabit links. Highly interactive
web applications that use small HTTP requests (pipelined inside a
persistent connection) to push small state updates are a good
example of this. I wouldn't be surprised to see chatter between
SQL servers follow similar patterns. Applications which use
XML-based messaging often send several small packets per message,
which is unfortunate.

Do you think such applications manage to send 1000 packets per second with less than 256 bytes of payload per packet? Wouldn't the network code collect the data into larger packets (unless TCP_NODELAY is set)?

Traffic like that only happens when TCP_NODELAY is set. Otherwise, you get what you would expect.
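
For reference, this is the standard way an application disables Nagle's
algorithm and ends up sending each small write as its own segment. It is
just a minimal sketch using the plain sockets API, not anything from the
patch:

/*
 * Minimal sketch: disabling Nagle's algorithm with TCP_NODELAY.
 * With Nagle enabled (the default), the stack delays and coalesces
 * small writes; with TCP_NODELAY each small write() tends to leave
 * the host as its own small segment.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>	/* TCP_NODELAY */

int
set_nodelay(int sock)
{
	int on = 1;

	return (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &on,
	    sizeof(on)));
}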

On the other hand, I'm used to looking at proxies, which are not
the general case.  This is why the limits are tunable, after all. =)

Is there a way you could monitor such connections and compile some statistics on how many small packets per second are sent? I could adjust the patch to just report the fact instead of dropping the connection. I could do it for 4.9-R too; it's fairly easy.

Alas, no. This is based on anecdotal experience from our support staff at work.
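
As a rough illustration of the "report instead of drop" idea, the kind of
per-connection accounting being discussed could look something like the
sketch below: count segments whose payload is under 256 bytes and, once
per second, log connections that cross the limit rather than dropping
them. This is a hypothetical user-space sketch; the names and thresholds
are illustrative and not taken from the actual patch.

/*
 * Hypothetical sketch of per-connection small-packet accounting.
 * SMALL_PKT_BYTES and SMALL_PKT_LIMIT are illustrative values only.
 */
#include <stddef.h>
#include <stdio.h>
#include <time.h>

#define SMALL_PKT_BYTES	256	/* "small" payload threshold */
#define SMALL_PKT_LIMIT	1000	/* small packets per second before reporting */

struct conn_stats {
	time_t		window_start;	/* start of current one-second window */
	unsigned	small_pkts;	/* small segments seen in this window */
};

/* Call for every received segment on a connection. */
void
note_segment(struct conn_stats *cs, size_t payload_len, time_t now)
{
	if (now != cs->window_start) {
		if (cs->small_pkts > SMALL_PKT_LIMIT)
			printf("connection exceeded limit: %u small packets/sec\n",
			    cs->small_pkts);
		cs->window_start = now;
		cs->small_pkts = 0;
	}
	if (payload_len < SMALL_PKT_BYTES)
		cs->small_pkts++;
}

Hooking something like this into the receive path (or a packet tap) would
give the per-second counts Andre is asking about without affecting the
connection.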

--
Jonathan Mini
[EMAIL PROTECTED]
http://www.freebsd.org

