An underlying issue here is why applications decide to set the TCP_NODELAY option on their sockets at all, rather than just letting Nagle's algorithm do the right thing. I recall some handwaving about this in the X server some years ago to make mouse movements "smoother".
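
For reference, this is roughly what such an application does to turn Nagle off; a minimal sketch, assuming "s" is an already-connected TCP socket (the disable_nagle() wrapper is just for illustration):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>    /* TCP_NODELAY */
    #include <stdio.h>

    /*
     * Disable Nagle's algorithm on an already-connected TCP socket.
     * After this, every small write goes out as its own segment
     * instead of being coalesced -- exactly the tinygram behaviour
     * complained about below.
     */
    static int
    disable_nagle(int s)
    {
            int on = 1;

            if (setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                &on, sizeof(on)) == -1) {
                    perror("setsockopt(TCP_NODELAY)");
                    return (-1);
            }
            return (0);
    }
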
For the problem at hand, if both the client and server machines didn't set TCP_NODELAY, then there'd only be one packet smaller than the TCP MSS in flight between the transmitter and receiver at any one time. I think that poking OpenSSH to not set the TCP_NODELAY option "fixed" this problem.

I was just pondering the TCP implementation in 4.5-PRERELEASE, and there doesn't appear to be any explicit delay after a write in the TCP packetization code, other than Nagle's algorithm. So setting TCP_NODELAY is almost certainly the Wrong Thing for most applications to do. Perhaps there ought to be a warning in the man page about being a poor network citizen, flooding the Internet with tinygrams and otherwise making the performance of your application generally suck.

louie