I did open a bug with Sun about this. It looks like most clients don't
set TCP_NODELAY on debug sockets, but the JDK itself has TCP_NODELAY
hardcoded.
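
For context, what the JDK effectively does on its debug-transport socket
is just a setsockopt() call; a minimal sketch (the fd and the helper name
here are mine, not the JDK's):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Enable or disable TCP_NODELAY on a connected TCP socket; the JDK
     * keeps this permanently enabled on the debugger connection. */
    static int set_nodelay(int fd, int enable)
    {
            return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                              &enable, sizeof(enable));
    }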

In the meantime, is there a way to set or disable Appropriate Byte
Counting on a per-interface basis? (I know it's a protocol option, but
the ability to set protocol options per interface would seem nice.)
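
The only knob I can find is global: the net.ipv4.tcp_abc sysctl (0
disables it; 1 and 2 select the RFC 3465 counting variants, if I'm
reading the docs right), i.e. the equivalent of
"sysctl -w net.ipv4.tcp_abc=N". A sketch of flipping it from a program
by writing the procfs file directly -- system-wide, not per interface:

    #include <stdio.h>

    /* Set the Appropriate Byte Counting mode for the whole stack; there
     * is no per-interface equivalent that I can see. */
    static int set_tcp_abc(int mode)
    {
            FILE *f = fopen("/proc/sys/net/ipv4/tcp_abc", "w");

            if (!f)
                    return -1;
            fprintf(f, "%d\n", mode);
            return fclose(f);
    }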


On 3/9/06, David S. Miller <[EMAIL PROTECTED]> wrote:
> From: Stephen Hemminger <[EMAIL PROTECTED]>
> Date: Thu, 9 Mar 2006 08:33:15 -0800
>
> > A possible solution would be to set cwnd bigger for loopback.
> > If there was a clean way to know that the connection was over loopback,
> > then doing something like this in tcp_init_metrics() to set INIT_CWND:
> >
> >       if (IsLoopback(sk))
> >               dst->metrics[RTAX_INIT_CWND-1] = 10;
> > then tcp_init_cwnd() would return a bigger congestion window.
>
> I'm not even going to entertain workarounds for applications
> that set socket options and then things go wrong because
> the kernel actually does what the application has asked for.
>
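
(For the archives, a rough idea of what Stephen's sketch could look like
concretely. The IFF_LOOPBACK test on dst->dev is my guess at the "clean
way to know", and RTAX_INITCWND is the metric tcp_init_cwnd() reads; this
is illustration only, not a submitted patch.)

    /* Hypothetical hunk inside tcp_init_metrics() in net/ipv4/tcp_input.c:
     * raise the initial-cwnd metric for routes over a loopback device so
     * the later tcp_init_cwnd() call picks up the larger value. */
    if (dst->dev && (dst->dev->flags & IFF_LOOPBACK))
            dst->metrics[RTAX_INITCWND - 1] = 10;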
