From: Dmitry Yusupov <[EMAIL PROTECTED]>
Date: Fri, 19 Aug 2005 16:55:47 -0700
> Signed-off-by: Dmitry Yusupov <[EMAIL PROTECTED]>
>
> diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
> --- a/net/ipv4/tcp_output.c
> +++ b/net/ipv4/tcp_output.c
> @@ -925,10 +925,6 @@ static int tcp_tso_should_defer
For jumbo traffic, if the congestion window is big, then TSO defer will
not happen that often. Hence, most of the traffic will be non-TSO, and
that is why we saw performance degradation on our setup. This was the
case for a 10G network, where we tend to set the tcp window very big,
i.e. 1M+. This patch forces to defer ...
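For illustration, here is a minimal standalone sketch of the decision
being described: deferral is skipped whenever the queued skb already
fits within min(send window, congestion window), so a huge congestion
window means almost no deferral and therefore mostly small, non-TSO
sends. The names and constants below are illustrative stand-ins, not
the actual tcp_output.c code.

/*
 * Standalone sketch (not kernel source) of the deferral condition
 * discussed above. Compile with any C compiler and run.
 */
#include <stdio.h>
#include <stdbool.h>

#define MSS 1460u /* typical Ethernet TCP payload */

static unsigned int min_u32(unsigned int a, unsigned int b)
{
    return a < b ? a : b;
}

/* true  -> hold the data back, hoping to build a bigger TSO frame
 * false -> transmit right away                                      */
static bool tso_should_defer(unsigned int skb_len,
                             unsigned int send_win,
                             unsigned int cong_win_segs)
{
    unsigned int limit = min_u32(send_win, cong_win_segs * MSS);

    /* If the skb can be sent fully now, just do it (no deferral). */
    if (skb_len <= limit)
        return false;

    return true;
}

int main(void)
{
    /* 8 KB queued, modest window: the frame exceeds the window,
     * so the stack defers and can coalesce a bigger TSO frame.   */
    printf("cwnd=4:   defer=%d\n", tso_should_defer(8192, 65535, 4));

    /* Same 8 KB with a 1M+ window (the 10G tuning above): the
     * condition never triggers, every chunk goes out immediately,
     * and TSO degenerates toward near-MSS sends.                  */
    printf("cwnd=700: defer=%d\n", tso_should_defer(8192, 2097152, 700));
    return 0;
}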
From: "Leonid Grossman" <[EMAIL PROTECTED]>
Date: Fri, 12 Aug 2005 14:04:47 -0400
> Why does the total length have to be in the header? There may be
> other ways to pass the total TSO length to the driver.
Because the packet has to look like a legal IPv4 frame
for the rest of the networking stack, for ...
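As a sketch of that constraint: a TSO super-frame still begins with an
ordinary IPv4 header, and its 16-bit tot_len field is what keeps the
frame self-consistent for any code that walks it (it is also why one
TSO send is capped at 65535 bytes). The struct below mirrors the
on-wire IPv4 header; the helper is an illustrative stand-in, not a
kernel or driver API.

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h> /* htons(), ntohs() */

struct ipv4_hdr {
    uint8_t  ver_ihl;    /* version (4) + header length in words (5) */
    uint8_t  tos;
    uint16_t tot_len;    /* total length: IP hdr + TCP hdr + data */
    uint16_t id;
    uint16_t frag_off;
    uint8_t  ttl;
    uint8_t  protocol;   /* 6 = TCP */
    uint16_t check;
    uint32_t saddr, daddr;
};

/* Fill the fields a segmenting NIC cares about for one TSO frame. */
static void fill_tso_ip_header(struct ipv4_hdr *ip,
                               uint16_t l4_and_payload_len)
{
    ip->ver_ihl  = (4 << 4) | 5;
    ip->protocol = 6;
    /* tot_len must describe the whole super-frame so that any code
     * inspecting the packet sees a self-consistent IPv4 frame. */
    ip->tot_len  = htons(20 + l4_and_payload_len);
}

int main(void)
{
    struct ipv4_hdr ip = {0};

    fill_tso_ip_header(&ip, 20 /* TCP header */ + 60000 /* payload */);
    printf("tot_len on the wire: %u\n", (unsigned)ntohs(ip.tot_len));
    return 0;
}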
> -Original Message-
> From: David S. Miller [mailto:[EMAIL PROTECTED]
> Sent: Thursday, August 11, 2005 4:35 PM
> To: [EMAIL PROTECTED]
> Cc: Leonid Grossman; netdev@vger.kernel.org
> Subject: Re: Super TSO performance drop
>
> From: Dmitry Yusupov <[EMAIL PROTECTED]>
From: Dmitry Yusupov <[EMAIL PROTECTED]>
Date: Thu, 11 Aug 2005 16:30:49 -0700
> But even with the new TSO schema, this logic cannot send more than
> MAX_SKB_FRAGS fragments, which in the best case is ~60K, just as with
> the old TSO schema. I thought with SuperTSO we could send far more
> than 60K, and then it could be very beneficial ...
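A quick back-of-the-envelope check of that ~60K figure, assuming 4 KB
pages and the MAX_SKB_FRAGS sizing of that era (65536 / PAGE_SIZE + 2);
this is plain arithmetic, not kernel code:

#include <stdio.h>

#define PAGE_SIZE     4096u
#define MAX_SKB_FRAGS (65536u / PAGE_SIZE + 2) /* 18 with 4K pages */

int main(void)
{
    unsigned int frag_bytes = MAX_SKB_FRAGS * PAGE_SIZE;
    unsigned int ipv4_cap   = 65535u - 20 - 20; /* minus IP+TCP hdrs */

    printf("frag capacity:    %u bytes\n", frag_bytes);  /* 73728 */
    printf("IPv4 payload cap: %u bytes\n", ipv4_cap);    /* 65495 */

    /* Whichever limit bites first bounds one TSO send, roughly the
     * ~60K best case cited above. */
    printf("usable TSO send: ~%u KB\n",
           (frag_bytes < ipv4_cap ? frag_bytes : ipv4_cap) / 1024);
    return 0;
}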
On Thu, 2005-08-11 at 16:15 -0700, David S. Miller wrote:
> From: "Leonid Grossman" <[EMAIL PROTECTED]>
> Date: Thu, 11 Aug 2005 19:05:22 -0400
>
> > Basically, it looks like with SuperTSO most of the traffic in our tests
> > comes down to the driver with mss 0 (TSO is mostly "off").
> > With the ...
From: "Leonid Grossman" <[EMAIL PROTECTED]>
Date: Thu, 11 Aug 2005 19:05:22 -0400
> Basically, it looks like with SuperTSO most of the traffic in our tests
> comes down to the driver with mss 0 (TSO is mostly "off").
> With the original TSO, it was always "on" (see below).
> Could you describe how ...
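To make concrete what "comes down to the driver with mss 0" means: a
TSO-capable driver of that era read a per-packet mss (tso_size) and
fell back to plain, one-packet transmission when it was zero. The
types and function below are illustrative stand-ins, not a real driver.

#include <stdio.h>

struct fake_skb {
    unsigned int len;      /* bytes in this packet */
    unsigned int tso_mss;  /* stand-in for tso_size; 0 = no TSO */
};

static void xmit(const struct fake_skb *skb)
{
    if (skb->tso_mss) {
        /* Hardware slices len bytes into tso_mss-sized segments. */
        printf("TSO frame: %u bytes, hw segments of %u\n",
               skb->len, skb->tso_mss);
    } else {
        /* mss == 0: sent as-is, one wire packet per skb. */
        printf("plain frame: %u bytes\n", skb->len);
    }
}

int main(void)
{
    struct fake_skb big   = { .len = 60040, .tso_mss = 1460 };
    struct fake_skb plain = { .len = 1500,  .tso_mss = 0 };

    xmit(&big);   /* original TSO: mostly this case      */
    xmit(&plain); /* what was observed under SuperTSO    */
    return 0;
}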
> -Original Message-
> From: David S. Miller [mailto:[EMAIL PROTECTED]
> Sent: Friday, August 05, 2005 1:19 AM
> To: Leonid Grossman
> Cc: netdev@vger.kernel.org
> Subject: Re: Super TSO performance drop
> I'm not talking about the application, I'm ...
From: "Leonid Grossman" <[EMAIL PROTECTED]>
Date: Thu, 4 Aug 2005 16:10:30 -0400
> I'm not sure there is anything specific about ntttcp packet patterns;
> it's a generic tcp benchmark...
> We did not try iperf or netperf, but typically these programs perform
> extremely close to ntttcp numbers.
I ...
> -Original Message-
> From: David S. Miller [mailto:[EMAIL PROTECTED]
> Sent: Thursday, August 04, 2005 6:52 AM
> To: Leonid Grossman
> Cc: netdev@vger.kernel.org
> Subject: Re: Super TSO performance drop
>
> From: "Leonid Grossman" <[EMAIL PROT
From: "Leonid Grossman" <[EMAIL PROTECTED]>
Date: Wed, 3 Aug 2005 21:07:56 -0400
> We can either provide a remote to the setup, or test incremental patches
> if #16 can be broken into smaller pieces.
I think it would be more productive for you to work on trying
to figure out what about the packet patterns ...
We went through all 16 Super TSO patches to see which one causes the
performance degradation (relative to the original TSO implementation)
that was observed earlier, and it appears to be the last patch #16;
results below.
We can either provide a remote to the setup, or test incremental patches
if #16 can be broken into smaller pieces.