SANGTAE HA wrote:
On Jan 9, 2008 9:56 AM, John Heffner <[EMAIL PROTECTED]> wrote:
> I also wonder how much of a problem this is (for now, with window sizes
> of order 10^4 packets). My understanding is that the biggest problems
> arise from O(N^2) time for recovery, because every ack was expensive.
> Ha[...]

On Jan 9, 2008 9:56 AM, John Heffner <[EMAIL PROTECTED]> wrote:
> >> I also wonder how much of a problem this is (for now, with window sizes
> >> of order 10^4 packets). My understanding is that the biggest problems
> >> arise from O(N^2) time for recovery, because every ack was expensive.
> >> Hav[...]

David Miller wrote:
From: John Heffner <[EMAIL PROTECTED]>
Date: Tue, 08 Jan 2008 23:27:08 -0500
> I also wonder how much of a problem this is (for now, with window sizes
> of order 10^4 packets). My understanding is that the biggest problems
> arise from O(N^2) time for recovery, because every ack was expensive. [...]

> Postponing freeing of the skb has major drawbacks. Some time ago I [...]
Yes, the trick would be to make sure that it also does not tie up
too much memory; e.g., it would need some throttling at least.
Also the fast path of kmem_cache_free() is actually not that
much different from just putting something [...]

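The throttling Andi asks for can be sketched concretely. Below is a minimal userspace C sketch, with every name hypothetical and nothing taken from kernel API: the hot path hands buffers off to a pending list in O(1), a drain pass frees them in bounded batches, and a high-water mark falls back to freeing inline so the deferred list cannot tie up unbounded memory.

#include <stdlib.h>

/* Hypothetical userspace sketch of throttled deferred freeing; nothing
 * here is kernel API.  Buffers must be at least sizeof(struct pending)
 * and come from malloc(). */
struct pending {
	struct pending *next;
};

static struct pending *pending_head;	/* deferred, not yet freed */
static unsigned long pending_count;	/* memory-throttling counter */

#define PENDING_HIGH_WATER 4096		/* hypothetical cap */

/* O(1) handoff: the hot path never calls free() itself. */
static void defer_free(void *buf)
{
	struct pending *p = buf;	/* reuse the buffer as a list node */

	p->next = pending_head;
	pending_head = p;
	pending_count++;
}

/* Drain in bounded batches so no single pass burns unbounded CPU. */
static void drain_pending(unsigned int budget)
{
	while (pending_head && budget--) {
		struct pending *p = pending_head;

		pending_head = p->next;
		pending_count--;
		free(p);
	}
}

/* Throttle: past the high-water mark, pay the cost inline instead of
 * letting deferred buffers tie up ever more memory. */
static void free_or_defer(void *buf)
{
	if (pending_count >= PENDING_HIGH_WATER)
		free(buf);
	else
		defer_free(buf);
}

int main(void)
{
	for (int i = 0; i < 10000; i++)
		free_or_defer(malloc(64));
	while (pending_head)
		drain_pending(256);	/* bounded batches until empty */
	return 0;
}

The budget bounds the CPU burst per drain call and the cap bounds the pinned memory; these two knobs are exactly the throttling asked for above.
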
On Tue, 8 Jan 2008, John Heffner wrote:
> Andi Kleen wrote:
> > David Miller <[EMAIL PROTECTED]> writes:
> > > The big problem is that recovery from even a single packet loss in a
> > > window makes us run kfree_skb() for all the packets in a full
> > > window's worth of data when recovery completes. [...]

Hi.
On Wed, Jan 09, 2008 at 08:03:18AM +0100, Andi Kleen ([EMAIL PROTECTED]) wrote:
> > It adds severe spikes in CPU utilization that at even moderate
> > line rates begin to affect RTTs.
> >
> > Or do you think it's OK to process 500,000 SKBs while locked
> > in a software interrupt.
>
> You can always push it into a work queue. [...]

From: "Ilpo_Järvinen" <[EMAIL PROTECTED]>
Date: Tue, 8 Jan 2008 14:12:47 +0200 (EET)
> If I'd hint my boss that I'm involved in something like this I'd
> bet that he also would get quite crazy... ;-) I'm partially paid
> for making TCP more RFCish :-), or at least that the places where
> thing div
From: Andi Kleen <[EMAIL PROTECTED]>
Date: Wed, 9 Jan 2008 08:03:18 +0100
> Also even freeing a lot of objects doesn't have to be
> that expensive. I suspect most of the cost is in taking
> the slab locks, but that could be batched.
We're touching SKB struct members, doing atomics on them, etc. for [...]

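For context on the per-packet cost David is pointing at, here is a userspace C model (the struct and function are hypothetical; the real struct sk_buff and kfree_skb() are more involved): each free is at least one atomic read-modify-write plus allocator calls, so half a million frees in one softirq means half a million atomic operations plus allocator traffic.

#include <stdatomic.h>
#include <stdlib.h>

/* Hypothetical userspace model of the per-packet cost; the real
 * struct sk_buff and kfree_skb() carry much more state. */
struct skb {
	atomic_int users;	/* shared refcount, like skb->users */
	void *data;
};

void skb_free(struct skb *skb)
{
	/* One atomic read-modify-write per packet, even for the last
	 * user -- 500,000 frees means 500,000 of these... */
	if (atomic_fetch_sub(&skb->users, 1) != 1)
		return;
	/* ...plus two allocator calls (header and data) per packet,
	 * each of which can contend on allocator locks. */
	free(skb->data);
	free(skb);
}
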
> It adds severe spikes in CPU utilization that at even moderate
> line rates begin to affect RTTs.
>
> Or do you think it's OK to process 500,000 SKBs while locked
> in a software interrupt.
You can always push it into a work queue. Even put it to
other cores if you want.
In fact this is al[...]

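Andi's work-queue route, sketched as kernel-style C. The symbols deferred_skbs, deferred_free_work and tcp_deferred_free are invented for illustration; only the skb-queue and workqueue primitives are real kernel API:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/workqueue.h>

/* Hypothetical sketch: none of these symbols exist in the kernel. */
static struct sk_buff_head deferred_skbs;
static struct work_struct deferred_free_work;

/* Runs in process context on a kernel worker thread: the frees can be
 * preempted and no longer stall the softirq (or inflate RTTs). */
static void deferred_free_fn(struct work_struct *work)
{
	struct sk_buff *skb;

	while ((skb = skb_dequeue(&deferred_skbs)) != NULL)
		kfree_skb(skb);
}

/* Hot-path replacement for kfree_skb(): O(1) queueing only. */
static void tcp_deferred_free(struct sk_buff *skb)
{
	skb_queue_tail(&deferred_skbs, skb);
	schedule_work(&deferred_free_work);	/* no-op if already pending */
}

static int __init deferred_free_init(void)
{
	skb_queue_head_init(&deferred_skbs);
	INIT_WORK(&deferred_free_work, deferred_free_fn);
	return 0;
}
module_init(deferred_free_init);
MODULE_LICENSE("GPL");

schedule_work() drains on a kernel worker thread instead of in the softirq; a CPU-targeted variant such as queue_work_on(), in kernels that have it, could push the drain to another core, as Andi suggests.
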
From: John Heffner <[EMAIL PROTECTED]>
Date: Tue, 08 Jan 2008 23:27:08 -0500
> I also wonder how much of a problem this is (for now, with window sizes
> of order 10^4 packets). My understanding is that the biggest problems
> arise from O(N^2) time for recovery, because every ack was expensive.

From: Andi Kleen <[EMAIL PROTECTED]>
Date: Wed, 09 Jan 2008 03:25:05 +0100
> David Miller <[EMAIL PROTECTED]> writes:
> >
> > The big problem is that recovery from even a single packet loss in a
> > window makes us run kfree_skb() for all the packets in a full
> > window's worth of data when recovery completes. [...]

From: "Lachlan Andrew" <[EMAIL PROTECTED]>
Date: Tue, 8 Jan 2008 17:34:03 -0800
> John also suggested freeing the packets as a lower priority task, just
> doing it after they're acknowledged.
>
> When the ACK finally comes, you could do something like moving John's
> entire list of packets to a "[...]

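The "move the entire list" step is naturally constant-time with the kernel's skb lists. A sketch for the recovery-completion case, where the whole outstanding window has just been acked (tcp_hand_off_acked and free_later are hypothetical; skb_queue_splice_tail_init() is real kernel API):

#include <linux/skbuff.h>

/* Hypothetical O(1) handoff at recovery completion: splice every skb
 * onto a to-be-freed list in one pointer operation instead of freeing
 * each one in the ACK path.  Caller must hold the locks for both
 * lists; skb_queue_splice_tail_init() itself takes none. */
static void tcp_hand_off_acked(struct sk_buff_head *rtx_queue,
			       struct sk_buff_head *free_later)
{
	skb_queue_splice_tail_init(rtx_queue, free_later);
}

When only a prefix of the queue is acked the splice needs a cut point first, but either way the ACK-path cost stops scaling with the number of packets freed.
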
Just some idle brainstorming on the subject...
It seems the only way to handle network pipes significantly larger (in delay *
bandwidth product) than the processor cache is to make freeing retransmit
data o(n).
Now, there are some ways to reduce the constant factor. The one that
comes to mind first [...]

Andi Kleen wrote:
David Miller <[EMAIL PROTECTED]> writes:
> The big problem is that recovery from even a single packet loss in a
> window makes us run kfree_skb() for all the packets in a full
> window's worth of data when recovery completes.
Why exactly is it a problem to free them all at once?

David Miller <[EMAIL PROTECTED]> writes:
>
> The big problem is that recovery from even a single packet loss in a
> window makes us run kfree_skb() for all the packets in a full
> window's worth of data when recovery completes.
Why exactly is it a problem to free them all at once? Are you worried [...]

Greetings David,
On 08/01/2008, David Miller <[EMAIL PROTECTED]> wrote:
> From: John Heffner <[EMAIL PROTECTED]>
>
> > I haven't thought about this too hard, but can we approximate this by
> > moving SACKed data into a sacked queue, then if something bad happens
> > merge this back into the retransmit queue? [...]

From: John Heffner <[EMAIL PROTECTED]>
Date: Tue, 08 Jan 2008 11:51:53 -0500
> I haven't thought about this too hard, but can we approximate this by
> moving SACKed data into a sacked queue, then if something bad happens
> merge this back into the retransmit queue?
That defeats the impetus for [...]

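John's two-queue idea, sketched as kernel-style C with hypothetical helper names (tcp_move_to_sacked, tcp_renege_merge); the skb-queue primitives and the after()/TCP_SKB_CB() macros are real:

#include <net/tcp.h>

/* Hypothetical two-queue layout: rtx_queue holds only un-SACKed data,
 * sacked_queue holds what the peer has SACKed.  Helper names are
 * invented for illustration. */

/* Called while walking SACK blocks: pull a newly SACKed skb out of
 * the retransmit queue, leaving only the holes behind. */
static void tcp_move_to_sacked(struct sk_buff *skb,
			       struct sk_buff_head *rtx_queue,
			       struct sk_buff_head *sacked_queue)
{
	__skb_unlink(skb, rtx_queue);
	__skb_queue_tail(sacked_queue, skb);
}

/* "If something bad happens" (the receiver reneges): every SACKed skb
 * becomes outstanding again.  Both queues are in sequence order, so an
 * ordered merge reconstructs the original retransmit queue. */
static void tcp_renege_merge(struct sk_buff_head *rtx_queue,
			     struct sk_buff_head *sacked_queue)
{
	struct sk_buff *skb, *pos;

	while ((skb = __skb_dequeue(sacked_queue)) != NULL) {
		skb_queue_walk(rtx_queue, pos) {
			if (after(TCP_SKB_CB(pos)->seq, TCP_SKB_CB(skb)->seq)) {
				__skb_queue_before(rtx_queue, pos, skb);
				goto next;
			}
		}
		__skb_queue_tail(rtx_queue, skb);
next:		;
	}
}

The merge walk is quadratic in the worst case, but it only runs when the receiver reneges, which is rare; the common per-ACK SACK path is where speed matters.
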
[...] it would simplify all of this scanning code trying to figure out
which holes to fill during recovery.
And for SACK scoreboard marking, the RB trie would become very nearly
unnecessary as far as I can tell.
I would not even entertain this kind of crazy idea unless I thought
the fundamental complexity simplification p[...]

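To see why the scanning code would collapse: with SACKed skbs held on a separate queue, everything still on the retransmit queue is a hole by definition. A hypothetical one-line illustration:

#include <linux/skbuff.h>

/* Hypothetical illustration: the retransmit queue now contains exactly
 * the un-SACKed segments, in sequence order, so "what do I retransmit
 * next?" is a peek at the head -- no scoreboard walk, no RB lookup. */
static struct sk_buff *tcp_next_hole(struct sk_buff_head *rtx_queue)
{
	return skb_peek(rtx_queue);	/* NULL when nothing is missing */
}
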
[...] the control over it is left to the user.
> Next, it would simplify all of this scanning code trying to figure out
> which holes to fill during recovery.
>
> And for SACK scoreboard marking, the RB trie would become very nearly
> unnecessary as far as I can tell.
I've been co[...]

> [...] and fixups in your patch is purely because of this.
Yeah, had just too much time while waiting for a person who never
arrived... :-) It would have covered the typical case quite well,
though for sure it was very intrusive.
> If we maintain SACK scoreboard information separately, outside of
> [...]