From: Herbert Xu <[EMAIL PROTECTED]>
Date: Thu, 10 May 2007 22:10:10 +1000
> On Thu, May 10, 2007 at 04:55:34AM -0700, David Miller wrote:
> >
> > > [NET_SCHED]: Rationalise return value of qdisc_restart
> >
> > Fair enough, patch applied :-)
>
> Sorry, reading Thomas's patch made me realise that I've just added
> that bug all over again.
On Thu, May 10, 2007 at 07:56:02PM +0530, Krishna Kumar2 wrote:
>
> Herbert,
>
> -	while (qdisc_restart(dev) < 0 && !netif_queue_stopped(dev))
> -		/* NOTHING */;
> +	do {
> +		if (!qdisc_restart(dev))
> +			break;
> +	} while (!netif_queue_stopped(dev));
I would guess that even with this change, qdisc_restart can get called with
no skbs in the queue, due to the softirq that was enabled (but by the time it
ran, the enqueue'd skbs could already have been dequeue'd). But it would
definitely reduce the calls by one on each iteration of qdisc_run (if one
packet then 50
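For orientation, here is a minimal sketch of the caller loop that the hunk
above produces (my reconstruction for illustration, assuming the 2.6.2x-era
__qdisc_run(); the exact committed code may differ slightly). It also shows
where the RUNNING bit gets cleared, which matters for the locking discussion
further down:

/* Sketch: qdisc_restart() returning 0 now means "queue empty or
 * throttled", so the loop stops without one more futile pass. */
void __qdisc_run(struct net_device *dev)
{
	do {
		if (!qdisc_restart(dev))
			break;
	} while (!netif_queue_stopped(dev));

	clear_bit(__LINK_STATE_QDISC_RUNNING, &dev->state);
}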
On Thu, 2007-10-05 at 23:52 +1000, Herbert Xu wrote:
> On Thu, May 10, 2007 at 09:18:04AM -0400, jamal wrote:
> >
> > I wonder how much performance improvement this gives now that the extra
> > incursion into qdisc_restart is avoided.
>
> Probably very little since the only way it can make a difference is if
> there is lots of contention. But if there is lots of
On Thu, May 10, 2007 at 09:18:04AM -0400, jamal wrote:
>
> I wonder how much performance improvement this gives now that the extra
> incursion into qdisc_restart is avoided.
Probably very little since the only way it can make a difference is if
there is lots of contention. But if there is lots of
On Thu, 2007-10-05 at 22:59 +1000, Herbert Xu wrote:
> jamal <[EMAIL PROTECTED]> wrote:
> This release qlock isn't in our source code :)
This is why I deferred this to you ;->
For completeness' sake, this is how it looks:
CPU0                    CPU1
|                       wait qlock
jamal <[EMAIL PROTECTED]> wrote:
>
> Ok, see if this makes sense:
>
> CPU0        CPU1 (holding qdisc running)
> .           |
> .           |
> .           + grab qlock
> .           |
> .           | deq pkt
> .           + re
Never mind, I was wrong. qdisc_run will be invoked by cpu0, i.e.:
CPU0                        CPU1
+ grab qlock                |
|                           + find that return code is 0
| enq pkt X                 + release qdisc running
|                           |
+ grab qdisc running        | ==> outta here
| call qdisc_run
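For readers following the diagrams, here is a rough sketch (my
reconstruction, modelled on the scheduler of that era, not a verbatim quote)
of the entry point being argued about: only the CPU that wins
test_and_set_bit() on the RUNNING flag becomes the dequeuer; everyone else
just enqueues under dev->queue_lock and leaves.

/* Sketch of qdisc_run(): the RUNNING bit serialises the dequeue loop,
 * dev->queue_lock serialises access to the queue itself. */
static inline void qdisc_run(struct net_device *dev)
{
	if (!netif_queue_stopped(dev) &&
	    !test_and_set_bit(__LINK_STATE_QDISC_RUNNING, &dev->state))
		__qdisc_run(dev);
}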
On Thu, 2007-10-05 at 04:55 -0700, David Miller wrote:
> From: Herbert Xu <[EMAIL PROTECTED]>
> Date: Thu, 10 May 2007 21:50:39 +1000
>
> > [NET_SCHED]: Rationalise return value of qdisc_restart
> >
> > The current return value scheme and associated comment was invented
> > back in the 20th century w
On Thu, May 10, 2007 at 04:55:34AM -0700, David Miller wrote:
>
> > [NET_SCHED]: Rationalise return value of qdisc_restart
>
> Fair enough, patch applied :-)
Sorry, reading Thomas's patch made me realise that I've just added
that bug all over again.
[NET_SCHED]: Reread dev->qdisc for NETDEV_TX_OK
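A hedged sketch of the problem that subject line refers to (my reading, with
a made-up helper name, not the committed diff): while dev->queue_lock is
dropped around the driver transmit, dev_deactivate() can swap dev->qdisc
(e.g. for noop_qdisc), so a qdisc pointer cached before the send is stale by
the time NETDEV_TX_OK comes back.

/* Illustration only: re-read dev->qdisc after a successful transmit
 * before reporting the remaining queue length. */
static inline int remaining_after_tx_ok(struct net_device *dev)
{
	struct Qdisc *q = dev->qdisc;	/* re-read; don't reuse the old q */

	return q->q.qlen;		/* 0 stops the __qdisc_run() loop */
}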
From: Herbert Xu <[EMAIL PROTECTED]>
Date: Thu, 10 May 2007 21:50:39 +1000
> On Thu, May 10, 2007 at 10:42:59AM +0530, Krishna Kumar2 wrote:
> >
> > But RUNNING is never relinquished till all the way back out to
> > __qdisc_run. Only the lock is dropped before calling xmit and
> > re-got after xm
On Thu, May 10, 2007 at 10:42:59AM +0530, Krishna Kumar2 wrote:
>
> But RUNNING is never relinquished till all the way back out to
> __qdisc_run. Only the lock is dropped before calling xmit and
> re-got after xmit is finished (all the while holding RUNNING).
> After xmit both queue_lock and RUNNI
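Sketched out, the locking shape described here looks roughly like the
following (a simplified illustration under my own function name; the real
qdisc_restart() also handles requeueing, LLTX and collision cases):

/* Simplified illustration: RUNNING stays set for the whole call; only
 * dev->queue_lock is dropped around the actual driver transmit. */
static int qdisc_restart_shape(struct net_device *dev)
{
	struct Qdisc *q = dev->qdisc;
	struct sk_buff *skb;

	/* Entered with dev->queue_lock held and RUNNING set. */
	skb = q->dequeue(q);
	if (skb == NULL)
		return 0;			/* queue empty: caller stops */

	spin_unlock(&dev->queue_lock);		/* drop only the queue lock */
	netif_tx_lock(dev);
	dev->hard_start_xmit(skb, dev);		/* driver transmit */
	netif_tx_unlock(dev);
	spin_lock(&dev->queue_lock);		/* re-take before returning */

	return q->q.qlen;			/* RUNNING held throughout */
}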
Hi Jamal,
J Hadi Salim <[EMAIL PROTECTED]> wrote on 05/09/2007 09:22:44 PM:
> > Since I am holding a lock in these two changed areas till I return
> > back to __qdisc_run (which clears the RUNNING bit) and then drop the
> > lock, there is no way packets can be on the queue while I falsely
> > ret
Krishna,
On Wed, 2007-09-05 at 20:17 +0530, Krishna Kumar2 wrote:
> Running concurrently is not possible, since everyone needs the queue_lock
> to add/delete. Did I misunderstand your comment?
>
I think so, more below where you explain it:
> The dev->queue_lock is held by both enqueue'r and dequeue'r
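As a rough illustration of that point (abridged by me from the general
transmit path pattern of that era, not a verbatim quote), the enqueue side
and the kick of the dequeue loop both sit under the same lock:

/* Abridged illustration: enqueue and qdisc_run() are both invoked under
 * dev->queue_lock, so adder and deleter cannot run concurrently. */
static void xmit_path_sketch(struct sk_buff *skb, struct net_device *dev)
{
	struct Qdisc *q;

	spin_lock_bh(&dev->queue_lock);
	q = dev->qdisc;
	q->enqueue(skb, q);	/* enqueuer holds queue_lock */
	qdisc_run(dev);		/* dequeue loop re-takes it as needed */
	spin_unlock_bh(&dev->queue_lock);
}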
Hi Jamal,
J Hadi Salim <[EMAIL PROTECTED]> wrote on 05/09/2007 06:26:43 PM:
> On Wed, 2007-09-05 at 01:12 -0700, David Miller wrote:
>
> > Something this evening is obviously making it impossible
> > for my brain to understand this function and your patch,
> > so I'm going to sleep on it and try
On Wed, 2007-09-05 at 01:12 -0700, David Miller wrote:
> Something this evening is obviously making it impossible
> for my brain to understand this function and your patch,
> so I'm going to sleep on it and try again tomorrow :-)
It is one of those areas that are hard to size up in a blink ;->
Gut
From: Krishna Kumar2 <[EMAIL PROTECTED]>
Date: Wed, 9 May 2007 12:53:05 +0530
> David Miller <[EMAIL PROTECTED]> wrote on 05/09/2007 12:06:30 PM:
>
> > Your change will make the kernel essentially hang instead
> > when this bug check is triggered.
>
> By returning -1, we end up freeing all the s
David Miller <[EMAIL PROTECTED]> wrote on 05/09/2007 12:06:30 PM:
> But this only makes sense for the second hunk you changed.
I think it should work fine for both.
> The first hunk is a bug case, an improper looping device
> decapsulation condition, and we do want to always return
> -1 in that
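For context, the "bug case" hunk being debated is the dead-loop check;
roughly (from memory of that era's code, so treat the details as
approximate), it drops the packet and keeps the old -1 return so the caller's
loop behaviour is unchanged:

	/* Approximate sketch of the dead-loop branch: this CPU already
	 * owns the device's TX lock, i.e. a looping decapsulation sent
	 * the packet back into the same device. Drop it. */
	if (dev->xmit_lock_owner == smp_processor_id()) {
		kfree_skb(skb);
		if (net_ratelimit())
			printk(KERN_CRIT
			       "Dead loop on netdevice %s, fix it urgently!\n",
			       dev->name);
		return -1;	/* old convention: caller loops again */
	}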
From: Krishna Kumar2 <[EMAIL PROTECTED]>
Date: Wed, 9 May 2007 10:05:57 +0530
> This change will not change existing behavior in case there are
> packets in the queue, it will return -1 each time as does the
> original code. But when there are no packets, the original
> qdisc_restart returns -1 an
Hi Dave,
This change does not alter existing behavior when there are packets in
the queue: it returns -1 each time, as the original code does. But when
there are no packets, the original qdisc_restart returns -1 and the caller
will unnecessarily call qdisc_restart, which branches to the bot
From: Krishna Kumar <[EMAIL PROTECTED]>
Date: Tue, 08 May 2007 13:01:32 +0530
> Optimize return value of qdisc_restart so that it is not called an
> extra time if there are no more packets on the queue to be sent out.
> It is also not required to check for gso_skb (though the lock is
> dropped) si
Optimize return value of qdisc_restart so that it is not called an
extra time if there are no more packets on the queue to be sent out.
It is also not required to check for gso_skb (even though the lock is
dropped), since another cpu which added it would have done a
netif_schedule.
Patch against net-2
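To illustrate the gso_skb argument above (my paraphrase with an invented
helper name, not code from the patch): the CPU that stashes a partially sent
GSO skb also calls netif_schedule(), which raises the TX softirq and
guarantees another qdisc_run() pass, so this CPU may safely report the queue
as empty without peeking at gso_skb.

/* Illustration only: stashing the GSO leftover always comes with a
 * netif_schedule(), so a later pass over the queue is guaranteed. */
static inline void stash_gso_and_kick(struct sk_buff *skb,
				      struct net_device *dev)
{
	dev->gso_skb = skb;	/* partially sent GSO skb kept for later */
	netif_schedule(dev);	/* raises TX softirq -> future qdisc_run() */
}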