On Tue, 2007-09-10 at 12:00 +0800, Herbert Xu wrote:
>
> OK, after waking up a bit more
Me too ;->
> What I'm worried about is whether we would see worse behaviour with
> drivers that do all their TX clean-up with the TX lock held.
Good point, Herbert.
When I looked around I only found one driver that …
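For concreteness, here is a minimal sketch (not from the thread) of the pattern Herbert is worried about, against the 2.6.23-era driver API; foo_priv and foo_clean_tx_ring() are hypothetical stand-ins:

#include <linux/netdevice.h>

/* Hypothetical non-LLTX driver state and clean-up helper. */
struct foo_priv { /* ... TX ring bookkeeping ... */ };
static void foo_clean_tx_ring(struct foo_priv *priv) { /* free done skbs */ }

/* 2.6.23-era ->poll(): TX completion runs with the TX lock held.  If
 * the qdisc dequeue path now waits on that lock instead of trylocking,
 * this clean-up spins for the full duration of a concurrent transmit. */
static int foo_poll(struct net_device *dev, int *budget)
{
        struct foo_priv *priv = netdev_priv(dev);

        netif_tx_lock(dev);             /* serialize against hard_start_xmit */
        foo_clean_tx_ring(priv);
        netif_tx_unlock(dev);

        return 0;                       /* all TX work done */
}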
On Wed, Sep 19, 2007 at 10:43:03PM -0400, jamal wrote:
>
> [NET_SCHED] explicit hold dev tx lock
>
> For N CPUs, with full-throttle traffic on all N CPUs, funneling traffic
> to the same ethernet device, the device's queue lock is contended by all
> N CPUs constantly. The TX lock is only contended by a max of 2 CPUs …
On Tue, 2007-09-25 at 19:28 -0700, David Miller wrote:
> I've applied this to net-2.6.24, although I want to study more deeply
> the implications of this change myself at some point :)
Sounds reasonable. I've done a lot of testing with my 2-3 NIC variants;
I've CC'ed whoever I thought was a stakeholder …
From: jamal <[EMAIL PROTECTED]>
Date: Wed, 19 Sep 2007 22:43:03 -0400
> [NET_SCHED] explicit hold dev tx lock
>
> For N CPUs, with full-throttle traffic on all N CPUs, funneling traffic
> to the same ethernet device, the device's queue lock is contended by all
> N CPUs constantly. The TX lock is only contended by a max of 2 CPUs …
OK, this is against net-2.6.24 from about an hour ago.
cheers,
jamal
[NET_SCHED] explicit hold dev tx lock
For N CPUs, with full-throttle traffic on all N CPUs, funneling traffic
to the same ethernet device, the device's queue lock is contended by all
N CPUs constantly. The TX lock is only contended by a max of 2 CPUs …
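The shape of the change, as a minimal sketch simplified from the 2.6.23-era net/sched/sch_generic.c (requeue and error handling omitted; this is not the literal patch):

/* Called with dev->queue_lock held. */
static inline int qdisc_restart(struct net_device *dev)
{
        struct Qdisc *q = dev->qdisc;
        struct sk_buff *skb;
        int ret;

        if ((skb = q->dequeue(q)) == NULL)
                return 0;

        /* Release the queue lock that all N CPUs fight over ... */
        spin_unlock(&dev->queue_lock);

        /* ... and hold the TX lock outright instead of trylocking:
         * at most two CPUs ever contend here, so waiting is cheap and
         * the dequeue work is never thrown away on a collision. */
        if (!(dev->features & NETIF_F_LLTX))
                netif_tx_lock(dev);

        ret = dev_hard_start_xmit(skb, dev);

        if (!(dev->features & NETIF_F_LLTX))
                netif_tx_unlock(dev);

        spin_lock(&dev->queue_lock);
        return ret;
}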
On Wed, 2007-09-19 at 09:09 -0700, David Miller wrote:
> Sure, along with a description as to why you want to make this
> change.
Will do. And if you feel that I should sit on it a little more, I can do
that too. The good news is it doesn't make things any worse than they
already are and in fact sho…
From: jamal <[EMAIL PROTECTED]>
Date: Wed, 19 Sep 2007 09:33:52 -0400
> On Mon, 2007-09-17 at 22:48 -0400, jamal wrote:
>
> > Nothing much has changed from what it was before.
> > The only difference is we let go of the queue lock before grabbing
> > the tx lock which never mattered for LLTX.
>
On Mon, 2007-09-17 at 22:48 -0400, jamal wrote:
> Nothing much has changed from what it was before.
> The only difference is we let go of the queue lock before grabbing
> the tx lock which never mattered for LLTX.
> Once we grab the tx lock it is the same logic, and so far it is working well
> on both …
On Mon, 2007-09-17 at 19:01 -0700, David Miller wrote:
> Hardirq should never try to grab the netif_tx_lock(), it is
> only for base and softirq context.
>
> Any hardirq context code taking that lock needs to be fixed.
> We could assert this if we don't already.
I snooped around; it looks pretty …
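One way such an assertion could look, a sketch only, shown against the 2.6.23-era netif_tx_lock():

static inline void netif_tx_lock(struct net_device *dev)
{
        WARN_ON_ONCE(in_irq());         /* hardirq callers need to be fixed */
        spin_lock(&dev->_xmit_lock);
        dev->xmit_lock_owner = smp_processor_id();
}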
From: jamal <[EMAIL PROTECTED]>
Date: Sun, 16 Sep 2007 16:41:24 -0400
> OK, maybe I am thinking too hard with that patch, so help me out :->
> When I looked at that code path as it is today, I felt the softirq could
> be interrupted on the same CPU it is running on while it already grabbed
> that tx lock …
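For reference, the same-CPU re-entry case is already caught in the 2.6.23-era dev_queue_xmit(); a simplified sketch of that guard (queue-stop checks omitted):

        int cpu = smp_processor_id();   /* safe: local BHs are off here */

        if (dev->xmit_lock_owner != cpu) {
                netif_tx_lock(dev);     /* records cpu as the lock owner */
                rc = dev->hard_start_xmit(skb, dev);
                netif_tx_unlock(dev);
        } else {
                /* Recursion: this CPU already holds the TX lock, so
                 * spinning on it would deadlock - drop instead. */
                kfree_skb(skb);
                rc = -ENETDOWN;
        }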
On Mon, Sep 17, 2007 at 09:03:58AM -0400, jamal ([EMAIL PROTECTED]) wrote:
> > Did I understand you right, that you replaced trylock with lock and
> > thus removed collision handling and got better results?
>
> Yes, a small one with the 4 CPUs and no irq binding. Note that in the
> test cases I ru…
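The collision handling being talked about, sketched from the 2.6.23-era qdisc_restart() dequeue path:

        /* Old behaviour: trylock, and on collision throw away the
         * dequeue work, requeue the skb and go contend for the
         * queue lock all over again. */
        if (!(dev->features & NETIF_F_LLTX) && !netif_tx_trylock(dev)) {
                /* Another CPU holds the driver TX lock. */
                return handle_dev_cpu_collision(skb, dev, q);
        }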
On Mon, 2007-17-09 at 14:27 +0400, Evgeniy Polyakov wrote:
>
> How many cpu collisions you are seeing?
On 4 CPUs which were always transmitting, very few - there was contention
in the range of 100 per million attempts.
Note: it doesn't matter that 4 CPUs were busy; this lock is contended at
max (f…
On Sun, Sep 16, 2007 at 05:10:00PM -0400, jamal ([EMAIL PROTECTED]) wrote:
> On Sun, 2007-16-09 at 16:52 -0400, jamal wrote:
>
> > What I should say is:
> > if I grabbed the lock explicitly without disabling IRQs, it won't be much
> > different from what is done today and should always work.
> > No?
On Sun, 2007-16-09 at 16:52 -0400, jamal wrote:
> What I should say is:
> if I grabbed the lock explicitly without disabling IRQs, it won't be much
> different from what is done today and should always work.
> No?
And to be more explicit, here's a patch using the macros from the previous
patch. So far te…
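The macros meant here are presumably of the HARD_TX_LOCK/HARD_TX_UNLOCK flavour already present in 2.6.23-era net/core/dev.c: a plain spin_lock underneath, no IRQ disabling, and a no-op for LLTX drivers that do their own locking:

#define HARD_TX_LOCK(dev, cpu) {                        \
        if ((dev->features & NETIF_F_LLTX) == 0) {      \
                netif_tx_lock(dev);                     \
        }                                               \
}

#define HARD_TX_UNLOCK(dev) {                           \
        if ((dev->features & NETIF_F_LLTX) == 0) {      \
                netif_tx_unlock(dev);                   \
        }                                               \
}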
On Sun, 2007-16-09 at 16:41 -0400, jamal wrote:
> Indeed.
> OK, maybe I am thinking too hard with that patch, so help me out :->
OK, that was probably too much of an explanation. What I should say is:
if I grabbed the lock explicitly without disabling IRQs, it won't be much
different from what is done today and should always work. No?
On Sun, 2007-09-16 at 12:31 -0700, David Miller wrote:
> From: jamal <[EMAIL PROTECTED]>
> Date: Sun, 16 Sep 2007 12:14:34 -0400
> > So - what side effects do people see in doing this? If none, I will
> > clean it up and submit.
>
> I tried this 4 years ago, it doesn't work. :-)
>
;->
[good r…
From: jamal <[EMAIL PROTECTED]>
Date: Sun, 16 Sep 2007 12:14:34 -0400
> Changes:
> I made changes to the code path as defined in the included patch
> and noticed a slight increase (2-3%) in performance with both e1000 and
> tg3, which was a relief because I thought the spinlock_irq (which is …
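The cost being worried about, sketched: an irq-disabling lock adds local IRQ off/on to every acquire/release and holds interrupts off for the whole critical section, whereas netif_tx_lock() is a plain spin_lock:

        spin_lock(&dev->_xmit_lock);    /* what netif_tx_lock() boils down to */
        /* ... transmit ... */
        spin_unlock(&dev->_xmit_lock);

        spin_lock_irq(&dev->_xmit_lock);   /* the heavier alternative: IRQs  */
        /* ... transmit ... */             /* off on this CPU throughout     */
        spin_unlock_irq(&dev->_xmit_lock);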
While trying to port my batching changes to net-2.6.24 from this morning,
I realized this is something I had wanted to probe people on.
Challenge:
For N CPUs, with full-throttle traffic on all N CPUs, funneling traffic
to the same ethernet device, the device's queue lock is contended by all
N CPUs constantly …
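Where that contention lives, as a simplified sketch of the 2.6.23-era dev_queue_xmit() enqueue path (RCU and stopped-queue details omitted):

        spin_lock(&dev->queue_lock);    /* all N transmitting CPUs pile up here */
        q = dev->qdisc;
        rc = q->enqueue(skb, q);        /* queue the packet ... */
        qdisc_run(dev);                 /* ... and try to drain the device */
        spin_unlock(&dev->queue_lock);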