Hi all,

Very preliminary testing with 20 procs on E1000 driver gives me following
result:

skbsz   Org BW    New BW      %       Org Demand   New Demand      %
32      315.98    347.48     9.97%      21090        20958        0.62%
96      833.67    882.92     5.91%       7939         9107      -14.71%

But this test ran for just 30 secs (just too short) and used netperf2
(not netperf4, which I am going to use later), over a single E1000 card
cross-cabled between two 2-CPU 2.8GHz xSeries systems running a 2.6.21.1
kernel.

I will have a more detailed report next week, especially once I run
netperf4. I am taking the next two days off, so will reply later on this
thread.

Thanks,

- KK

David Miller <[EMAIL PROTECTED]> wrote on 05/11/2007 03:36:05 AM:

> From: Gagan Arneja <[EMAIL PROTECTED]>
> Date: Thu, 10 May 2007 14:50:19 -0700
>
> > David Miller wrote:
> >
> > > If you drop the TX lock, the number of free slots can change
> > > as another cpu gets in there queuing packets.
> >
> > Can you ever have more than one thread inside the driver? Isn't
> > xmit_lock held while we're in there?
>
> There are restrictions wrt. when the xmit_lock and the
> queue lock can be held at the same time.
>
> The devil is definitely in the details if you try to
> implement this.  It definitely lends support for Eric D.'s
> assertion that this change will only add bugs and doing
> something simple like prefetches is probably a safer
> route to go down.
