On Wed, 2007-12-09 at 14:50 +0100, James Chapman wrote:
> By low traffic, I assume you mean a rate at which the NAPI driver
> doesn't stay in polled mode.
i.e.:
"one interrupt per packet per napi poll", which causes about 1-2 more I/Os
in comparison to the case where you didn't do NAPI.
> The problem is that that rate is getting higher all the time, as
> interface and CPU speeds increase.
From: "Mandeep Baines" <[EMAIL PROTECTED]>
Date: Wed, 12 Sep 2007 09:47:46 -0700
> Why would disabling IRQ's be expensive on non-MSI PCI devices?
> Wouldn't it just require a single MMIO write to clear the interrupt
> mask of the device?
MMIO's are the most expensive part of the whole interrupt s
On 9/12/07, Stephen Hemminger <[EMAIL PROTECTED]> wrote:
> But if you compare this to a non-NAPI driver the same softirq
> overhead happens. The problem is that for many older devices disabling IRQ's
> requires an expensive non-cached PCI access. Smarter, newer devices
> all use MSI which is pure edge
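For concreteness, here is what the single MMIO write Mandeep describes might look like. This is a sketch only: MY_IMR, MY_IMR_ALL and the my_* helper names are hypothetical, and whether such an access is cheap or painful depends on the device and bus, which is exactly what this sub-thread is debating.

#include <linux/io.h>

#define MY_IMR          0x10            /* hypothetical interrupt mask register */
#define MY_IMR_ALL      0xffffffffU     /* hypothetical "all sources" value */

/* Mask the device's interrupts with a single MMIO write. */
static void my_disable_irqs(void __iomem *regs)
{
        writel(0, regs + MY_IMR);
}

/* Unmask them again when leaving polled mode. */
static void my_enable_irqs(void __iomem *regs)
{
        writel(MY_IMR_ALL, regs + MY_IMR);
        /*
         * A readl(regs + MY_IMR) here would flush the posted write, but
         * an MMIO read can cost on the order of 1000x an L1 access, as
         * noted elsewhere in this thread, so drivers avoid it on the
         * fast path when they can.
         */
}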
David Miller wrote:
From: James Chapman <[EMAIL PROTECTED]>
Date: Thu, 6 Sep 2007 15:16:00 +0100
First, do we need to encourage consistency in NAPI poll drivers? A
survey of current NAPI drivers shows different strategies being used
in their poll(). Some such as r8169 do the napi_complete() if
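The survey text is cut off above; the strategy it refers to is the usual complete-when-under-budget convention. A minimal sketch of that pattern follows, with hypothetical names (struct my_priv, my_rx_ring_clean(), my_enable_irqs()) standing in for driver-specific pieces; it illustrates the convention, not any particular driver's code.

#include <linux/netdevice.h>

struct my_priv {
        struct napi_struct napi;
        void __iomem *regs;
};

int my_rx_ring_clean(struct my_priv *priv, int budget);  /* driver RX work */
void my_enable_irqs(void __iomem *regs);                 /* unmask device IRQs */

static int my_poll(struct napi_struct *napi, int budget)
{
        struct my_priv *priv = container_of(napi, struct my_priv, napi);
        int work_done = my_rx_ring_clean(priv, budget); /* at most 'budget' packets */

        if (work_done < budget) {
                /* Ring drained: leave polled mode, back to interrupts. */
                napi_complete(napi);
                my_enable_irqs(priv->regs);
        }

        /* Consuming the full budget keeps the device on the poll list. */
        return work_done;
}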
Stephen Hemminger wrote:
On Wed, 12 Sep 2007 14:50:01 +0100
James Chapman <[EMAIL PROTECTED]> wrote:
By low traffic, I assume you mean a rate at which the NAPI driver
doesn't stay in polled mode. The problem is that that rate is getting
higher all the time, as interface and CPU speeds increase.
From: James Chapman <[EMAIL PROTECTED]>
Date: Thu, 6 Sep 2007 15:16:00 +0100
> First, do we need to encourage consistency in NAPI poll drivers? A
> survey of current NAPI drivers shows different strategies being used
> in their poll(). Some such as r8169 do the napi_complete() if poll()
> does les
On Wed, 12 Sep 2007 14:50:01 +0100
James Chapman <[EMAIL PROTECTED]> wrote:
> jamal wrote:
> > On Wed, 2007-12-09 at 03:04 -0400, Bill Fink wrote:
> >> On Fri, 07 Sep 2007, jamal wrote:
> >
> >>> I am going to be the devil's advocate[1]:
> >> So let me be the angel's advocate. :-)
> >
> > I think this would make you God's advocate ;->
jamal wrote:
On Wed, 2007-12-09 at 03:04 -0400, Bill Fink wrote:
On Fri, 07 Sep 2007, jamal wrote:
I am going to be the devil's advocate[1]:
So let me be the angel's advocate. :-)
I think this would make you God's advocate ;->
(http://en.wikipedia.org/wiki/God%27s_advocate)
I view his re
On Wed, 2007-12-09 at 03:04 -0400, Bill Fink wrote:
> On Fri, 07 Sep 2007, jamal wrote:
> > I am going to be the devil's advocate[1]:
>
> So let me be the angel's advocate. :-)
I think this would make you God's advocate ;->
(http://en.wikipedia.org/wiki/God%27s_advocate)
> I view his results m
On Fri, 07 Sep 2007, jamal wrote:
> On Fri, 2007-07-09 at 10:31 +0100, James Chapman wrote:
> > Not really. I used 3-year-old, single CPU x86 boxes with e100
> > interfaces.
> > The idle poll change keeps them in polled mode. Without idle
> > poll, I get twice as many interrupts as packets, one for txdone and one
> > for rx.
On Mon, 2007-10-09 at 10:20 +0100, James Chapman wrote:
> jamal wrote:
>
> > If the problem i am trying to solve is "reduce cpu use at lower rate",
> > then this is not the right answer because your cpu use has gone up.
>
> The problem I'm trying to solve is "reduce the max interrupt rate from
>
On Sat, 2007-08-09 at 09:42 -0700, Mandeep Singh Baines wrote:
> Reading the "interrupt pending" register would require an MMIO read.
> MMIO reads are very expensive. In some systems the latency of an MMIO
> read can be 1000x that of an L1 cache access.
Indeed.
> However, work_done() doesn't hav
Mandeep Singh Baines wrote:
Why would using a timer to hold off the napi_complete() rather than
jiffy count limit the polls per packet to 2?
I was thinking a timer could be used in the way suggested in Jamal's
paper. The driver would do nothing (park) until the timer expires. So
there would be
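One shape the park-until-timer idea could take, as a sketch under assumptions: struct my_priv, my_rx_ring_clean(), my_enable_irqs() and the one-jiffy hold-off are hypothetical, and the setup_timer()/unsigned-long callback style matches kernels of this era rather than current ones.

#include <linux/netdevice.h>
#include <linux/timer.h>

struct my_priv {
        struct napi_struct napi;
        struct timer_list irq_holdoff;  /* delays re-enabling interrupts */
        void __iomem *regs;
};

int my_rx_ring_clean(struct my_priv *priv, int budget);  /* driver RX work */
void my_enable_irqs(void __iomem *regs);                 /* unmask device IRQs */

static void my_irq_holdoff_expired(unsigned long data)
{
        struct my_priv *priv = (struct my_priv *)data;

        /*
         * Hold-off over: unmask interrupts.  Packets that arrived while
         * parked raise an interrupt now, so nothing is lost, but their
         * latency grows by up to the hold-off period.
         */
        my_enable_irqs(priv->regs);
}

static int my_poll(struct napi_struct *napi, int budget)
{
        struct my_priv *priv = container_of(napi, struct my_priv, napi);
        int work_done = my_rx_ring_clean(priv, budget);

        if (work_done < budget) {
                napi_complete(napi);
                /* Park: defer the unmask instead of doing it here. */
                mod_timer(&priv->irq_holdoff, jiffies + 1);
        }
        return work_done;
}

static void my_napi_setup(struct my_priv *priv)
{
        setup_timer(&priv->irq_holdoff, my_irq_holdoff_expired,
                    (unsigned long)priv);
}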
Andi Kleen wrote:
James Chapman <[EMAIL PROTECTED]> writes:
On some platforms the precise timers (like ktime_get()) can be slow,
but often they are fast. It might make sense to use a shorter
constant time wait on those with fast timers at least. Right now this
cannot be known by portable code
Jason Lunz wrote:
I'd be particularly interested to see what happens to your latency when
other apps are hogging the cpu. I assume from your description that your
cpu is mostly free to schedule the niced softirqd for the device polling
duration, but this won't always be the case. If other tasks a
jamal wrote:
If the problem i am trying to solve is "reduce cpu use at lower rate",
then this is not the right answer because your cpu use has gone up.
The problem I'm trying to solve is "reduce the max interrupt rate from
NAPI drivers while minimizing latency". In modern systems, the interru
James Chapman ([EMAIL PROTECTED]) wrote:
> Hi Mandeep,
>
> Mandeep Singh Baines wrote:
>> Hi James,
>> I like the idea of staying in poll longer.
>> My comments are similar to what Jamal and Stephen have already
>> said.
>> A tunable (via sysfs) would be nice.
>> A timer might be preferred to jiffy polling.
James Chapman <[EMAIL PROTECTED]> writes:
>
> Clearly, keeping a device in polled mode for 1-2 jiffies
1-2 jiffies can be a long time on a HZ=100 kernel (20ms). A fast CPU
could do a lot of loops in this time, which would be a waste of power
and CPU time.
On some platforms the precise timers (like ktime_get()) can be slow,
but often they are fast.
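A sketch of that shorter constant-time wait, assuming a platform where ktime_get() is cheap; the 100us window, the struct my_priv fields and the my_* helpers are invented for illustration. A jiffy-based variant has the same shape with time_after(jiffies, ...) in place of the ktime arithmetic.

#include <linux/ktime.h>
#include <linux/netdevice.h>

#define MY_IDLE_POLL_NS (100 * 1000)    /* hypothetical 100us idle window */

struct my_priv {
        struct napi_struct napi;
        ktime_t last_rx;                /* when poll() last saw a packet */
        void __iomem *regs;
};

int my_rx_ring_clean(struct my_priv *priv, int budget);  /* driver RX work */
void my_enable_irqs(void __iomem *regs);                 /* unmask device IRQs */

static int my_poll(struct napi_struct *napi, int budget)
{
        struct my_priv *priv = container_of(napi, struct my_priv, napi);
        int work_done = my_rx_ring_clean(priv, budget);

        if (work_done)
                priv->last_rx = ktime_get();

        if (work_done < budget &&
            ktime_to_ns(ktime_sub(ktime_get(), priv->last_rx)) > MY_IDLE_POLL_NS) {
                /* Idle for long enough: leave polled mode. */
                napi_complete(napi);
                my_enable_irqs(priv->regs);
                return work_done;
        }

        /*
         * Still busy, or not idle for long enough: returning the full
         * budget without completing keeps the device on the poll list.
         */
        return budget;
}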
In gmane.linux.network, you wrote:
> But the CPU has done more work. The flood ping will always show
> increased CPU with these changes because the driver always stays in the
> NAPI poll list. For typical LAN traffic, the average CPU usage doesn't
> increase as much, though more measurements wou
On Fri, 2007-07-09 at 10:31 +0100, James Chapman wrote:
> Not really. I used 3-year-old, single CPU x86 boxes with e100
> interfaces.
> The idle poll change keeps them in polled mode. Without idle
> poll, I get twice as many interrupts as packets, one for txdone and one
> for rx. NAPI is contin
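For reference, the txdone work can be folded into the same poll() so that staying in polled mode suppresses both interrupt sources. This sketch reuses the same hypothetical struct my_priv and helpers as the sketches earlier in the thread (my_tx_ring_clean() is likewise made up); by convention the transmit cleanup does not count against the RX budget.

static int my_poll(struct napi_struct *napi, int budget)
{
        struct my_priv *priv = container_of(napi, struct my_priv, napi);
        int work_done;

        /* Reclaim completed TX descriptors; not counted against the budget. */
        my_tx_ring_clean(priv);

        /* Then receive up to 'budget' packets. */
        work_done = my_rx_ring_clean(priv, budget);

        if (work_done < budget) {
                napi_complete(napi);
                my_enable_irqs(priv->regs);     /* unmask both RX and TX-done sources */
        }
        return work_done;
}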
Hi Mandeep,
Mandeep Singh Baines wrote:
Hi James,
I like the idea of staying in poll longer.
My comments are similar to what Jamal and Stephen have already said.
A tunable (via sysfs) would be nice.
A timer might be preferred to jiffy polling. Jiffy polling will not increase
latency the way a timer would.
jamal wrote:
On Thu, 2007-06-09 at 15:16 +0100, James Chapman wrote:
>> First, do we need to encourage consistency in NAPI poll drivers?
not to stifle the discussion, but Stephen Hemminger is planning to
write a new howto; that would be a good time to bring up the topic. The
challenge is th
Hi James,
I like the idea of staying in poll longer.
My comments are similar to what Jamal and Stephen have already said.
A tunable (via sysfs) would be nice.
A timer might be preferred to jiffy polling. Jiffy polling will not increase
latency the way a timer would. However, jiffy polling will
On Thu, 2007-06-09 at 15:16 +0100, James Chapman wrote:
> First, do we need to encourage consistency in NAPI poll drivers? A
> survey of current NAPI drivers shows different strategies being used
> in their poll(). Some such as r8169 do the napi_complete() if poll()
> does less work than their
Stephen Hemminger wrote:
On Thu, 06 Sep 2007 16:30:30 +0100
James Chapman <[EMAIL PROTECTED]> wrote:
Stephen Hemminger wrote:
What about the latency that NAPI imposes? Right now there are certain
applications that
don't like NAPI because it adds several more microseconds, and this may make it
worse.
On Thu, 06 Sep 2007 16:30:30 +0100
James Chapman <[EMAIL PROTECTED]> wrote:
> Stephen Hemminger wrote:
>
> > What about the latency that NAPI imposes? Right now there are certain
> > applications that
> > don't like NAPI because it adds several more microseconds, and this may make
> > it worse.
Stephen Hemminger wrote:
What about the latency that NAPI imposes? Right now there are certain
applications that
don't like NAPI because it adds several more microseconds, and this may make it
worse.
Latency is something that I think this approach will actually improve,
at the expense of add
On Thu, 6 Sep 2007 15:16:00 +0100
James Chapman <[EMAIL PROTECTED]> wrote:
> This RFC suggests some possible improvements to NAPI in the area of
> minimizing interrupt rates. A possible scheme to reduce interrupt rate for
> the low packet rate / fast CPU case is described.
>
> First, do we need to encourage consistency in NAPI poll drivers?