On 06/29, Oleg Nesterov wrote:
>
> Suppose we have the tasklets T1 and T2, both are scheduled on the
> same CPU. T1 takes some spinlock LOCK.
>
> Currently it is possible to do
>
> spin_lock(LOCK);
> disable_tasklet(T2);
>
> With this patch, the above code hangs.
I am stupid. Yes, f
Hello!
> Also, create_workqueue() is very costly. The last 2 lines should be
> reverted.
Indeed.
The result improves from 3988 nanoseconds to 3975. :-)
Actually, the difference is within statistical variance,
which is about 20 ns.
Alexey
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
(the email address of Matthew Wilcox looks wrong, changed to [EMAIL PROTECTED])
On 06/29, Oleg Nesterov wrote:
>
> Steven, unless you have some objections, could you change tasklet_kill() ?
>
> > +static inline void tasklet_kill(struct tasklet_struct *t)
> > {
> > - return test_bit(TASKLET
On 06/29, Alexey Kuznetsov wrote:
>
> > If I understand correctly, this is because tasklet_head.list is protected
> > by local_irq_save(), and t could be scheduled on another CPU, so we just
> > can't steal it, yes?
>
> Yes. All that code is written to avoid synchronization as much as possible.
Hello!
> What changed?
softirq remains raised for such a tasklet. In old times softirq was
processed once per invocation, in schedule and on syscall exit, and this
was relatively harmless. Since softirqs are very weakly moderated, it now
results in severe cpu hogging.
> And can it be fixed?
With curre
Hello!
> If I understand correctly, this is because tasklet_head.list is protected
> by local_irq_save(), and t could be scheduled on another CPU, so we just
> can't steal it, yes?
Yes. All that code is written to avoid synchronization as much as possible.
> If we use workqueues, we can change t
On 06/29, Alexey Kuznetsov wrote:
>
> > Just look at the tasklet_disable() logic.
>
> Do not count this.
A slightly off-topic question, tasklet_kill(t) doesn't try to steal
t from tasklet_head.list if t was scheduled, but waits until t completes.
If I understand correctly, this is because taskle
Hello!
> again, there is no reason why this couldnt be done in a hardirq context.
> If a hardirq preempts another hardirq and the first hardirq already
> processes the 'softnet work', you dont do it from the second one but
> queue it with the first one. (into the already existing
> sd->complet
Hello!
> Not a very accurate measurement (jiffies that is).
Believe it or not, the measurement has nanosecond precision.
> Since the work queue *is* a thread, you are running a busy loop here. Even
> though you call schedule, this thread still may have quota available, and
> will not yield
On 06/29, Steven Rostedt wrote:
>
> On Fri, 29 Jun 2007, Alexey Kuznetsov wrote:
> >
> > static void measure_workqueue(void)
> > {
> > int i;
> > int cnt = 0;
> > unsigned long start;
> > DECLARE_WORK(test, do_test_wq, 0);
> > struct workqueue_struct * wq;
> >
> > start = j
Steven Rostedt wrote:
I had very little hope for this magic switch to get into mainline (maybe
get it into -mm). But the thing is that tasklets IMHO are overused.
As Ingo said, there are probably only 2 or 3 places in the kernel that
a switch to work queues couldn't solve.
Thi
> In the old days that was acceptable: you had not a gazillion of attempts
> but just a few, but for some time now (also long ago) it has become
> disastrous.
What changed? And can it be fixed?
Thanks,
Duncan.
* Alexey Kuznetsov <[EMAIL PROTECTED]> wrote:
> > as i said above (see the underlined sentence), hardirq contexts
> > already run just fine with hardirqs enabled.
>
> REENTRANCY PROTECTION! It does not matter _how_ they run, it matters
> what context they preempt and what that context has to ma
On Fri, 29 Jun 2007, Alexey Kuznetsov wrote:
> Hello!
>
> > I find the 4usecs cost on a P4 interesting and a bit too high - how did
> > you measure it?
>
> Simple and stupid:
Noted ;-)
> static void measure_tasklet0(void)
> {
> int i;
> int cnt = 0;
> DECLARE_TASKLET(test, do_t
Hello!
> I felt that three calls to tasklet_disable were better than a gazillion calls
> to
> spin_(un)lock.
It is not better.
Actually, it also has something equivalent to a spinlock inside:
it raises a flag and waits for completion of already running
tasklets (cf. spin_lock_bh). And if taskl
Hello!
> > The difference between softirqs and hardirqs lays not in their
> > "heavyness". It is in reentrancy protection, which has to be done with
> > local_irq_disable(), unless networking is not isolated from hardirqs.
>
> i know that pretty well ;)
You forgot about this again in the next
Hi,
> > Just look at the tasklet_disable() logic.
>
> Do not count this.
>
> Done this way because nobody needed that thing, except for _one_ place
> in keyboard/console driver, which was very difficult to fix that time,
> when vt code was utterly messy and not smp safe at all.
>
> start_bh_ato
* Alexey Kuznetsov <[EMAIL PROTECTED]> wrote:
> > also, the "be afraid of the hardirq or the process context" mantra
> > is overblown as well. If something is too heavy for a hardirq, _it's
> > too heavy for a tasklet too_. Most hardirqs are (or should be)
^^^
Hello!
> I find the 4usecs cost on a P4 interesting and a bit too high - how did
> you measure it?
Simple and stupid:
int flag;
static void do_test(unsigned long dummy)
{
flag = 1;
}
static void do_test_wq(void *dummy)
{
flag = 1;
}
static void measure_tasklet0(void)
{
--
>
> But I guess there is a reason it is still marked experimental...
>
> iq81340mc:/data_dir# ./md0_verify.sh
> kernel BUG at mm/page_alloc.c:363!
Well, at least this uncovered something :-)
I'll look into this too.
-- Steve
On Thu, 28 Jun 2007, Dan Williams wrote:
> > CONFIG_PREEMPT?
> >
> Everything thus far has been CONFIG_PREEMPT=n (the default for this platform).
>
> With CONFIG_PREEMPT=y the resync is back in the 50MB/s range.
So with upping the prio for the work queue you got back your performance?
>
> [iop-
On 6/28/07, Dan Williams <[EMAIL PROTECTED]> wrote:
Everything thus far has been CONFIG_PREEMPT=n (the default for this platform).
With CONFIG_PREEMPT=y the resync is back in the 50MB/s range.
[iop-adma: hi-prio workqueue, CONFIG_PREEMPT=y]
iq81340mc:~# cat /proc/mdstat
Personalities : [raid0]
On 6/28/07, Steven Rostedt <[EMAIL PROTECTED]> wrote:
On Thu, 28 Jun 2007, Dan Williams wrote:
> >
> Unfortunately setting the thread to real time priority makes
> throughput slightly worse. Instead of floating around 35MB/s the
> resync speed is stuck around 30MB/s:
That is really strange. If
On Thu, 28 Jun 2007, Dan Williams wrote:
> >
> Unfortunately setting the thread to real time priority makes
> throughput slightly worse. Instead of floating around 35MB/s the
> resync speed is stuck around 30MB/s:
That is really strange. If you raise the prio of the work queue it
gets worse? So
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> On Thu, 28 Jun 2007 18:00:01 +0200 Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > with 1.2 usecs and 10,000
> > irqs/sec the cost is 1.2 msecs/sec, or 0.1%.
>
> off-by-10 error.
yeah, indeed - 12 msecs and 1.2% :-/
Ingo
On Thu, 28 Jun 2007 18:00:01 +0200 Ingo Molnar <[EMAIL PROTECTED]> wrote:
> with 1.2 usecs and 10,000
> irqs/sec the cost is 1.2 msecs/sec, or 0.1%.
off-by-10 error.
On 6/28/07, Steven Rostedt <[EMAIL PROTECTED]> wrote:
Hi Dan,
On Mon, 25 Jun 2007, Dan Williams wrote:
> Yes you are right, ARM does not flush L1 when prev==next in switch_mm.
>
> > Perhaps something else is at fault here.
> >
> I'll try and dig a bit deeper...
BTW:
static int __init iop_ad
Ingo Molnar wrote:
But it was not me who claimed that 'workqueues are slow'.
The claim was: slower than tasklets.
choice. I am just wondering out loud whether this particular tool, in
its current usage pattern, makes much technological sense. My claim is:
it could very well be that it does
Ingo Molnar wrote:
my argument was: workqueues are more scalable than tasklets in general.
Here is my argument: that is totally irrelevant to $subject, when it
comes to dealing with managing existing [network driver] behavior and
performance.
My overall objection is the attempt to replace
On 06/28, Steven Rostedt wrote:
>
> I also don't see any nice API to have the priority set for a workqueue
> thread from within the kernel. Looks like one needs to be added,
> otherwise, I need to have the wrapper dig into the workqueue structs to
> find the thread that handles the workqueue.
It
* Alexey Kuznetsov <[EMAIL PROTECTED]> wrote:
> > the context-switch argument i'll believe if i see numbers. You'll
> > probably need in excess of tens of thousands of irqs/sec to even be
> > able to measure its overhead. (workqueues are driven by nice kernel
> > threads so there's no TLB over
On Thu, 28 Jun 2007, Alexey Kuznetsov wrote:
> > the context-switch argument i'll believe if i see numbers. You'll
> > probably need in excess of tens of thousands of irqs/sec to even be able
> > to measure its overhead. (workqueues are driven by nice kernel threads
> > so there's no TLB overhead,
Alexey Kuznetsov wrote:
Hello!
the context-switch argument i'll believe if i see numbers. You'll
probably need in excess of tens of thousands of irqs/sec to even be able
to measure its overhead. (workqueues are driven by nice kernel threads
so there's no TLB overhead, etc.)
It was authors o
Ingo Molnar wrote:
* Jeff Garzik <[EMAIL PROTECTED]> wrote:
Tasklets fill a niche not filled by either workqueues (slower,
requiring context switches, and possibly much latency if all wq's
processes are active) [...]
... workqueues are also possibly much more scalable (percpu workqueues
are
Hello!
> the context-switch argument i'll believe if i see numbers. You'll
> probably need in excess of tens of thousands of irqs/sec to even be able
> to measure its overhead. (workqueues are driven by nice kernel threads
> so there's no TLB overhead, etc.)
It was authors of the patch who wer
Hi Dan,
On Mon, 25 Jun 2007, Dan Williams wrote:
> Yes you are right, ARM does not flush L1 when prev==next in switch_mm.
>
> > Perhaps something else is at fault here.
> >
> I'll try and dig a bit deeper...
BTW:
static int __init iop_adma_init (void)
{
+ iop_adma_workqueue = create_wo
* Jeff Garzik <[EMAIL PROTECTED]> wrote:
> Tasklets fill a niche not filled by either workqueues (slower,
> requiring context switches, and possibly much latency if all wq's
> processes are active) [...]
... workqueues are also possibly much more scalable (percpu workqueues
are easy without c
Ingo Molnar wrote:
so how about the following, different approach: anyone who has a tasklet
in any performance-sensitive codepath, please yell now. We'll also do a
proactive search for such places. We can convert those places to
softirqs, or move them back into hardirq context. Once this is don
Ingo Molnar wrote:
so how about the following, different approach: anyone who has a tasklet
in any performance-sensitive codepath, please yell now.
ALSA uses quite a few tasklets in the framework and in several
drivers. Since we
care very much about low latency, many places use tasklet_hi_*
At Tue, 26 Jun 2007 15:03:23 +0200,
Clemens Ladisch wrote:
>
> Ingo Molnar wrote:
> > so how about the following, different approach: anyone who has a tasklet
> > in any performance-sensitive codepath, please yell now.
>
> ALSA uses quite a few tasklets in the framework and in several
> drivers
On 6/25/07, Steven Rostedt <[EMAIL PROTECTED]> wrote:
On Mon, 2007-06-25 at 18:46 -0700, Dan Williams wrote:
>
> Context switches on this platform flush the L1 cache so bouncing
> between a workqueue and the MD thread is painful.
Why are context switches between two kernel threads flushing the L1
On Mon, 2007-06-25 at 18:46 -0700, Dan Williams wrote:
>
> Context switches on this platform flush the L1 cache so bouncing
> between a workqueue and the MD thread is painful.
Why are context switches between two kernel threads flushing the L1
cache? Is this a flaw in the ARM arch? I would think
so how about the following, different approach: anyone who has a tasklet
in any performance-sensitive codepath, please yell now. We'll also do a
proactive search for such places. We can convert those places to
softirqs, or move them back into hardirq context. Once this is done -
and i doubt it wil
On Mon, 2007-06-25 at 18:00 -0600, Jonathan Corbet wrote:
> A couple of days ago I said:
>
> > The cafe_ccic (OLPC) camera driver uses a tasklet to move frames out of
> > the DMA buffers in the streaming I/O path
> >
> > Obviously some testing is called for here. I will make an attempt to do
On Tue, 2007-06-26 at 01:36 +0200, Stefan Richter wrote:
> I can't speak for Kristian, nor do I have test equipment for isochronous
> applications, but I know that there are people out there which do data
> acquisition on as many FireWire buses as they can stuff boards into
> their boxes. There a
A couple of days ago I said:
> The cafe_ccic (OLPC) camera driver uses a tasklet to move frames out of
> the DMA buffers in the streaming I/O path
>
> Obviously some testing is called for here. I will make an attempt to do
> that testing
I've done that testing - I have an OLPC B3 unit runni
Ingo Molnar wrote:
> regarding workqueues - would it be possible for you to test Steve's
> patch and get us performance numbers? Do you have any test with tons of
> tasklet activity that would definitely show the performance impact of
> workqueues?
I can't speak for Kristian, nor do I have test
* Kristian Høgsberg <[EMAIL PROTECTED]> wrote:
> OK, here's a yell. I'm using tasklets in the new firewire stack for
> all interrupt handling. All my interrupt handler does is read out the
> event mask and schedule the appropriate tasklets. Most of these
> tasklets typically just end up sch
On Mon, 2007-06-25 at 16:31 -0400, Steven Rostedt wrote:
> On Mon, 2007-06-25 at 16:07 -0400, Kristian Høgsberg wrote:
>
> > > Maybe we should be looking at something like GENERIC_SOFTIRQ to run
> > > functions that a driver could add. But they would run only on the CPU
> > > that scheduled them,
On Mon, 2007-06-25 at 22:50 +0200, Tilman Schmidt wrote:
> Ok, I'm reassured. I'll look into converting these to a work queue
> then, although I can't promise when I'll get around to it.
>
> In fact, if these timing requirements are so easy to meet, perhaps
> it doesn't even need its own work que
Am 25.06.2007 19:06 schrieb Steven Rostedt:
> On Mon, 2007-06-25 at 18:50 +0200, Tilman Schmidt wrote:
>
>> The Siemens Gigaset ISDN base driver uses tasklets in its isochronous
>> data paths. [...]
>> Does that qualify as performance sensitive for the purpose of this
>> discussion?
>
> Actually,
On Mon, 2007-06-25 at 16:07 -0400, Kristian Høgsberg wrote:
> > Maybe we should be looking at something like GENERIC_SOFTIRQ to run
> > functions that a driver could add. But they would run only on the CPU
> > that scheduled them, and do not guarantee non-reentrant as tasklets do
> > today.
>
> S
On Mon, 2007-06-25 at 15:11 -0400, Steven Rostedt wrote:
> On Mon, 2007-06-25 at 14:48 -0400, Kristian Høgsberg wrote:
> ...
> > However, I don't really understand how you can discuss a wholesale
> > replacing of tasklets with workqueues, given the very different
> > execution sematics of the two m
On Mon, 25 Jun 2007 18:50:03 +0200
Tilman Schmidt <[EMAIL PROTECTED]> wrote:
> Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > so how about the following, different approach: anyone who has a tasklet
> > in any performance-sensitive codepath, please yell now.
Getting rid of tasklets may seem like a
On Mon, 2007-06-25 at 14:48 -0400, Kristian Høgsberg wrote:
> OK, here's a yell. I'm using tasklets in the new firewire stack for all
Thanks for speaking up!
> interrupt handling. All my interrupt handler does is read out the event
> mask and schedule the appropriate tasklets. Most of these t
On Fri, 2007-06-22 at 23:59 +0200, Ingo Molnar wrote:
> so how about the following, different approach: anyone who has a tasklet
> in any performance-sensitive codepath, please yell now. We'll also do a
> proactive search for such places. We can convert those places to
> softirqs, or move them b
On Mon, 2007-06-25 at 18:50 +0200, Tilman Schmidt wrote:
> The Siemens Gigaset ISDN base driver uses tasklets in its isochronous
> data paths. These will be scheduled for each completion of an isochronous
> URB, or every 8 msec for each of the four isochronous pipes if both B
> channels are connec
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> so how about the following, different approach: anyone who has a tasklet
> in any performance-sensitive codepath, please yell now.
The Siemens Gigaset ISDN base driver uses tasklets in its isochronous
data paths. These will be scheduled for each completion
On Sun, 24 Jun 2007, Jonathan Corbet wrote:
>
> The cafe_ccic (OLPC) camera driver uses a tasklet to move frames out of
> the DMA buffers in the streaming I/O path. With this change in place,
> I'd worry that the possibility of dropping frames would increase,
> especially considering that (1) thi
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> so how about the following, different approach: anyone who has a tasklet
> in any performance-sensitive codepath, please yell now.
The cafe_ccic (OLPC) camera driver uses a tasklet to move frames out of
the DMA buffers in the streaming I/O path. With thi
Most of the tasklet uses are in rarely used or arcane drivers - in fact
none of my 10 test-boxes utilizes _any_ tasklet in any way that could
even get close to mattering to performance. In other words: i just
cannot test this, nor do i think that others will really test this. I.e.
if we dont appr
On Fri, 22 Jun 2007 00:00:14 -0400
Steven Rostedt <[EMAIL PROTECTED]> wrote:
>
> There's a very nice paper by Matthew Wilcox that describes Softirqs,
> Tasklets, Bottom Halves, Task Queues, Work Queues and Timers[1].
> In the paper it describes the history of these items. Softirqs and
> tasklet
On Sat, 2007-06-23 at 00:44 +0200, Ingo Molnar wrote:
> * Daniel Walker <[EMAIL PROTECTED]> wrote:
>
> > > remember, these changes have been in use in -rt for a while. there's
> > > reason to believe that they aren't going to cause drastic problems.
> >
> > Since I've been working with -rt (~2 y
On Fri, 2007-06-22 at 23:59 +0200, Ingo Molnar wrote:
> * Linus Torvalds <[EMAIL PROTECTED]> wrote:
>
> > If the numbers say that there is no performance difference (or even
> > better: that the new code performs better or fixes some latency issue
> > or whatever), I'll be very happy. But if the
> As a second example, msr_seek() in arch/i386/kernel/msr.c... is the
> inode semaphore enough or not? Who understands the implications well
> enough to say?
lseek is one of the nasty remaining cases. tty is another real horror
that needs further work but we slowly get closer - drivers/char is al
* Daniel Walker <[EMAIL PROTECTED]> wrote:
> > remember, these changes have been in use in -rt for a while. there's
> > reason to believe that they aren't going to cause drastic problems.
>
> Since I've been working with -rt (~2 years now I think) it's clear
> that the number of testers of the
> > [ and on a similar notion, i still havent given up on seeing all BKL
> > use gone from the kernel. I expect it to happen any decade now ;-) ]
> 2.6.21 had 476 lock_kernel() calls. 2.6.22-git has 473 lock_kernel()
> calls currently. With that kind of flux we'll see the BKL gone in about
On Fri, 2007-06-22 at 15:09 -0700, [EMAIL PROTECTED] wrote:
> On Fri, 22 Jun 2007, Daniel Walker wrote:
>
> >
> > On Fri, 2007-06-22 at 22:40 +0200, Ingo Molnar wrote:
> >
> >>
> >> - tasklets have certain fairness limitations. (they are executed in
> >>softirq context and thus preempt every
* Daniel Walker <[EMAIL PROTECTED]> wrote:
> > - tasklets have certain fairness limitations. (they are executed in
> >softirq context and thus preempt everything, even if there is
> >some potentially more important, high-priority task waiting to be
> >executed.)
>
> Since -rt has
On Fri, 22 Jun 2007, Daniel Walker wrote:
On Fri, 2007-06-22 at 22:40 +0200, Ingo Molnar wrote:
- tasklets have certain fairness limitations. (they are executed in
softirq context and thus preempt everything, even if there is some
potentially more important, high-priority task waiting
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> [ and on a similar notion, i still havent given up on seeing all BKL
> use gone from the kernel. I expect it to happen any decade now ;-) ]
2.6.21 had 476 lock_kernel() calls. 2.6.22-git has 473 lock_kernel()
calls currently. With that kind of flux
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> If the numbers say that there is no performance difference (or even
> better: that the new code performs better or fixes some latency issue
> or whatever), I'll be very happy. But if the numbers say that it's
> worse, no amount of cleanliness reall
On Fri, 2007-06-22 at 22:40 +0200, Ingo Molnar wrote:
>
> - tasklets have certain fairness limitations. (they are executed in
>softirq context and thus preempt everything, even if there is some
>potentially more important, high-priority task waiting to be
>executed.)
Since -rt has b
On Fri, 22 Jun 2007, Ingo Molnar wrote:
>
> * Linus Torvalds <[EMAIL PROTECTED]> wrote:
>
> > Whether we actually then want to do 6 is another matter. I think we'd
> > need some measuring and discussion about that.
>
> basically tasklets have a number of limitations:
I'm not disputing that t
On Fri, 2007-06-22 at 22:00 +0100, Christoph Hellwig wrote:
> Note that we also have a lot of inefficiency in the way we do deferred
> processing. Think of a setup where you run an XFS filesystem over
> a megaraid adapter.
>
> (1) we get a real hardirq, which just clears the interrupt and th
* Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> Note that we also have a lot of inefficiency in the way we do deferred
> processing. Think of a setup where you run an XFS filesystem over
> a megaraid adapter.
>
> (1) we get a real hardirq, which just clears the interrupt and then
>
On Fri, Jun 22, 2007 at 10:40:58PM +0200, Ingo Molnar wrote:
> when it comes to 'deferred processing', we've basically got two 'prime'
> choices for deferred processing:
>
> - if it's high-performance then it goes into a softirq.
>
> - if performance is not important, or robustness and flexibi
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> Whether we actually then want to do 6 is another matter. I think we'd
> need some measuring and discussion about that.
basically tasklets have a number of limitations:
- tasklets have certain latency limitations over real tasks. (for
example th
On Fri, Jun 22, 2007 at 10:16:47AM -0700, Linus Torvalds wrote:
>
>
> On Fri, 22 Jun 2007, Steven Rostedt wrote:
> >
> > I just want to state that tasklets served their time well. But it's time
> > to give them an honorable discharge. So lets get rid of tasklets and
> > given them a standing sa
On Fri, 2007-06-22 at 10:16 -0700, Linus Torvalds wrote:
>
> So patches 1-4 all look fine to me. In fact, 5 looks ok too.
Great!
> Leaving patch 6 as a "only makes sense after we actually have some numbers
> about it", and patch 5 is a "could go either way" as far as I'm concerned
> (ie I cou
On Fri, 22 Jun 2007, Steven Rostedt wrote:
>
> I just want to state that tasklets served their time well. But it's time
> to give them an honorable discharge. So lets get rid of tasklets and
> given them a standing salute as they leave :-)
Well, independently of whether we actually discharge t
>
> This is stated on the assumption that pretty much all performance
> critical tasklets have been removed (although Christoph just mentioned
> megaraid_sas, but after I made this statement).
>
> We've been running tasklets as threads in the -rt kernel for some time
> now, and that hasn't bothe
On Fri, 2007-06-22 at 07:25 -0700, Arjan van de Ven wrote:
> > For the most part, tasklets today are not used for time critical functions.
> > Running tasklets in thread context is not harmful to performance of
> > the overall system.
>
> That is a bold statement...
>
> > But running them in interru
On Fri, 2007-06-22 at 15:12 +0200, Ingo Molnar wrote:
> * Steven Rostedt <[EMAIL PROTECTED]> wrote:
>
> yes, the softirq based tasklet implementation with workqueue based
> implementation, but the tasklet API itself should still stay.
done.
>
> ok, enough idle talking, lets see the next round
> For the most part, tasklets today are not used for time critical functions.
> Running tasklets in thread context is not harmful to performance of
> the overall system.
That is a bold statement...
> But running them in interrupt context is, since
> they increase the overall latency for high priori
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> > there are 120 tasklet_init()s in the tree and 224
> > tasklet_schedule()s.
>
> couple of hours?
hm, what would you replace it with? Another new API? Or convert to
workqueues, manually adding a local_bh_disable()/enable() pair around the
worker fu
> On Fri, 22 Jun 2007 15:26:22 +0200 Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> > I do think that would be a better approach. Apart from the
> > cleanliness issue, the driver-by-driver conversion would make it much
> > easier to hunt down any regr
On Fri, 2007-06-22 at 06:13 -0700, Andrew Morton wrote:
> > On Fri, 22 Jun 2007 08:58:44 -0400 Steven Rostedt <[EMAIL PROTECTED]> wrote:
> > On Fri, 2007-06-22 at 14:38 +0200, Ingo Molnar wrote:
> > > * Steven Rostedt <[EMAIL PROTECTED]> wrote:
> > > > Honestly, I highly doubted that this would ma
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> I do think that would be a better approach. Apart from the
> cleanliness issue, the driver-by-driver conversion would make it much
> easier to hunt down any regresions or various funnineses.
there are 120 tasklet_init()s in the tree and 224 tasklet
> On Fri, 22 Jun 2007 08:58:44 -0400 Steven Rostedt <[EMAIL PROTECTED]> wrote:
> On Fri, 2007-06-22 at 14:38 +0200, Ingo Molnar wrote:
> > * Steven Rostedt <[EMAIL PROTECTED]> wrote:
> >
> > > > And this is something that might be fine for benchmarking, but not
> > > > something
> > > > we should
* Steven Rostedt <[EMAIL PROTECTED]> wrote:
> > that's where it belongs - but it first needs the cleanups suggested
> > by Christoph.
>
> I had the impression that he didn't want it in, but instead wanted
> each driver to be changed separately.
that can be done too in a later stage. We cannot
On Fri, 2007-06-22 at 14:38 +0200, Ingo Molnar wrote:
> * Steven Rostedt <[EMAIL PROTECTED]> wrote:
>
> > > And this is something that might be fine for benchmarking, but not
> > > something
> > > we should put in. Keeping two wildly different implementation of core
> > > functionality with very
* Steven Rostedt <[EMAIL PROTECTED]> wrote:
> > And this is something that might be fine for benchmarking, but not something
> > we should put in. Keeping two wildly different implementation of core
> > functionality with very different behaviour around is quite bad. Better
> > kill tasklets on
Christoph,
Thanks for taking the time to look at my patches!
On Fri, 2007-06-22 at 08:09 +0100, Christoph Hellwig wrote:
> > I've developed this way to replace all tasklets with work queues without
> > having to change all the drivers that use them. I created an API that
> > uses the tasklet AP
* Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> > which actual in-kernel tasklets do you have in mind? I'm not aware
> > of any in performance critical code. (now that both the RCU and the
> > sched tasklet has been fixed.)
>
> the one in megaraid_sas for example is in a performance-critical
* Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> I think we probably want some numbers, at least for tasklets used in
> potentially performance critical code.
which actual in-kernel tasklets do you have in mind? I'm not aware of
any in performance critical code. (now that both the RCU and the
On Fri, Jun 22, 2007 at 09:51:35AM +0200, Ingo Molnar wrote:
>
> * Christoph Hellwig <[EMAIL PROTECTED]> wrote:
>
> > I think we probably want some numbers, at least for tasklets used in
> > potentially performance critical code.
>
> which actual in-kernel tasklets do you have in mind? I'm not
On Fri, Jun 22, 2007 at 12:00:14AM -0400, Steven Rostedt wrote:
> For the most part, tasklets today are not used for time critical functions.
> Running tasklets in thread context is not harmful to performance of
> the overall system. But running them in interrupt context is, since
> they increase the o