Hi Daniel,
See inline...
>>> On Mon, Feb 4, 2008 at 9:51 PM, in message
<[EMAIL PROTECTED]>, Daniel Walker
<[EMAIL PROTECTED]> wrote:
> On Mon, Feb 04, 2008 at 03:35:13PM -0800, Max Krasnyanskiy wrote:
>> This is just an FYI. As part of the "Isolated CPU extensions" thread Daniel
> suggest f
:1 [0001], irqs_disabled():1
Hi Daniel,
Can you try this patch and let me know if it fixes your problem?
---
use rcu for root-domain kfree
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
diff --git a/kernel/sched.c b/kernel/sched.c
index e6ad493..77e86c1 100644
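The diff body is truncated in this excerpt. A minimal sketch of the general
RCU-deferred-free pattern the subject line describes (field and function
names are assumed, not taken from the actual patch):

struct root_domain {
	/* ... existing members ... */
	struct rcu_head rcu;	/* assumed: added so the free can be deferred */
};

static void free_rootdomain(struct rcu_head *rcu)
{
	struct root_domain *rd = container_of(rcu, struct root_domain, rcu);

	kfree(rd);
}

	/* at the old kfree(old_rd) site: defer until readers are done */
	call_rcu(&old_rd->rcu, free_rootdomain);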
>>> On Mon, Feb 4, 2008 at 9:51 PM, in message
<[EMAIL PROTECTED]>, Daniel Walker
<[EMAIL PROTECTED]> wrote:
> On Mon, Feb 04, 2008 at 03:35:13PM -0800, Max Krasnyanskiy wrote:
[snip]
>>
>> Also the first thing I tried was to bring CPU1 off-line. That's the fastest
>> way to get irqs, soft-irqs
>>> On Tue, Feb 5, 2008 at 11:59 AM, in message
<[EMAIL PROTECTED]>, Daniel Walker
<[EMAIL PROTECTED]> wrote:
> On Mon, Feb 04, 2008 at 10:02:12PM -0700, Gregory Haskins wrote:
>> >>> On Mon, Feb 4, 2008 at 9:51 PM, in message
>> <[EMAIL PROTECT
; protection of the run queue spinlock .. So you could just move the kfree
> down below the spin_unlock_irqrestore() ..
Here is a new version to address your observation:
---
we cannot kfree while in_atomic()
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
diff --git
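This diff is likewise truncated. The reordering Daniel suggested looks
roughly like the following (a sketch, assuming the rq_attach_root() shape
under discussion):

	spin_lock_irqsave(&rq->lock, flags);
	old_rd = rq->rd;
	rq->rd = rd;
	/* ... bookkeeping that genuinely needs the lock ... */
	spin_unlock_irqrestore(&rq->lock, flags);

	/* moved below the unlock: we cannot kfree() while in_atomic() */
	if (old_rd && atomic_dec_and_test(&old_rd->refcount))
		kfree(old_rd);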
ll be used later in the series.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
include/linux/init_task.h |1 +
include/linux/sched.h |8 +
kernel/fork.c |1 +
kernel/sched.c | 70 -
kernel/sche
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/kthread.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index dcfe724..b193b47 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -170,6 +170,7 @@ void kthrea
>>> On Tue, Feb 12, 2008 at 2:22 PM, in message
<[EMAIL PROTECTED]>, Steven Rostedt
<[EMAIL PROTECTED]> wrote:
> On Tue, 12 Feb 2008, Gregory Haskins wrote:
>
>> This patch adds a new critical-section primitive pair:
>>
>> "migration_disable()
Hi Ingo, Steven,
I had been working on some ideas related to saving context switches in the
bottom-half mechanisms on -rt. So far, the ideas have been a flop, but a few
peripheral technologies did come out of it. This series is one such
idea that I thought might have some merit on its own. The
>>> On Thu, Feb 14, 2008 at 10:57 AM, in message
<[EMAIL PROTECTED]>, Peter Zijlstra <[EMAIL PROTECTED]>
wrote:
> Hi,
>
> Here the current patches that rework load_balance_monitor.
>
> The main reason for doing this is to eliminate the wakeups the thing
> generates,
> esp. on an idle system. Th
>>> On Thu, Feb 14, 2008 at 1:15 PM, in message
<[EMAIL PROTECTED]>, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Peter wrote of:
>> the lack of rd->load_balance.
>
> Could you explain to me a bit what that means?
>
> Does this mean that the existing code would, by default (default being
> a singl
lstra <[EMAIL PROTECTED]>
CC: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 106
kernel/sched_fair.c |2
2 files changed, 59 insertions(+), 49 deletions(-)
Index: linux-2
Peter Zijlstra wrote:
On Fri, 2008-02-15 at 11:46 -0500, Gregory Haskins wrote:
but perhaps you can convince me that it is not needed?
(i.e., I still do not understand how the timer guarantees stability).
ok, let me try again.
So we take rq->lock, at this point we know rd
We introduce a configuration variable for the feature to make it easier for
various architectures and/or configs to enable or disable it based on their
requirements.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt |9 +
kernel/spinlock.c
From: Nick Piggin <[EMAIL PROTECTED]>
Introduce ticket lock spinlocks for x86, which are FIFO. The implementation
is described in the comments. The straight-line lock/unlock instruction
sequence is slightly slower than the dec-based locks on modern x86 CPUs;
however, the difference is quite small on
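For readers unfamiliar with the concept, a user-space C11 sketch of the
ticket idea (illustrative only; the actual patch implements this in x86
assembly):

#include <stdatomic.h>

struct ticket_lock {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint owner;	/* ticket currently being served */
};

static void ticket_lock_acquire(struct ticket_lock *l)
{
	unsigned int me = atomic_fetch_add(&l->next, 1);

	/* FIFO: each waiter spins until its own ticket comes up */
	while (atomic_load(&l->owner) != me)
		;	/* the real code relaxes the CPU here ("rep; nop") */
}

static void ticket_lock_release(struct ticket_lock *l)
{
	atomic_fetch_add(&l->owner, 1);	/* serve the next ticket */
}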
Preemptible spinlock waiters effectively bypass the benefits of a FIFO
spinlock. Since we now have FIFO spinlocks enabled for x86, disable the
preemption feature on x86.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Nick Piggin <[EMAIL PROTECTED]>
---
arch/x86/Kconfig
or without the adaptive features that are added later in the series.
We add it here as a separate patch for greater review clarity on smaller
changes.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/rtmutex.c | 20 +++-
1 files changed, 15 insertions(+), 5
It is redundant to wake the grantee task if it is already running.
Credit goes to Peter for the general idea.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Peter Morreale <[EMAIL PROTECTED]>
---
kernel/rtmutex.c | 23 ++-
1 files changed, 1
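The hunk itself is truncated; the essence of the change is a guard of
roughly this shape (a sketch; the exact test in the patch may differ):

	/* only pay for a wakeup if the pending owner is actually asleep */
	if (pendowner->state != TASK_RUNNING)
		wake_up_process(pendowner);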
sleep when necessary (to avoid deadlock, etc).
This significantly improves many areas of the performance of the -rt
kernel.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Peter Morreale <[EMAIL PROTECTED]>
Signed-off-by: Sven Dietrich <[EMAIL PROTECTED
From: Peter W. Morreale <[EMAIL PROTECTED]>
This patch adds the adaptive spin lock busywait to rtmutexes. It adds
a new tunable: rtmutex_timeout, which is the companion to the
rtlock_timeout tunable.
Signed-off-by: Peter W. Morreale <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt | 37 +
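A sketch of the adaptive-wait idea (names and the owner-running test are
illustrative, not the patch's actual code):

static int adaptive_wait(struct rt_mutex *lock, struct task_struct *owner,
			 unsigned long timeout)
{
	while (timeout--) {
		if (rt_mutex_owner(lock) != owner)
			return 0;	/* released: go retry the fastpath */
		if (!task_is_current(owner))	/* assumed helper */
			break;		/* owner is off-CPU: spinning is futile */
		cpu_relax();
	}
	return 1;	/* timed out or owner slept: block instead */
}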
From: Peter W. Morreale <[EMAIL PROTECTED]>
In wakeup_next_waiter(), we take the pi_lock, and then find out whether
we have another waiter to add to the pending owner. We can reduce
contention on the pi_lock for the pending owner if we first obtain the
pointer to the next waiter outside of the pi_
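A sketch of the reordering (assumed shape of wakeup_next_waiter(); field
names follow the 2.6.24-era rtmutex code):

	/* read the top waiter before taking the pending owner's pi_lock */
	next = rt_mutex_top_waiter(lock);

	spin_lock(&pendowner->pi_lock);
	plist_add(&next->pi_list_entry, &pendowner->pi_waiters);
	spin_unlock(&pendowner->pi_lock);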
Decorate the printk path with an "unlikely()"
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/rtmutex.c |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 122f143..ebdaa17 100644
--- a/kernel
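The hunks are truncated; the pattern is simply this (a generic
illustration, not the actual lines changed):

	/* keep the cold printk branch off the straight-line hot path */
	if (unlikely(debug_locks))	/* condition illustrative */
		printk(KERN_DEBUG "rtmutex: contention on %p\n", lock);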
From: Peter W. Morreale <[EMAIL PROTECTED]>
Remove the redundant attempt to get the lock. While it is true that the
exit path with this patch adds an unnecessary xchg (in the event the
lock is granted without further traversal in the loop), experimentation
shows that we almost never encounter thi
From: Sven Dietrich <[EMAIL PROTECTED]>
Signed-off-by: Sven Dietrich <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt | 11 +++
kernel/rtmutex.c |4
kernel/rtmutex_adaptive.h | 11 +--
kernel/sysctl.c | 12
4 files changed, 36 inser
. tasks that the
scheduler picked to run first have a logically higher priority among tasks
of the same prio). This helps to keep the system "primed" with tasks doing
useful work, and the end result is higher throughput.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]&
From: Sven-Thorsten Dietrich <[EMAIL PROTECTED]>
Add /proc/sys/kernel/lateral_steal, to allow switching on and off
equal-priority mutex stealing between threads.
Signed-off-by: Sven-Thorsten Dietrich <[EMAIL PROTECTED]>
---
kernel/rtmutex.c |8 ++--
kernel/sysctl.c | 14 +
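The sysctl hook would take roughly this shape in kernel/sysctl.c (a sketch
against the 2.6.24-era ctl_table; the variable name is assumed):

	{
		.ctl_name	= CTL_UNNUMBERED,
		.procname	= "lateral_steal",
		.data		= &rtmutex_lateral_steal,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= &proc_dointvec,
	},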
The Real Time patches to the Linux kernel convert the architecture-specific
SMP-synchronization primitives commonly referred to as "spinlocks" to an
"RT mutex" implementation that supports a priority inheritance protocol
and priority-ordered wait queues. The RT mutex
implementation allows tasks t
The logic is currently broken so that PREEMPT_RT disables preemptible
spinlock waiters, which is counterintuitive.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/spinlock.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/spinlock.c b/
>>> On Thu, Feb 21, 2008 at 10:26 AM, in message
<[EMAIL PROTECTED]>, Gregory Haskins
<[EMAIL PROTECTED]> wrote:
> We have put together some data from different types of benchmarks for
> this patch series, which you can find here:
>
> ftp://ftp.novell.com/dev
>>> On Thu, Feb 21, 2008 at 11:36 AM, in message <[EMAIL PROTECTED]>,
Andi Kleen <[EMAIL PROTECTED]> wrote:
> On Thursday 21 February 2008 16:27:22 Gregory Haskins wrote:
>
>> @@ -660,12 +660,12 @@ rt_spin_lock_fastlock(struct rt_mutex *lock,
>>
>>> On Thu, Feb 21, 2008 at 11:41 AM, in message <[EMAIL PROTECTED]>,
Andi Kleen <[EMAIL PROTECTED]> wrote:
>> +config RTLOCK_DELAY
>> +int "Default delay (in loops) for adaptive rtlocks"
>> +range 0 10
>> +depends on ADAPTIVE_RTLOCK
>
> I must say I'm not a big fan of puttin
>>> On Thu, Feb 21, 2008 at 4:24 PM, in message <[EMAIL PROTECTED]>,
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> hm. Why is the ticket spinlock patch included in this patchset? It just
> skews your performance results unnecessarily. Ticket spinlocks are
> independent conceptually, they are alread
>>> On Thu, Feb 21, 2008 at 4:42 PM, in message <[EMAIL PROTECTED]>,
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> * Bill Huey (hui) <[EMAIL PROTECTED]> wrote:
>
>> I came to the original conclusion that it wasn't originally worth it,
>> but the dbench numbers published say otherwise. [...]
>
> dbe
>>> On Tue, Feb 5, 2008 at 4:58 PM, in message
<[EMAIL PROTECTED]>, Daniel Walker
<[EMAIL PROTECTED]> wrote:
> On Tue, Feb 05, 2008 at 11:25:18AM -0700, Gregory Haskins wrote:
>> @@ -6241,7 +6242,7 @@ static void rq_attach_root(struct rq
Pavel Machek wrote:
Hi!
Are there any recent changes in cpu hotplug? I have suspend (random)
problems, nosmp seems to fix it, and last messages in the "it hangs"
case are from cpu hotplug...
Pavel
Hi Pavel,
Can you send
Pavel Machek wrote:
Hi!
Are there any recent changes in cpu hotplug? I have suspend (random)
problems, nosmp seems to fix it, and last messages in the "it hangs"
case are from cpu hotplug...
Can you send along your cpuinfo?
It happened on more than one machine, one cpui
Gregory Haskins wrote:
@@ -732,14 +741,15 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
debug_rt_mutex_print_deadlock(&waiter);
- schedule_rt_mutex(lock);
+ update_current(TASK_UNINTERRUPTIBLE, &saved_state);
I have a question for everyone out there ab
Paul E. McKenney wrote:
Governing the timeout by context-switch overhead sounds even better to me.
Really easy to calibrate, and short critical sections are of much shorter
duration than are a context-switch pair.
Yeah, fully agree. This is on my research "todo" list. My theory is
that the u
Pavel Machek wrote:
Hi!
Decorate the printk path with an "unlikely()"
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/rtmutex.c |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 122f143
Bill Huey (hui) wrote:
The might_sleep is an annotation as well as a conditional preemption
point for the regular kernel. You might want to do a schedule check
there, but it's the wrong function if memory serves me correctly. It's
reserved for things that are actually designed to sleep.
Note that
You can download this series here:
ftp://ftp.novell.com/dev/ghaskins/adaptive-locks-v2.tar.bz2
Changes since v1:
*) Rebased from 24-rt1 to 24.2-rt2
*) Dropped controversial (and likely unnecessary) printk patch
*) Dropped (internally) controversial PREEMPT_SPINLOCK_WAITERS config options
*) Incor
. tasks that the
scheduler picked to run first have a logically higher priority among tasks
of the same prio). This helps to keep the system "primed" with tasks doing
useful work, and the end result is higher throughput.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]&
From: Sven-Thorsten Dietrich <[EMAIL PROTECTED]>
Add /proc/sys/kernel/lateral_steal, to allow switching on and off
equal-priority mutex stealing between threads.
Signed-off-by: Sven-Thorsten Dietrich <[EMAIL PROTECTED]>
---
kernel/rtmutex.c |7 ++-
kernel/sysctl.c | 14 ++
or without the adaptive features that are added later in the series.
We add it here as a separate patch for greater review clarity on smaller
changes.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/rtmutex.c | 20 +++-
1 files changed, 15 insertions(+), 5
It is redundant to wake the grantee task if it is already running.
Credit goes to Peter for the general idea.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Peter Morreale <[EMAIL PROTECTED]>
---
kernel/rtmutex.c | 45 --
sleep when necessary (to avoid deadlock, etc).
This significantly improves many areas of the performance of the -rt
kernel.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Peter Morreale <[EMAIL PROTECTED]>
Signed-off-by: Sven Dietrich <[EMAIL PROTECTED
From: Sven Dietrich <[EMAIL PROTECTED]>
Signed-off-by: Sven Dietrich <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt | 11 +++
kernel/rtmutex.c |4
kernel/rtmutex_adaptive.h | 11 +--
kernel/sysctl.c | 12
4 files changed, 36 inser
From: Peter W. Morreale <[EMAIL PROTECTED]>
This patch adds the adaptive spin lock busywait to rtmutexes. It adds
a new tunable: rtmutex_timeout, which is the companion to the
rtlock_timeout tunable.
Signed-off-by: Peter W. Morreale <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt | 37 +
From: Peter W. Morreale <[EMAIL PROTECTED]>
In wakeup_next_waiter(), we take the pi_lock, and then find out whether
we have another waiter to add to the pending owner. We can reduce
contention on the pi_lock for the pending owner if we first obtain the
pointer to the next waiter outside of the pi_
From: Peter W. Morreale <[EMAIL PROTECTED]>
Remove the redundant attempt to get the lock. While it is true that the
exit path with this patch adds an unnecessary xchg (in the event the
lock is granted without further traversal in the loop), experimentation
shows that we almost never encounter thi
>>> On Mon, Feb 25, 2008 at 4:54 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> Hi!
>
>> @@ -720,7 +728,8 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
>> * saved_state accordingly. If we did not get a real wakeup
>> * then we return with the saved st
>>> On Mon, Feb 25, 2008 at 5:03 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> Hi!
>
>> +/*
>> + * Adaptive-rtlocks will busywait when possible, and sleep only if
>> + * necessary. Note that the busyloop looks racy, and it is, but we do
>> + * not care. If we lo
>>> On Mon, Feb 25, 2008 at 5:09 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> Hi!
>
>> From: Peter W. Morreale <[EMAIL PROTECTED]>
>>
>> This patch adds the adaptive spin lock busywait to rtmutexes. It adds
>> a new tunable: rtmutex_timeout, which is the compani
>>> On Mon, Feb 25, 2008 at 5:57 PM, in message
<[EMAIL PROTECTED]>, Sven-Thorsten Dietrich
<[EMAIL PROTECTED]> wrote:
>
> But Greg may need to enforce it on his git tree that he mails these from
> - are you referring to anything specific in this patch?
>
That's what I don't get. I *did* checkp
>>> On Mon, Feb 25, 2008 at 5:03 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
>> +static inline void
>> +prepare_adaptive_wait(struct rt_mutex *lock, struct adaptive_waiter
> *adaptive)
> ...
>> +#define prepare_adaptive_wait(lock, busy) {}
>
> This is evil. Use
>>> On Mon, Feb 25, 2008 at 5:06 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
>
> I believe you have _way_ too many config variables. If this can be set
> at runtime, does it need a config option, too?
Generally speaking, I think until this algorithm has an adapti
>>> On Tue, Feb 26, 2008 at 1:06 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> On Tue 2008-02-26 08:03:43, Gregory Haskins wrote:
>> >>> On Mon, Feb 25, 2008 at 5:03 PM, in message
>> <[EMAIL PROTECTED]>, Pavel M
Hi Dmitry,
>>> On Sun, Dec 9, 2007 at 12:16 PM, in message
<[EMAIL PROTECTED]>, "Dmitry
Adamushko" <[EMAIL PROTECTED]> wrote:
> [ cc'ed lkml ]
>
> I guess, one possible load-balancing point is out of consideration --
> sched_setscheduler()
> (also rt_mutex_setprio()).
>
> (1) NORMAL --> RT, wh
This patch should button up those conditions.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Dmitry Adamushko <[EMAIL PROTECTED]>
---
kernel/sched.c | 8
kernel/sched_rt.c | 46 +-
2 files changed, 53 insertions(+),
>>> On Sun, Dec 9, 2007 at 9:53 PM, in message
<[EMAIL PROTECTED]>, Gregory Haskins
<[EMAIL PROTECTED]> wrote:
> + * I have no doubt that this is the proper thing to do to make
> + * sure RT tasks are properly balanced. What I cannot wrap
>>> On Thu, Dec 13, 2007 at 7:06 PM, in message
<[EMAIL PROTECTED]>, Steven Rostedt
<[EMAIL PROTECTED]> wrote:
>
> This is from Gregory Haskins' patch. He forgot to compile check for
> warnings on UP again ;-)
Doh!
>
> Greg,
>
> Can you mer
Mark Hansen wrote:
Hello,
Firstly, may I apologise as I am not a member of the LKML, and ask that
I be CC'd in any responses that may be forthcoming.
My question concerns the following patch which was incorporated into the
2.6.22 kernel (quoted from that change log):
Today, all threads wai
Gregory Haskins wrote:
(*) I have no information on whether the futex-plist implementation was
pulled from the tree to cause your regression. It is possible that the
changes between 22 and 23 are just tickling your environment enough to
bring out this RT-preempt issue.
Hmm...seems I
[EMAIL PROTECTED] wrote:
Hello,
I have some strange behavior in one of my systems.
I have a real-time kernel thread under SCHED_FIFO which is running every
10ms.
It is blocking on a semaphore and released by a timer interrupt every 10ms.
Generally this works really well.
However, there is a mod
Doh! I guess there should be a rule about sending patches out after midnight
;)
The original patch I worked on was written before the code was moved to
validate_chain(), so my previous posting didn't quite translate when I merged
with git HEAD. Here is an updated patch. Sorry for the confusion.
may inadvertently fail to find a hit in the cache
resulting in a new node being added to the graph for every acquire.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/lockdep.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/lockdep.c b/ke
eed RT
balancing.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 12 +---
1 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 93fd6de..aaacec2 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -631
This series may help debugging certain circumstances where the serial
console is unresponsive (e.g. RT51+ spinner, or scheduler problem). It changes
the serial8250 driver to use IRQF_NODELAY so that interrupts execute in irq
context instead of a kthread.
It works pretty well on this end, though it
This is a cleanup in preparation for the console-nodelay patch to follow
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
drivers/serial/8250.c | 459 ++---
1 files changed, 241 insertions(+), 218 deletions(-)
diff --git a/drivers/seria
through more reliably.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
drivers/char/sysrq.c | 8 +
drivers/serial/8250.c | 239 ++-
drivers/serial/8250.h |6 +
drivers/serial/Kconfig | 16 +++
include/linux/serial_
On Fri, 2007-10-05 at 18:41 +0200, Thomas Gleixner wrote:
> On Fri, 5 Oct 2007, Gregory Haskins wrote:
> > This series may help debugging certain circumstances where the serial
> > console is unresponsive (e.g. RT51+ spinner, or scheduler problem). It
> > changes
> > t
On Mon, 2007-10-08 at 10:10 -0400, Steven Rostedt wrote:
> This issue has hit me enough times where I've played with a few other
> ideas. I just haven't had the time to finish them. The main problem is if
> the system locks up somewhere we have a lock held that keeps us from
> scheduling. Once tha
On Mon, 2007-10-08 at 10:41 -0400, Steven Rostedt wrote:
> --
> On Mon, 8 Oct 2007, Gregory Haskins wrote:
> >
> > Hi Steve,
> > What you describe is exactly what I did. The IRQF_NODELAY handler
> > just minimally checks to see if the character is a sysrq relate
Hi Guys,
Nice find! Comment inline..
(adding linux-rt-users)
for reference to
http://lkml.org/lkml/2007/10/8/252
On Mon, 2007-10-08 at 22:46 -0400, Steven Rostedt wrote:
> Index: linux-2.6.23-rc9-rt2/kernel/sched.c
> ===
> ---
Hi All,
The first two patches are from Mike and Steven on LKML, which the rest of my
series is dependent on. Patch #4 is a resend from earlier.
Series Summary:
1) Send IPI on overload regardless of whether prev is an RT task
2) Set the NEEDS_RESCHED flag on reception of RESCHED_IPI
3) Fix a mis
From: Mike Kravetz <[EMAIL PROTECTED]>
RESCHED_IPIs can be missed if more than one RT task is awoken simultaneously
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c |2 +-
1 files changed, 1 insertio
From: Mike Kravetz <[EMAIL PROTECTED]>
x86_64 based RESCHED_IPIs fail to set the reschedule flag
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
arch/x86_64/kernel/smp.c |6 +++---
1 files changed, 3 insertio
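The hunks are truncated; the presumed shape of the fix is that the IPI
handler must actually flag a reschedule (a sketch only, not the verified
patch body):

asmlinkage void smp_reschedule_interrupt(void)
{
	ack_APIC_irq();
	/* without this, the IPI arrives but nothing reschedules */
	set_tsk_need_resched(current);
}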
Any number of tasks could be queued behind the current task, so direct the
balance IPI at all CPUs (other than current)
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Steven Rostedt <[EMAIL PROTECTED]>
CC: Mike Kravetz <[EMAIL PROTECTED]>
CC: Peter W. Morreale
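A sketch of the broadcast using the cpumask API of that era (the IPI
helper name varies by architecture and is illustrative here):

	cpumask_t mask = cpu_online_map;

	cpu_clear(smp_processor_id(), mask);	/* everyone but ourselves */
	if (!cpus_empty(mask))
		send_IPI_mask(mask, RESCHEDULE_VECTOR);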
those affected units.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Peter W. Morreale <[EMAIL PROTECTED]>
---
kernel/sched.c | 15 +--
1 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index a28ca9d..6ca5f4f 100644
The system currently evaluates all online CPUs whenever one or more enters
an rt_overload condition. This suffers from scalability limitations as
the # of online CPUs increases. So we introduce a cpumask to track
exactly which CPUs need RT balancing.
Signed-off-by: Gregory Haskins <[EM
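A sketch of the tracking mask (names assumed; presumably keyed off the
runqueue's RT-task count crossing the overload threshold):

static cpumask_t rt_overload_mask;

static inline void rt_set_overload(int cpu)
{
	cpu_set(cpu, rt_overload_mask);	/* this CPU has RT tasks waiting */
}

static inline void rt_clear_overload(int cpu)
{
	cpu_clear(cpu, rt_overload_mask);
}

/* the balancer then walks rt_overload_mask instead of cpu_online_map */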
On Tue, 2007-10-09 at 11:00 -0400, Steven Rostedt wrote:
>
Hi Steve, Peter,
> --
> On Tue, 9 Oct 2007, Gregory Haskins wrote:
> > Hi All,
>
> Hi Gregory,
>
> >
> > The first two patches are from Mike and Steven on LKML, which the rest of my
> > ser
Applies to 2.6.23-rc9-rt2... This is another RTO related fix from the thread
two days ago.
---
RT: Fix special-case exception for preempting the local CPU
Check whether the local CPU is eligible to take the task before trying to
preempt it.
Signed-off-by: Gregory Haskins <[EMAIL PROTEC
The current series applies to 23-rt1-pre1.
This is a snapshot of the current work-in-progress for the rt-overload
enhancements. The primary motivation for the series is to improve the
algorithm for distributing RT tasks to keep the highest-priority tasks active. The
current system tends to blast IPIs "sh
From: Steven Rostedt <[EMAIL PROTECTED]>
This has been compile tested (and no more ;-)
The idea here is this: we have just scheduled in an RT task, and we either
pushed a lesser RT task away or more than one RT task was scheduled on this
CPU before scheduling occurred.
The an
This is my own interpretation of Peter's recommended changes to Steven's push-rt
patch. Just to be clear, Peter does not endorse this patch unless he himself
specifically says so ;).
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 12 ++--
1
task->cpus_allowed can have bit positions that are set for CPUs that are
not currently online. So we optimize our search by ANDing against the online
set.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c |6 +-
1 files changed, 5 insertions(+), 1 deletions
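The change amounts to one mask operation (a sketch using the 2.6.23-era
cpumask API):

	cpumask_t candidates;

	/* only consider CPUs that are both permitted and actually online */
	cpus_and(candidates, p->cpus_allowed, cpu_online_map);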
overhead, such
as: seqlocks, per_cpu data to avoid cacheline contention, avoiding locks
in the update code when possible, etc.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
include/linux/cpupri.h | 25 +
kernel/Kconfig.preempt | 11 ++
kernel/Makefile | 1
Normalize the CPU priority system between the two search algorithms, and
modularize the search function within push_rt_tasks.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 91 ++--
1 files changed, 61 inse
tasks, or equilibrium is achieved. The original logic only tried to push one
task per event.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 69 ++--
1 files changed, 42 insertions(+), 27 deletions(-)
diff --g
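The resulting loop structure is simple (a sketch; push_rt_task() is
assumed to return nonzero for as long as pushes keep succeeding):

static void push_rt_tasks(struct rq *rq)
{
	/* keep pushing until no task can be migrated, i.e. equilibrium */
	while (push_rt_task(rq))
		;
}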
The system currently evaluates all online CPUs whenever one or more enters
an rt_overload condition. This suffers from scalability limitations as
the # of online CPUs increases. So we introduce a cpumask to track
exactly which CPUs need RT balancing.
Signed-off-by: Gregory Haskins <[EM
->cpus_allowed will effectively reduce our search
to within our domain. However, I believe there are cases where the
cpus_allowed mask may be all ones and therefore we err on the side of
caution. If it can be optimized later, so be it.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC:
The current code use a linear algorithm which causes scaling issues
on larger SMP machines. This patch replaces that algorithm with a
2-dimensional bitmap to reduce latencies in the wake-up path.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Christoph Lameter <[EMAIL
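A sketch of the two-level structure (illustrative; close to what this work
later became as kernel/sched_cpupri.c):

struct cpupri_vec {
	atomic_t	count;		/* CPUs currently at this priority */
	cpumask_t	mask;		/* which CPUs they are */
};

struct cpupri {
	struct cpupri_vec pri_to_cpu[CPUPRI_NR_PRIORITIES];
	long		  pri_active[CPUPRI_NR_PRIORITIES/BITS_PER_LONG + 1];
	int		  cpu_to_pri[NR_CPUS];
};

/*
 * "Find a CPU running below priority P" becomes: scan pri_active for a
 * set bit below P, then pick any CPU from that level's mask -- two
 * bitmap operations instead of a walk over every online CPU.
 */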
there may be other regressions as well. We make it easier on people
to select which method they want by making the algorithm a config option,
with the default being the current behavior.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt
This logic doesn't have any clients
yet, but it will later in the series.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Christoph Lameter <[EMAIL PROTECTED]>
CC: Paul Jackson <[EMAIL PROTECTED]>
CC: Simon Derr <[EMAIL PROTECTED]>
---
include/linu
These patches apply to the end of the rt-balance-patches v6 announced here:
http://lkml.org/lkml/2007/11/20/613
These replace the v6a patches announced here:
http://lkml.org/lkml/2007/11/21/226
Changes since v6a:
*) made features tunable via config options
*) fixed a bug related to setting a CPU
l.org/pub/linux/kernel/projects/rt/
or in prebuilt form here from opensuse-factory:
http://download.opensuse.org/distribution/SL-OSS-factory/inst-source/suse/x86_64/kernel-rt-2.6.24_rc3_git1-3.x86_64.rpm
Please consider for inclusion in the next convenient merge window.
Regards,
-Steven Roste
From: Steven Rostedt <[EMAIL PROTECTED]>
This patch adds accounting to keep track of the number of RT tasks running
on a runqueue. This information will be used in later patches.
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED
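The accounting boils down to a counter bumped on enqueue/dequeue (a
sketch matching the shape of the helpers this work introduced in
kernel/sched_rt.c):

static inline void inc_rt_tasks(struct task_struct *p, struct rq *rq)
{
	WARN_ON(!rt_task(p));
	rq->rt.rt_nr_running++;
}

static inline void dec_rt_tasks(struct task_struct *p, struct rq *rq)
{
	WARN_ON(!rt_task(p));
	WARN_ON(!rq->rt.rt_nr_running);
	rq->rt.rt_nr_running--;
}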
mation will be used for later patches.
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 3 +++
kernel/sched_rt.c | 18 ++
2 files changed, 21 insertions(+), 0 deletions(-)
diff --git
This
patch set does not address this issue.
Note: checkpatch reveals two over-80-character instances. I'm not sure
that breaking them up will help visually, so I left them as is.
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]&g
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched_rt.c | 36
1 files changed, 36 insertions(+), 0 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index b5ef4b8..b8c758a 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_