[ cc: lkml ]
>> > There is a property of shadow memory that I would like to exploit
>> > - any region of shadow memory can be reset to zero at any point
>> > w/o any bad consequences (it can lead to missed data
>> > races, but it's better than OOM kill).
>> > I've tried to execute madvise(MADV_DO
On 23/02/2008, Linus Torvalds <[EMAIL PROTECTED]> wrote:
>
> On Sat, 23 Feb 2008, Dmitry Adamushko wrote:
> >
> > it's not a LOAD that escapes *out* of the region. It's a MODIFY that gets
> *in*:
>
>
> Not with the smp_wmb(). That's t
gets *in*:
(1)
MODIFY(a);
LOCK
LOAD(b);
UNLOCK
can become:
(2)
LOCK
MODIFY(a)
LOAD(b);
UNLOCK
and (reordered)
(3)
LOCK
LOAD(b)
MODIFY(a)
UNLOCK
and this last one is a problem. No?
>
> Linus
>
--
Best regards,
Dmitry Adamushko
he _reverse_ order:
condition = new;
smp_mb();
try_to_wake_up();
=> (1) MODIFY(condition); (2) LOAD(current->state)
try_to_wake_up() does not need to be a full mb per se; the only
requirement (and only for situations like the above) is that there is a
full mb between the possible write ops. that
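To make the pairing concrete, here is a minimal sketch of the canonical
waker/waiter pattern being described (my own summary, not code from this
thread; 'condition' and 'task' are just placeholders):

	/* waker side */
	condition = 1;				/* MODIFY(condition) */
	smp_mb();				/* full mb: order the store before the
						   wakeup's read of the waiter's ->state */
	wake_up_process(task);			/* reads task->state internally */

	/* waiter side */
	set_current_state(TASK_INTERRUPTIBLE);	/* store to ->state, implies a full mb */
	if (!condition)				/* LOAD(condition) */
		schedule();
	__set_current_state(TASK_RUNNING);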
CPUs
+ when Stop Machine is triggered. Stop Machine is currently only
+ used by the module insertion and removal.
this "only" part. What about e.g. a 'cpu hotplug' case (_cpu_down())?
(or we should abstract it a bit to the point that e.g. a cpu can be
considered as
From: Dmitry Adamushko <[EMAIL PROTECTED]>
Subject: kthread: call wake_up_process() without the lock being held
- from the POV of synchronization, there should be no need to call
  wake_up_process() with the 'kthread_create_lock' being held;
- moreover, in order to su
From: Dmitry Adamushko <[EMAIL PROTECTED]>
Subject: kthread: add a missing memory barrier to kthread_stop()
We must ensure that kthread_stop_info.k has been updated before
the kthread's wakeup. This is required to properly support
the use of kthread_should_stop() in the main loop
2/2] kthread: call wake_up_process() without the lock being held
---
(this one is from Ingo's sched-devel tree)
softlockup: fix task state setting
kthread_stop() can be called when a 'watchdog' thread is executing after
kthread_should_stop() but before set_task_state(TASK_INTERR
ocation I'm able to find is from local_bh_enable() and
> from ksoftirqd/n threads (by calling do_softirq()). AFAIK, both
> invocations occur in a _non-interrupt_ context (exception context).
>
> So, where does the interrupt-context tasklets invocation really
> occur ?
Look at irq_exit() in softirq.c.
The common sequence is ... -> do_IRQ() --> irq_exit() --> invoke_softirq()
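Roughly (quoting from memory of that era's kernel/softirq.c, so treat it as
a sketch rather than the exact source), the relevant part of irq_exit() is:

	void irq_exit(void)
	{
		account_system_vtime(current);
		sub_preempt_count(IRQ_EXIT_OFFSET);
		/* not nested in another hardirq/softirq: run the pending
		 * softirqs (and thus tasklets) right on the IRQ return path */
		if (!in_interrupt() && local_softirq_pending())
			invoke_softirq();
		preempt_enable_no_resched();
	}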
--
Best regards,
Dmitry Adamushko
spin_lock(&kthread_create_lock);
list_add_tail(&create.list, &kthread_create_list);
- wake_up_process(kthreadd_task);
spin_unlock(&kthread_create_lock);
+ wake_up_process(kthreadd_task);
wait_for_completion(&create.done);
--
Best regards,
; < our main loop is inside this function
/* It might have exited on its own, w/o kthread_stop. Check. */
if (kthread_should_stop()) {
kthread_stop_info.err = ret;
complete(&kthread_stop_info.done);
}
return 0;
}
ocumented as part of this patch. Finally, I think the comment as is is
> hard to understand; I got the sense of it backwards on first reading;
> perhaps something like this:
>
> /*
> * Ensure kthread_stop_info.k is visible before wakeup, paired
> * with barri
On 19/02/2008, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> [ ... ]
> > >
> > > From: Dmitry Adamushko <[EMAIL PROTECTED]>
> > > Subject: kthread: add a memory barrier to kthread_stop()
> > >
> > > 'kthread' threads do a chec
- set_current_state(TASK_INTERRUPTIBLE);
- kthread_should_stop();
here, kthread_stop_info.k is not yet visible
- schedule()
...
we missed a 'kthread_stop' event.
hum?
TIA,
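To summarize the intended ordering (my own condensed sketch, using the
2.6.24-era kthread_stop_info names from this thread):

	/* stopper side, in kthread_stop(): */
	kthread_stop_info.k = k;	/* publish the stop request */
	smp_mb();			/* make .k visible before the wakeup */
	wake_up_process(k);

	/* kthread side, in its main loop: */
	set_current_state(TASK_INTERRUPTIBLE);	/* implies a full barrier */
	while (!kthread_should_stop()) {	/* reads kthread_stop_info.k */
		schedule();
		set_current_state(TASK_INTERRUPTIBLE);
	}
	__set_current_state(TASK_RUNNING);

Either the kthread sees .k before it sleeps, or the wakeup finds it already
in (or headed for) schedule() and puts it back on the runqueue.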
---
From: Dmitry Adamushko <[EMAIL PROTECTED]>
Subject: kthread: add a memory barrie
resched+0x31/0x40
> [] wait_for_common+0x34/0x170
> [] ? try_to_wake_up+0x77/0x200
> [] wait_for_completion+0x18/0x20
> [ ... ]
does a stack trace always look like this?
--
Best regards,
Dmitry Adamushko
On 03/02/2008, Arjan van de Ven <[EMAIL PROTECTED]> wrote:
> Dmitry Adamushko wrote:
> > Subject: latencytop: optimize LT_BACKTRACEDEPTH loops a bit.
> >
> > It looks like there is no need to loop any longer when 'same == 0'.
>
> thanks for the contri
Subject: latencytop: optimize LT_BACKTRACEDEPTH loops a bit.
It looks like there is no need to loop any longer when 'same == 0'.
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
diff --git a/kernel/latencytop.c b/kernel/latencytop.c
index b4e3c85..61f7da0 100644
--- a/ker
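As a generic illustration of the idea (my own sketch, not the actual
latencytop.c hunk; 'record' and 'lat' are placeholder names): once a
mismatch is found, 'same' can never become non-zero again, so the remaining
iterations are pure overhead and the loop can bail out early:

	int q, same = 1;

	for (q = 0; q < LT_BACKTRACEDEPTH; q++) {
		if (record->backtrace[q] != lat->backtrace[q]) {
			same = 0;
			break;	/* no point comparing further entries */
		}
	}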
On 02/02/2008, Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
>
> > yeah, I was already on a half-way to check it out.
> >
> > It does fix a problem for me.
> >
> > Don't forget to take along these
On 01/02/2008, Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
>
> > > I've observed delays from ~3 s. up to ~8 s. (out of ~20 tests) so
> > > the 10s. delay of msleep_interruptible() might be related but I'
On 01/02/2008, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> On 01/02/2008, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> >
> > thanks - i cannot reproduce it on my usual suspend/resume testbox
> > because e1000 broke on it, and this is a pretty annoying regression
ine
real	0m7.770s
I've observed delays from ~3 s. up to ~8 s. (out of ~20 tests) so the
10s. delay of msleep_interruptible() might be related but
I'm still looking for the reason why this fix helps (and what goes
wrong with the current code).
>
> Ingo
>
--
Best re
uptible(1) timeouts? On
average, it would take +-5 sec. and might explain the first
observation of Rafael -- "...adds a 5 - 10 sec delay..." (although,
lately he reported up to +30 sec. delays).
(/me going to also try reproducing it later today)
> [ ... ]
--
Best regards,
Dmitry Ad
tasks
>
> Reverting this commit (it reverts with some minor modifications) fixes the
> problem for me.
What if you use the same kernel that triggers the problem and just disable
this new 'softlockup' functionality:
echo 0 > /proc/sys/kernel/hung_task_timeout_secs
does the problem disappear?
TIA,
>
> Thanks,
> Rafael
>
--
Best regards,
Dmitry Adamushko
On 20/01/2008, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> Hello Arjan,
>
> a few comments on the current locking scheme.
heh... now having read the first message in this series ("[Announce]
Development release 0.1 of the LatencyTOP tool"), I finally see that
"fin
*m, void *v)
> +{
> + int i;
> + struct task_struct *task = m->private;
> + seq_puts(m, "Latency Top version : v0.1\n");
> +
> + for (i = 0; i < 32; i++) {
> + if (task->latency_record[i].reason)
for (i = 0; i <
cpu, rq);
is called from sched_class_fair :: load_balance_fair()
upon getting a PRE_SCHEDULE load-balancing point.
IMHO, it would look nicer this way _BUT_ yeah, this 'full' abstraction
adds additional overhead to the hot path (which might make it not
worth it).
--
Best regards,
Dmitr
From: Dmitry Adamushko <[EMAIL PROTECTED]>
Clean-up try_to_wake_up().
Get rid of the 'new_cpu' variable in try_to_wake_up() [ that's one #ifdef
section less ].
Also remove a few redundant blank lines.
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
---
di
From: Dmitry Adamushko <[EMAIL PROTECTED]>
No need to do a check for 'affine wakeup and passive balancing possibilities' in
select_task_rq_fair() when task_cpu(p) == this_cpu.
I guess, this part got missed upon introduction of per-sched_class
select_task_rq()
in try_to_wake_up()
queue/dequeue() interface would become less straightforward,
logically-wise.
Something like:
rq = activate_task(rq, ...) ; /* may unlock rq and lock/return another one */
would complicate the existing use cases.
--
Best regards,
Dmitry Adamushko
ven (at least, as an experiment) fix them to different
CPUs as well?
sure, the scenario is highly dependent on the nature of those
'events'... and I can just speculate here :-) (but I'd imagine
situations when such a scenario would scale better).
>
> Thank you again,
> --Mica
if (rq->nr_running == 0)
return idle_sched_class.pick_next_task(rq);
at the beginning of pick_next_task().
(or maybe put it at the beginning of the
if (likely(rq->nr_running == rq->cfs.nr_running)) {} block as we
already have 'likely()' there).
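For context, pick_next_task() of that era looked roughly like this
(reconstructed from memory, so only a sketch); the proposal is to
short-circuit the empty-runqueue case before, or inside, the fair-class
fast path:

	static inline struct task_struct *
	pick_next_task(struct rq *rq, struct task_struct *prev)
	{
		const struct sched_class *class;
		struct task_struct *p;

		/* proposed: if (unlikely(!rq->nr_running))
		 *		return idle_sched_class.pick_next_task(rq); */

		/*
		 * Optimization: if all runnable tasks are CFS tasks,
		 * we can call the fair class directly:
		 */
		if (likely(rq->nr_running == rq->cfs.nr_running)) {
			p = fair_sched_class.pick_next_task(rq);
			if (likely(p))
				return p;
		}

		class = sched_class_highest;
		for ( ; ; ) {
			p = class->pick_next_task(rq);
			if (p)
				return p;
			class = class->next;	/* the idle class never returns NULL */
		}
	}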
--
Best regards,
Dmitry Adamushko
On 22/11/2007, Micah Dowty <[EMAIL PROTECTED]> wrote:
> On Tue, Nov 20, 2007 at 10:47:52PM +0100, Dmitry Adamushko wrote:
> > btw., what's your system? If I recall right, SD_BALANCE_NEWIDLE is on
> > by default for all configs, except for NUMA nodes.
>
> It
ache_nice_tries",
&sd->cache_nice_tries,
sizeof(int), 0644, proc_dointvec_minmax);
- set_table_entry(&table[12], "flags", &sd->flags,
+ set_table_entry(&table[10], "flags", &sd->flags,
sizeof(int), 064
ke any difference?
moreover, /proc/sys/kernel/sched_domain/cpu1/domain0/newidle_idx seems
to determine which source of load is used for calculating the busiest
group. e.g. with newidle_idx == 0, the current load on the queue is
used instead of cpu_load[].
>
> Thanks,
> --Micah
>
--
Bes
# cat /proc/schedstat
... wait either a few seconds or until the problem disappears
(whichever comes first)
# cat /proc/schedstat
TIA,
>
> --Micah
>
--
Best regards,
Dmitry Adamushko
.
yep, that's what load_balance_newidle() is about... so maybe there are
some factors resulting in its inconsistency/behavioral differences on
different kernels.
Let's say we change a pattern for the niced task: e.g. run for 100 ms.
and then sleep for 300 ms. (that's ~25% of cpu
en not seen (i.e.
don't contribute to cpu_load[]) on cpu_0...
we do sampling every tick (sched.c :: update_cpu_load()) and consider
this_rq->ls.load.weight at this particular moment (that is the sum of
'weights' for all runnable tasks on this rq)... and it may well be
that the afo
em with low HZ value (I can't see the kernel config
immediately on the bugzilla page) and a task niced to the lowest
priority (is this 'kjournald' mentioned in the report of lower prio? )
running for a full tick, 'tmp' can be such a big value... hmm?
--
Best regards,
Dmitry
maximum 'delays' you see before hitting this "once in 10
minutes" point?
say, with 256 Mb. the blips could just become lower (e.g. 2 ms.) and
are not reported as "big ones" (>5 ms. in your terms)...
Quite often the source of high periodic latency is SMI (System
Ma
our code and device spec.)...
--> ISR runs and due to some error e.g. loops endlessly/deadlocks/etc.
Have you tried placing a printk() at the beginning of the ISR?
--
Best regards,
Dmitry Adamushko
, cfs_rq->curr can be NULL
> for the child.
Would it be better, logically-wise, to use is_same_group() instead?
Although, we can't have 2 groups with cfs_rq->curr != NULL on the same
CPU... so if the child belongs to another group, its cfs_rq->curr is
automatically NULL indeed.
;running?' tests to be separate.
Humm... the 'current' is not kept within the tree but
current->se.on_rq is supposed to be '1',
so the old code looks ok to me (at least for the 'leaf' elements).
Maybe you were able to get a more useful oops on your side?
> -
though, there can be some disadvantages here as well. e.g. we would
likely need to remove the 'max 3 tasks at once' limit and get,
theoretically, unbounded time spent in push_rt_tasks() on a single
CPU).
>
> -- Steve
>
--
Best regards,
Dmitry Adamushko
On 21/10/2007, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> On 20/10/2007, Jeff Garzik <[EMAIL PROTECTED]> wrote:
> > Chuck Ebbert wrote:
> > > On 10/19/2007 05:39 PM, Jeff Garzik wrote:
> > >> On my main devel box, vanilla 2.6.23 on x86-64/Fedora-7, I
s of /proc/PID/stat is the CPU# on which the
task is currently active.
Let's start a busy-loop task on this cpu and see whether it's able to
make any progress (the TIME counter in 'ps'):
# taskset -c THIS_CPU some_busy_looping_prog
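(If nothing suitable is at hand, any trivial busy loop will do; the program
name above is of course just a placeholder, e.g.:)

	/* busy.c -- spin forever so the TIME field in 'ps' keeps growing
	 * as long as the task actually gets CPU time */
	int main(void)
	{
		volatile unsigned long ctr = 0;

		for (;;)
			ctr++;
		return 0;
	}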
TIA,
--
Best regards,
Dmitry Adamushko
pushing/pulling 1 task at once (as
described above)... any additional actions are just overhead or there
is some problem with the algorithm (ah well, or with my understanding
:-/ )
--
Best regards,
Dmitry Adamushko
be used in pull_rt_task()
for the 'next' in a similar way as it's done in push_rt_task().
>
> [ ... ]
>
--
Best regards,
Dmitry Adamushko
d then,
static struct task_struct *pick_next_task_rt(struct rq *rq)
{
	struct rt_prio_array *array = &rq->rt.active;
	struct task_struct *next;
	struct list_head *queue;
	int idx;
-	idx = sched_find_first_bit(array->bitmap);
+	rq->highest_prio
st, array->queue + p->prio);
> __set_bit(p->prio, array->bitmap);
> +
> + inc_rt_tasks(p, rq);
why do you need the rt_task(p) check in {inc,dec}_rt_tasks()?
{enqueue,dequeue}_task_rt() seem to be the only callers and they will
crash (or corrupt memory) anyway in t
ted earlier today (crash in put_prev_task_fair() -->
__enqueue_task() --> rb_insert_color()) that you are already aware of
... (/me will continue tomorrow).
--
Best regards,
Dmitry Adamushko
threads (like events/cpu, ksoftirqd/cpu,
etc.) being created on start-up (x NUMBER_OF_CPUS) and SD_BALANCE_FORK
(actually, sched_balance_self() from sched_fork()) is just an overhead
in this case...
although, sched_balance_self() is likely to be responsible for a minor
% of the time taken to crea
te_task(rq, p, 0);
> + __setscheduler(rq, p, SCHED_NORMAL, 0);
> + activate_task(rq, p, 0);
> + resched_task(p);
I guess, put_prev_task() / set_curr_task() should be called (for the
case of task_running(p)) to make it group-scheduler-friendly (as it's
done e.g. in sched_setscheduler(
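The pattern meant here is roughly the one sched_setscheduler() itself uses
(sketched from memory; exact names may differ in the tree being discussed):

	on_rq = p->se.on_rq;
	running = task_running(rq, p);
	if (on_rq) {
		deactivate_task(rq, p, 0);
		if (running)
			p->sched_class->put_prev_task(rq, p);
	}

	__setscheduler(rq, p, policy, param->sched_priority);

	if (on_rq) {
		if (running)
			p->sched_class->set_curr_task(rq);
		activate_task(rq, p, 0);
	}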
s::task_timeslice() but decided it was not worth it.
--
Best regards,
Dmitry Adamushko
On 03/10/2007, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> On 03/10/2007, Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > I can't see anything about clearing. I think, this was about charging,
> > which should change the key enough, to move a task to, maybe, a
a_exec_weighted;
+ }
+ return;
}
+
/*
* Find the rightmost entry in the rbtree:
*/
>
> Jarek P.
>
--
Best regards,
Dmitry Adamushko
--- sched_fair-old.c 2007-10-03 12:45:17.010306000 +0200
+++ sched_fair.c 2007-10-03 12:44:46.89985100
The following patch (sched: disable sleeper_fairness on SCHED_BATCH)
seems to break GROUP_SCHED. Although, it may be
'oops'-less since 'p' may always happen to be a valid
address.
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
---
diff --git a/ke
On 01/10/2007, Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
>
> > here is a few patches on top of the recent 'sched-dev':
> >
> > (1) [ proposal ] make timeslices of SCHED_RR tasks constant and no
and this one,
make dequeue_entity() / enqueue_entity() and update_stats_dequeue() /
update_stats_enqueue() look similar, structure-wise.
zero effect, functionally-wise.
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
---
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 2
remove obsolete code -- calc_weighted()
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
---
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index fe4003d..2674e27 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -342,17 +342,6 @@ update_stats_wait_start(struct
40 51727ca0f ../build/kernel/sched.o.before
465535102 40 51695c9ef ../build/kernel/sched.o
yeah, this seems to require task_rq_lock/unlock() but this is not a hot
path.
what do you think?
(compiles well, not functionally tested yet)
Almost-Signed-off-by: Dmitry Adamushko <[EMAI
humm... I think, it'd be safer to have something like the following
change in place.
The thing is that __pick_next_entity() must never be called when
first_fair(cfs_rq) == NULL. It wouldn't be a problem if 'run_node' were
the very first field of 'struct sched_entity' (but it's the second).
Th
ady existing
> "do not leak PI boosting priority to the child" at the sched_fork().
> This patch moves the adjusting sched_class from wake_up_new_task()
> to sched_fork().
>
> Signed-off-by: Hiroshi Shimamoto <[EMAIL PROTECTED]>
Signed-off-by: Dmitry Adamushko
> > Hi Ingo,
> >
> > I found an issue about the scheduler.
> > If you need a test case, please let me know.
> > Here is a patch.
> > [ ... ]
> > The new thread should have a valid scheduler class before being queued.
> > This patch fixes this by setting the suitable scheduler class.
>
> Nice fix! It's a 2.6.23 mus
(3)
rework enqueue/dequeue_entity() to get rid of sched_class::set_curr_task().
This simplifies sched_setscheduler(), rt_mutex_setprio() and sched_move_tasks().
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]>
---
diff --g
(2)
the 'p' (task_struct) parameter in sched_class::yield_task()
is redundant as the caller is always the 'current'. Get rid of it.
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
---
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9fd
se->load.weight = shares;
> + se->load.inv_weight = div64_64((1ULL<<32), shares);
A bit of nit-picking... are you sure there is no need for the non-'__'
versions of dequeue/enqueue() here (at least, for the sake of
update_curr())? Although, I don't have -mm at hand a
		enqueue_task(rq, tsk, 0);
		if (unlikely(running) && tsk->sched_class == &fair_sched_class)
			tsk->sched_class->set_curr_task(rq);
	}
	task_rq_unlock(rq, &flags);
}
> --
> Regards,
> vatsa
>
--
Best regards,
extra typing is worth it ;)
>
> Ok! Here's the modified patch (against 2.6.23-rc4-mm1).
as everyone seems to be in a quest for a better name... I think the
obvious one would be just 'group_sched'.
--
Best regards,
Dmitry Adamushko
ask_cfs_rq(tsk);
> +
> + if (on_rq)
> +activate_task(rq, tsk, 0);
> +
> + if (unlikely(rq->curr == tsk) && tsk->sched_class ==
> &fair_sched_class)
> + tsk->sched_class->set_curr_task(rq);
> +
> +
uess, a patch would be welcomed :-)
>
> Mitchell Erblich
>
--
Best regards,
Dmitry Adamushko
On 09/08/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> FYI, that's the patch i applied:
Thanks. Added my SOB below.
>
> --->
> Subject: sched: optimize task_tick_rt() a bit
> From: Dmitry Adamushko <[EMAIL PROTECTED]>
>
>
hat may make accounting a bit less precise
for (a).
--
Best regards,
Dmitry Adamushko
ice(p->static_prio);
+
+	/* We are the only element on the queue. */
+	if (p->run_list.prev == p->run_list.next)
+		return;
+
	set_tsk_need_resched(p);
	/* put it at the end of the queue: */
--
Best regards,
Dmitry Adamushko
' just
from the thin air.
e.g. sleepers get an additional bonus to their 'wait_runtime' upon a
wakeup _but_ the amount of "wait_runtime" == "a given bonus" will be
additionally subtracted from tasks which happen to run later on (grep
for "sleeper_bonus"
f 80%) down from old scheduler
> without CFS.
40 or 80% is still a huge regression.
>
> Regards,
> Tim
>
--
Best regards,
Dmitry Adamushko
application, 10 sec. after it has started, 1 minute, a few minutes...
http://people.redhat.com/mingo/cfs-scheduler/tools/cfs-debug-info.sh
then send us the resulting files. TIA,
--
Best regards,
Dmitry Adamushko
ove)
'(long)rem_load_move <= 0'
and I think somewhere else in the code.
--
Regards,
vatsa
--
Best regards,
Dmitry Adamushko
t say, the user's task finally becomes active after _a lot_ of inactive
ticks (the user came back).. now it's in the rq and waiting for its
turn (which can easily be > 1 tick).. in the meantime 'load
balancing' is triggered.. and it considers the old lrq :: cpu_load[]
...
P.S.
ju
On 12/06/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Tue, Jun 12, 2007 at 11:03:36AM +0200, Dmitry Adamushko wrote:
> I had an idea of per-sched-class 'load balance' calculator. So that
> update_load() (as in your patch) would look smth like :
>
> ...
internally in update_load_fair()) ... but again, I'll come up with
some code for further discussion.
--
Regards,
vatsa
--
Best regards,
Dmitry Adamushko
_prev_task() from __setscheduler()? But it's not
supposed to be called from here, logically-wise. You just rely on its
current behavior (which is only about updating 'exec_start' and
'exec_sum') -- that's just bad. Maybe I misunderstood your intention
though...
--
Reg
What do you think?
sched_setscheduler()
{
	...
	on_rq = p->on_rq;
	if (on_rq)
		deactivate_task(rq, p, 0);
	oldprio = p->prio;
	__setscheduler(rq, p, policy, param->sched_priority);
	if (on_rq) {
		activate_task(rq, p, 0);
	...
--
R
On 05/06/07, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> now at 257428593818894 nsecs
>
> cpu: 0
> .nr_running: 3
> .raw_weighted_load : 2063
> .nr_switches : 242830075
> .nr_load_updates : 30172063
> .nr_unint
k_max" reported earlier.. I guess, sched_clock() is
tsc-based in your case?
Any way to get it switched to the jiffies-based one and repeat the test?
--
Best regards,
Dmitry Adamushko
if (interval > HZ*NR_CPUS/10)
	interval = HZ*NR_CPUS/10;
so it can't be > 0.2*HZ in your case (== once in 200 ms at max with
HZ=1000).. am I missing something? TIA
Peter
--
Best regards,
Dmitry Adamushko
to the recent kernel) and so far for me it's
mainly about making sure I see things sanely.
--
Best regards,
Dmitry Adamushko
's stupid?
Peter
--
Peter Williams [EMAIL PROTECTED]
--
Best regards,
Dmitry Adamushko
's different in your case (besides I don't claim that
everything is ok with SD and CFS) but I guess this could be one of the
reasons.
At least, fork()-ing the second process (with different
run_time/sleep_time characteristics) from the first one would ensure
both have the same "loop_per_ms
tly
in the "active" array and its time_slice != 0 -- the old p->prio is
not changed
So the task is queued taking into account the old_prio, although this
slot can be prohibited by a new p->static_prio. It's only for the very
first slot so one may call it err.. a feature (?)
the "rules" are never broken.
Most likely Con did it already but as he is off-line we can't know.
Did another test with hackbench and git-gc both at the same priority
and captured the top output; here all of them seem to have the
same priority..
Yep, this one looks good.
thanks.
s
surya 20 19 3684 484 484 S 0.0 0.1 0:00.00 git-gc
12357 surya 20 19 4440 684 684 S 0.0 0.1 0:00.06 git-repack
12366 surya 20 19 4440 228 228 S 0.0 0.0 0:00.00 git-repack
--
Best regards,
Dmitry Adamushko
Do you get a { pretty much the same / worse / much worse } feeling of
interactivity with git-gc running at the default (0) and, say, 5 or 10
nice levels?
TIA,
--
Best regards,
Dmitry Adamushko
set_user_nice() only
if the task may preempt the current one.
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
---
--- linux-2.6.21-rc6-mm1/kernel/sched-orig.c	2007-04-09 10:10:52.0 +0200
+++ linux-2.6.21-rc6-mm1/kernel/sched.c 2007-04-09 17:07:12.0 +0200
@@ -4022,14
From: Dmitry Adamushko <[EMAIL PROTECTED]>
o Fixed a mail client (shouldn't be white-space damaged now);
o Andrew, a patch against SD will follow.
---
o Make TASK_PREEMPTS_CURR(task, rq) return "true" only if the task's
prio is higher than the current's one
d SD (besides a 2.6.13 mainline, but I have to re-run all
the tests letting tenp run longer for the "sched_time" to be
accumulated)...
--
Best regards,
Dmitry Adamushko
is also called from timer_ISR ->
update_process_times() like scheduler_tick(). So if the task's running
intervals are shorter than 1/HZ, it's not always accounted --> so cpu%
may be wrong for such a task...
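(A quick worked example of my own, not from the report: with HZ=250 a tick
is 4 ms; a task that runs for ~1 ms out of every 10 ms but always happens to
sleep across the tick boundaries is charged 0 ticks, so its cpu% shows ~0
even though it really consumes ~10% of a CPU.)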
--
Best regards,
Dmitry Adamushko
n" time interval related issue. Like
described in (*). Or maybe we both observe similar situations but have
different reasons behind them.
-Mike
--
Best regards,
Dmitry Adamushko
On 07/04/07, Andrew Morton <[EMAIL PROTECTED]> wrote:
On Wed, 4 Apr 2007 22:05:40 +0200 "Dmitry Adamushko"
> [...]
>
> o Make TASK_PREEMPTS_CURR(task, rq) return "true" only if the task's
> prio is higher than the current's one and the task is
r_nice(), rt_mutex_setprio() and sched_setscheduler()
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
--
--- linux-2.6.21-rc5/kernel/sched-orig.c	2007-04-04 18:26:19.0 +0200
+++ linux-2.6.21-rc5/kernel/sched.c 2007-04-04 18:26:43.0 +0200
@@ -168,7 +168,7 @@
On 04/04/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
* Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> [...]
>
> The same is applicable to rt_mutex_setprio().
>
> Of course, not a big deal, but it's easily avoidable, e.g. (delta < 0
> && array == r