Re: [PATCH 0/2] [RFC] Volatile ranges (v4)

2012-07-21 Thread Dmitry Adamushko
[ cc: lkml ] >> > There is a property of shadow memory that I would like to exploit >> > - any region of shadow memory can be reset to zero at any point >> > w/o any bad consequences (it can lead to missed data >> > races, but it's better than OOM kill). >> > I've tried to execute madvise(MADV_DO

Re: + kthread-add-a-missing-memory-barrier-to-kthread_stop.patch added to -mm tree

2008-02-23 Thread Dmitry Adamushko
On 23/02/2008, Linus Torvalds <[EMAIL PROTECTED]> wrote: > > On Sat, 23 Feb 2008, Dmitry Adamushko wrote: > > > > it's not a LOAD that escapes *out* of the region. It's a MODIFY that gets > *in*: > > > Not with the smp_wmb(). That's t

Re: + kthread-add-a-missing-memory-barrier-to-kthread_stop.patch added to -mm tree

2008-02-23 Thread Dmitry Adamushko
gets *in*: (1) MODIFY(a); LOCK LOAD(b); UNLOCK can become: (2) LOCK MODIFY(a) LOAD(b); UNLOCK and (reordered) (3) LOCK LOAD(b) MODIFY(a) UNLOCK and this last one is a problem. No? > > Linus > -- Best regards, Dmitry Adamushko

Re: + kthread-add-a-missing-memory-barrier-to-kthread_stop.patch added to -mm tree

2008-02-23 Thread Dmitry Adamushko
he _reverse_ order: condition = new; smp_mb(); try_to_wake_up(); => (1) MODIFY(condition); (2) LOAD(current->state) try_to_wake_up() does not need to be a full mb per se, the only requirement (and only for situations like the above) is that there is a full mb between possible write ops. that
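
To make the ordering requirement concrete, a minimal sketch of the waker/sleeper pairing being discussed (names such as 'condition' and 'sleeper_task' are placeholders, not the actual kthread.c code):

    /* waker */
    condition = 1;
    smp_mb();                       /* publish the condition before the wakeup */
    wake_up_process(sleeper_task);

    /* sleeper */
    for (;;) {
        set_current_state(TASK_INTERRUPTIBLE);
        if (condition)
            break;
        schedule();
    }
    __set_current_state(TASK_RUNNING);

If the waker's store can pass the wakeup, or the sleeper's state change can pass its re-check of the condition, the sleeper may go back to sleep having missed the event -- exactly the window discussed in this thread.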

Re: [PATCH sched-devel 0/7] CPU isolation extensions

2008-02-22 Thread Dmitry Adamushko
CPUs + when Stop Machine is triggered. Stop Machine is currently only + used by the module insertion and removal. this "only" part. What about e.g. a 'cpu hotplug' case (_cpu_down())? (or we should abstract it a bit to the point that e.g. a cpu can be considered as

[PATCH 2/2] kthread: call wake_up_process() without the lock being held

2008-02-20 Thread Dmitry Adamushko
From: Dmitry Adamushko <[EMAIL PROTECTED]> Subject: kthread: call wake_up_process() without the lock being held - from the POV of synchronization, there should be no need to call wake_up_process() with the 'kthread_create_lock' being held; - moreover, in order to su

[PATCH 1/2] kthread: add a missing memory barrier to kthread_stop()

2008-02-20 Thread Dmitry Adamushko
From: Dmitry Adamushko <[EMAIL PROTECTED]> Subject: kthread: add a missing memory barrier to kthread_stop() We must ensure that kthread_stop_info.k has been updated before kthread's wakeup. This is required to properly support the use of kthread_should_stop() in the main loop
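
The shape of the fix under discussion, using the names that appear in these snippets (the surrounding code is paraphrased, not the applied patch):

    kthread_stop_info.k = k;
    smp_mb();        /* the missing barrier: publish .k before the wakeup,
                        so kthread_should_stop() sees it once woken */
    wake_up_process(k);
    wait_for_completion(&kthread_stop_info.done);
    ret = kthread_stop_info.err;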

[PATCH 0/2] kthread: synchronization issues

2008-02-20 Thread Dmitry Adamushko
2/2] kthread: call wake_up_process() without the lock being held --- (this one is from Ingo's sched-devel tree) softlockup: fix task state setting kthread_stop() can be called when a 'watchdog' thread is executing after kthread_should_stop() but before set_task_state(TASK_INTERR

Re: [PATCH] Tasklets: Avoid duplicating __tasklet_{,hi_}schedule() code

2008-02-20 Thread Dmitry Adamushko
ocation I'm able to find is from local_bh_enable() and > from ksoftirqd/n threads (by calling do_softirq()). AFAIK, both > invocations occur in a _non-interrupt_ context (exception context). > > So, where does the interrupt-context tasklets invocation really > occur ? Look at irq_exit() in softirq.c. The common sequence is ... -> do_IRQ() --> irq_exit() --> invoke_softirq() -- Best regards, Dmitry Adamushko

Re: [PATCH, RFC] kthread: (possibly) a missing memory barrier in kthread_stop()

2008-02-19 Thread Dmitry Adamushko
spin_lock(&kthread_create_lock); list_add_tail(&create.list, &kthread_create_list); - wake_up_process(kthreadd_task); spin_unlock(&kthread_create_lock); + wake_up_process(kthreadd_task); wait_for_completion(&create.done); -- Best regards,

Re: [PATCH, RFC] kthread: (possibly) a missing memory barrier in kthread_stop()

2008-02-19 Thread Dmitry Adamushko
; < our main loop is inside this function /* It might have exited on its own, w/o kthread_stop. Check. */ if (kthread_should_stop()) { kthread_stop_info.err = ret; complete(&kthread_stop_info.done); } return 0; }

Re: [PATCH, RFC] kthread: (possibly) a missing memory barrier in kthread_stop()

2008-02-19 Thread Dmitry Adamushko
ocumented as part of this patch. Finally, I think the comment as is is > hard to understand I got the sense of it backwards on first reading; > perhaps something like this: > > /* > * Ensure kthread_stop_info.k is visible before wakeup, paired > * with barri

Re: [PATCH, RFC] kthread: (possibly) a missing memory barrier in kthread_stop()

2008-02-19 Thread Dmitry Adamushko
On 19/02/2008, Peter Zijlstra <[EMAIL PROTECTED]> wrote: > [ ... ] > > > > > > From: Dmitry Adamushko <[EMAIL PROTECTED]> > > > Subject: kthread: add a memory barrier to kthread_stop() > > > > > > 'kthread' threads do a chec

[PATCH, RFC] kthread: (possibly) a missing memory barrier in kthread_stop()

2008-02-18 Thread Dmitry Adamushko
- set_current_state(TASK_INTERRUPTIBLE); - kthread_should_stop(); here, kthread_stop_info.k is not yet visible - schedule() ... we missed a 'kthread_stop' event. hum? TIA, --- From: Dmitry Adamushko <[EMAIL PROTECTED]> Subject: kthread: add a memory barrie

Re: [Regression] 2.6.24-git9: RT sched mishandles artswrapper (bisected)

2008-02-05 Thread Dmitry Adamushko
resched+0x31/0x40 > [] wait_for_common+0x34/0x170 > [] ? try_to_wake_up+0x77/0x200 > [] wait_for_completion+0x18/0x20 > [ ... ] does a stack trace always look like this? -- Best regards, Dmitry Adamushko

Re: latencytop: optimize LT_BACKTRACEDEPTH loops a bit

2008-02-03 Thread Dmitry Adamushko
On 03/02/2008, Arjan van de Ven <[EMAIL PROTECTED]> wrote: > Dmitry Adamushko wrote: > > Subject: latencytop: optimize LT_BACKTRACEDEPTH loops a bit. > > > > It looks like there is no need to loop any longer when 'same == 0'. > > thanks for the contri

latencytop: optimize LT_BACKTRACEDEPTH loops a bit

2008-02-02 Thread Dmitry Adamushko
Subject: latencytop: optimize LT_BACKTRACEDEPTH loops a bit. It looks like there is no need to loop any longer when 'same == 0'. Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]> diff --git a/kernel/latencytop.c b/kernel/latencytop.c index b4e3c85..61f7da0 100644 --- a/ker

Re: [Regression] 2.6.24-git3: Major annoyance during suspend/hibernation on x86-64 (bisected)

2008-02-01 Thread Dmitry Adamushko
On 02/02/2008, Ingo Molnar <[EMAIL PROTECTED]> wrote: > > * Dmitry Adamushko <[EMAIL PROTECTED]> wrote: > > > yeah, I was already on a half-way to check it out. > > > > It does fix a problem for me. > > > > Don't forget to take along these

Re: [Regression] 2.6.24-git3: Major annoyance during suspend/hibernation on x86-64 (bisected)

2008-02-01 Thread Dmitry Adamushko
On 01/02/2008, Ingo Molnar <[EMAIL PROTECTED]> wrote: > > * Dmitry Adamushko <[EMAIL PROTECTED]> wrote: > > > > I've observed delays from ~3 s. up to ~8 s. (out of ~20 tests) so > > > the 10s. delay of msleep_interruptible() might be related but I'

Re: [Regression] 2.6.24-git3: Major annoyance during suspend/hibernation on x86-64 (bisected)

2008-02-01 Thread Dmitry Adamushko
On 01/02/2008, Dmitry Adamushko <[EMAIL PROTECTED]> wrote: > On 01/02/2008, Ingo Molnar <[EMAIL PROTECTED]> wrote: > > > > thanks - i cannot reproduce it on my usual suspend/resume testbox > > because e1000 broke on it, and this is a pretty annoying regression

Re: [Regression] 2.6.24-git3: Major annoyance during suspend/hibernation on x86-64 (bisected)

2008-02-01 Thread Dmitry Adamushko
ine real0m7.770s I've observed delays from ~3 s. up to ~8 s. (out of ~20 tests) so the 10s. delay of msleep_interruptible() might be related but I'm still looking for the reason why this fix helps (and what goes wrong with the current code). > > Ingo > -- Best re

Re: [Regression] 2.6.24-git3: Major annoyance during suspend/hibernation on x86-64 (bisected)

2008-02-01 Thread Dmitry Adamushko
uptible(1) timeouts? On average, it would take +-5 sec. and might explain the first observation of Rafael -- "...adds a 5 - 10 sec delay..." (although, lately he reported up to +30 sec. delays). (/me going to also try reproducing it later today) > [ ... ] -- Best regards, Dmitry Ad

Re: [Regression] 2.6.24-git3: Major annoyance during suspend/hibernation on x86-64 (bisected)

2008-01-28 Thread Dmitry Adamushko
tasks > > Reverting this commit (it reverts with some minor modifications) fixes the > problem for me. What if you use the same kernel that triggers a problem and just disable this new 'softlockup' functionality: echo 0 > /proc/sys/kernel/hung_task_timeout_secs does the problem disappear? TIA, > > Thanks, > Rafael > -- Best regards, Dmitry Adamushko

Re: [patch 1/3] LatencyTOP infrastructure patch

2008-01-20 Thread Dmitry Adamushko
On 20/01/2008, Dmitry Adamushko <[EMAIL PROTECTED]> wrote: > Hello Arjan, > > a few comments on the current locking scheme. heh... now having read the first message in this series ("[Announce] Development release 0.1 of the LatencyTOP tool"), I finally see that "fin

Re: [patch 1/3] LatencyTOP infrastructure patch

2008-01-20 Thread Dmitry Adamushko
*m, void *v) > +{ > + int i; > + struct task_struct *task = m->private; > + seq_puts(m, "Latency Top version : v0.1\n"); > + > + for (i = 0; i < 32; i++) { > + if (task->latency_record[i].reason) for (i = 0; i <

Re: [PATCH 1/4] Replace hooks with pre/post schedule and wakeup methods

2007-12-11 Thread Dmitry Adamushko
cpu, rq); is called from sched_class_fair :: load_balance_fair() upon getting a PRE_SCHEDULE load-balancing point. IMHO, it would look nicer this way _BUT_ yeah, this 'full' abstraction adds additional overhead to the hot-path (which might make it not that worthy). -- Best regards, Dmitr

[PATCH, cleaun-up git-sched 2/3] get rid of 'new_cpu' in try_to_wake_up()

2007-12-09 Thread Dmitry Adamushko
From: Dmitry Adamushko <[EMAIL PROTECTED]> Clean-up try_to_wake_up(). Get rid of the 'new_cpu' variable in try_to_wake_up() [ that's one #ifdef section less ]. Also remove a few redundant blank lines. Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]> --- di

[PATCH git-sched 1/3] no need for 'affine wakeup' balancing in select_task_rq_fair() when task_cpu(p) == this_cpu

2007-12-09 Thread Dmitry Adamushko
From: Dmitry Adamushko <[EMAIL PROTECTED]> No need to do a check for 'affine wakeup and passive balancing possibilities' in select_task_rq_fair() when task_cpu(p) == this_cpu. I guess, this part got missed upon introduction of per-sched_class select_task_rq() in try_to_wake_up()

RT Load balance changes in sched-devel

2007-12-09 Thread Dmitry Adamushko
queue/dequeue() interface would become less straightforward, logically-wise. Something like: rq = activate_task(rq, ...) ; /* may unlock rq and lock/return another one */ would complicate the existing use cases. -- Best regards, Dmitry Adamushko

Re: High priority tasks break SMP balancer?

2007-11-27 Thread Dmitry Adamushko
ven (at least, as an experiment) fix them to different CPUs as well? sure, the scenario is highly dependent on a nature of those 'events'... and I can just speculate here :-) (but I'd imagine situations when such a scenario would scale better). > > Thank you again, > --Mica

Re: [PATCH] sched: minor optimization

2007-11-23 Thread Dmitry Adamushko
q->nr_running == 0) return idle_sched_class.pick_next_task(rq); at the beginning of pick_next_task(). (or maybe put it at the beginning of the if (likely(rq->nr_running == rq->cfs.nr_running)) {} block as we already have 'likely()' there). -- Best regards, Dmitry Adamushko
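
For context, roughly how pick_next_task() looked in that era, with the early-idle check suggested above folded in (reconstructed from memory -- a sketch, not the exact source):

    static inline struct task_struct *
    pick_next_task(struct rq *rq, struct task_struct *prev)
    {
        const struct sched_class *class;
        struct task_struct *p;

        /* suggested addition: nothing runnable, go straight to the idle class */
        if (unlikely(!rq->nr_running))
            return idle_sched_class.pick_next_task(rq);

        /* existing fast path: every runnable task is a CFS task */
        if (likely(rq->nr_running == rq->cfs.nr_running)) {
            p = fair_sched_class.pick_next_task(rq);
            if (likely(p))
                return p;
        }

        class = sched_class_highest;
        for (;;) {
            p = class->pick_next_task(rq);
            if (p)
                return p;
            class = class->next;    /* the idle class always returns a task */
        }
    }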

Re: High priority tasks break SMP balancer?

2007-11-22 Thread Dmitry Adamushko
On 22/11/2007, Micah Dowty <[EMAIL PROTECTED]> wrote: > On Tue, Nov 20, 2007 at 10:47:52PM +0100, Dmitry Adamushko wrote: > > btw., what's your system? If I recall right, SD_BALANCE_NEWIDLE is on > > by default for all configs, except for NUMA nodes. > > It&#

Re: High priority tasks break SMP balancer?

2007-11-20 Thread Dmitry Adamushko
ache_nice_tries", &sd->cache_nice_tries, sizeof(int), 0644, proc_dointvec_minmax); - set_table_entry(&table[12], "flags", &sd->flags, + set_table_entry(&table[10], "flags", &sd->flags, sizeof(int), 064

Re: High priority tasks break SMP balancer?

2007-11-19 Thread Dmitry Adamushko
ke any difference? moreover, /proc/sys/kernel/sched_domain/cpu1/domain0/newidle_idx seems to be responsible for a source of the load for calculating the busiest group. e.g. with newidle_idx == 0, the current load on the queue is used instead of cpu_load[]. > > Thanks, > --Micah > -- Bes

Re: High priority tasks break SMP balancer?

2007-11-17 Thread Dmitry Adamushko
cat /proc/schedstat ... wait either a few seconds or until the problem disappears (whatever comes first) # cat /proc/schedstat TIA, > > --Micah > -- Best regards, Dmitry Adamushko

Re: High priority tasks break SMP balancer?

2007-11-16 Thread Dmitry Adamushko
. yep, that's what load_balance_newidle() is about... so maybe there are some factors resulting in its inconsistency/behavioral differences on different kernels. Let's say we change a pattern for the niced task: e.g. run for 100 ms. and then sleep for 300 ms. (that's ~25% of cpu
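
A small userspace generator for the suggested pattern (run ~100 ms, then sleep ~300 ms, i.e. roughly 25% of one CPU) -- only an illustration of the test case described above, not a tool referenced in the thread:

    #include <time.h>
    #include <unistd.h>

    static void burn_ms(long ms)
    {
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
            clock_gettime(CLOCK_MONOTONIC, &now);
        } while ((now.tv_sec - start.tv_sec) * 1000L +
                 (now.tv_nsec - start.tv_nsec) / 1000000L < ms);
    }

    int main(void)
    {
        for (;;) {
            burn_ms(100);       /* busy for ~100 ms */
            usleep(300000);     /* sleep for ~300 ms */
        }
    }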

Re: High priority tasks break SMP balancer?

2007-11-16 Thread Dmitry Adamushko
en not seen (i.e. don't contribute to cpu_load[]) on cpu_0... we do sampling every tick (sched.c :: update_cpu_load()) and consider this_rq->ls.load.weight at this particular moment (that is the sum of 'weights' for all runnable tasks on this rq)... and it may well be that the afo
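
The sampling effect can be seen with a toy model: cpu_load[i] is a per-tick decaying average of the runqueue weight, with a longer memory for larger i. The update rule below follows the general form of the 2.6.2x code (from memory, so treat it as a sketch); a task that happens to be asleep whenever the tick fires contributes little to the sampled load even though it burns CPU in between:

    #include <stdio.h>

    int main(void)
    {
        unsigned long cpu_load[3] = { 0, 0, 0 };
        /* runqueue weight as seen at each tick: the bursty task is
           asleep (weight 0) on every other sample */
        unsigned long sample[] = { 1024, 0, 1024, 0, 1024, 0, 1024, 0 };

        for (int t = 0; t < 8; t++) {
            for (int i = 0; i < 3; i++) {
                unsigned long scale = 1UL << i;
                cpu_load[i] = (cpu_load[i] * (scale - 1) + sample[t]) >> i;
            }
            printf("tick %d: cpu_load = { %lu, %lu, %lu }\n",
                   t, cpu_load[0], cpu_load[1], cpu_load[2]);
        }
        return 0;
    }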

Re: Divide-by-zero in the 2.6.23 scheduler code

2007-11-14 Thread Dmitry Adamushko
em with low HZ value (I can't see the kernel config immediately on the bugzilla page) and a task niced to the lowest priority (is this 'kjournald' mentioned in the report of lower prio? ) running for a full tick, 'tmp' can be such a big value... hmm? -- Best regards, Dmitry

Re: Strange delays / what usually happens every 10 min?

2007-11-13 Thread Dmitry Adamushko
maximum 'delays' you see before hitting this "once in 10 minutes" point? say, with 256 Mb. the blips could just become lower (e.g. 2 ms.) and are not reported as "big ones" (>5 ms. in your terms)... Quite often the source of high periodic latency is SMI (System Ma

Re: Race in setup_irq?

2007-11-11 Thread Dmitry Adamushko
our code and device spec.)... --> ISR runs and due to some error e.g. loops endlessly/deadlocks/etc. Tried placing printk() at the beginning of ISR? -- Best regards, Dmitry Adamushko

Re: [BUG]: Crash with CONFIG_FAIR_CGROUP_SCHED=y

2007-11-09 Thread Dmitry Adamushko
, cfs_rq->curr can be NULL > for the child. Would it be better, logically-wise, to use is_same_group() instead? Although, we can't have 2 groups with cfs_rq->curr != NULL on the same CPU... so if the child belongs to another group, its cfs_rq->curr is automatically NULL indeed.

Re: [BUG]: Crash with CONFIG_FAIR_CGROUP_SCHED=y

2007-11-09 Thread Dmitry Adamushko
'running?' tests to be separate. Humm... the 'current' is not kept within the tree but current->se.on_rq is supposed to be '1', so the old code looks ok to me (at least for the 'leaf' elements). Maybe you were able to get a more useful oops on your side? > -

Re: [patch 6/8] pull RT tasks

2007-10-22 Thread Dmitry Adamushko
though, there can be some disadvantages here as well. e.g. we would likely need to remove 'max 3 tasks at once' limit and get, theoretically, unbounded time spent in push_rt_tasks() on a single CPU). > > -- Steve > -- Best regards, Dmitry Adamushko

Re: [2.6.23] tasks stuck in running state?

2007-10-21 Thread Dmitry Adamushko
On 21/10/2007, Dmitry Adamushko <[EMAIL PROTECTED]> wrote: > On 20/10/2007, Jeff Garzik <[EMAIL PROTECTED]> wrote: > > Chuck Ebbert wrote: > > > On 10/19/2007 05:39 PM, Jeff Garzik wrote: > > >> On my main devel box, vanilla 2.6.23 on x86-64/Fedora-7, I&#

Re: [2.6.23] tasks stuck in running state?

2007-10-21 Thread Dmitry Adamushko
s of /proc/PID/stat is the CPU# on which the task is currently active. Let's start a busy-loop task on this cpu and see whether it's able to make any progress (TIME counter in 'ps') # taskset -c THIS_CPU some_busy_looping_prog TIA, -- Best regards, Dmitry Adamushko

Re: [patch 6/8] pull RT tasks

2007-10-21 Thread Dmitry Adamushko
pushing/pulling 1 task at once (as described above)... any additional actions are just overhead or there is some problem with the algorithm (ah well, or with my understanding :-/ ) -- Best regards, Dmitry Adamushko

Re: [patch 6/8] pull RT tasks

2007-10-21 Thread Dmitry Adamushko
be used in pull_rt_task() for the 'next' in a similar way as it's done in push_rt_task() . > > [ ... ] > -- Best regards, Dmitry Adamushko

Re: [patch 2/8] track highest prio queued on runqueue

2007-10-20 Thread Dmitry Adamushko
d then, static struct task_struct *pick_next_task_rt(struct rq *rq) { struct rt_prio_array *array = &rq->rt.active; struct task_struct *next; struct list_head *queue; int idx; -idx = sched_find_first_bit(array->bitmap); + rq->highest_prio

Re: [patch 1/8] Add rt_nr_running accounting

2007-10-20 Thread Dmitry Adamushko
st, array->queue + p->prio); > __set_bit(p->prio, array->bitmap); > + > + inc_rt_tasks(p, rq); why do you need the rt_task(p) check in {inc,dec}_rt_tasks() ? {enqueue,dequeue}_task_rt() seem to be the only callers and they will crash (or corrupt memory) anyway in t

Re: [git pull] scheduler updates for v2.6.24

2007-10-16 Thread Dmitry Adamushko
ted earlier today (crash in put_prev_task_fair() --> __enqueue_task() --> rb_insert_color()) that you are already aware of ... (/me will continue tomorrow). -- Best regards, Dmitry Adamushko

Re: [git pull] scheduler updates for v2.6.24

2007-10-16 Thread Dmitry Adamushko
threads (like events/cpu, ksoftirq/cpu, etc.) being created on start-up (x NUMBER_OF_CPUS) and SD_SCHED_FORK (actually, sched_balance_self() from sched_fork()) is just an overhead in this case... although, sched_balance_self() is likely to be responsible for a minor % of the time taken to crea

Re: [RFC][PATCH] sched: SCHED_FIFO watchdog timer

2007-10-15 Thread Dmitry Adamushko
te_task(rq, p, 0); > + __setscheduler(rq, p, SCHED_NORMAL, 0); > + activate_task(rq, p, 0); > + resched_task(p); I guess, put_prev_task() / set_curr_task() should be called (for the case of task_running(p)) to make it group-scheduler-friendly (as it's done e.g. in sched_setscheduler(

Re: [PATCH] sched: Rationalize sys_sched_rr_get_interval()

2007-10-11 Thread Dmitry Adamushko
s::task_timeslice() but decided it was not worth it. -- Best regards, Dmitry Adamushko

Re: Network slowdown due to CFS

2007-10-03 Thread Dmitry Adamushko
On 03/10/2007, Dmitry Adamushko <[EMAIL PROTECTED]> wrote: > On 03/10/2007, Jarek Poplawski <[EMAIL PROTECTED]> wrote: > > I can't see anything about clearing. I think, this was about charging, > > which should change the key enough, to move a task to, maybe, a

Re: Network slowdown due to CFS

2007-10-03 Thread Dmitry Adamushko
a_exec_weighted; + } + return; } + /* * Find the rightmost entry in the rbtree: */ > > Jarek P. > -- Best regards, Dmitry Adamushko --- sched_fair-old.c 2007-10-03 12:45:17.010306000 +0200 +++ sched_fair.c 2007-10-03 12:44:46.89985100

Re: [git] CFS-devel, latest code

2007-10-02 Thread Dmitry Adamushko
The following patch (sched: disable sleeper_fairness on SCHED_BATCH) seems to break GROUP_SCHED. Although, it may be 'oops'-less due to the possibility of 'p' being always a valid address. Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]> --- diff --git a/ke

Re: [git] CFS-devel, latest code

2007-10-02 Thread Dmitry Adamushko
On 01/10/2007, Ingo Molnar <[EMAIL PROTECTED]> wrote: > > * Dmitry Adamushko <[EMAIL PROTECTED]> wrote: > > > here is a few patches on top of the recent 'sched-dev': > > > > (1) [ proposal ] make timeslices of SCHED_RR tasks constant and no

Re: [git] CFS-devel, latest code

2007-09-30 Thread Dmitry Adamushko
and this one, make dequeue_entity() / enqueue_entity() and update_stats_dequeue() / update_stats_enqueue() look similar, structure-wise. zero effect, functionally-wise. Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]> --- diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c index 2

Re: [git] CFS-devel, latest code

2007-09-30 Thread Dmitry Adamushko
remove obsolete code -- calc_weighted() Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]> --- diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c index fe4003d..2674e27 100644 --- a/kernel/sched_fair.c +++ b/kernel/sched_fair.c @@ -342,17 +342,6 @@ update_stats_wait_start(struct

Re: [git] CFS-devel, latest code

2007-09-30 Thread Dmitry Adamushko
40 51727ca0f ../build/kernel/sched.o.before 465535102 40 51695c9ef ../build/kernel/sched.o yeah, this seems to require task_rq_lock/unlock() but this is not a hot path. what do you think? (compiles well, not functionally tested yet) Almost-Signed-off-by: Dmitry Adamushko <[EMAI

Re: [git] CFS-devel, latest code

2007-09-25 Thread Dmitry Adamushko
humm... I think, it'd be safer to have something like the following change in place. The thing is that __pick_next_entity() must never be called when first_fair(cfs_rq) == NULL. It wouldn't be a problem, should 'run_node' be the very first field of 'struct sched_entity' (and it's the second). Th
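
Why the empty-tree case bites: rb_entry() is container_of(), so applied to a NULL rb_node it yields a bogus non-NULL pointer unless the node happens to be the first member. A standalone illustration with a stand-in struct (the real sched_entity layout is only paraphrased here):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    struct rb_node_like { struct rb_node_like *left, *right; };

    struct entity {
        unsigned long load;              /* some first member */
        struct rb_node_like run_node;    /* the tree node is the *second* member */
    };

    int main(void)
    {
        uintptr_t node = 0;    /* what first_fair() returns for an empty cfs_rq */
        struct entity *se =
            (struct entity *)(node - offsetof(struct entity, run_node));

        /* non-NULL garbage: a NULL check on 'se' would not catch the empty tree */
        printf("se = %p\n", (void *)se);
        return 0;
    }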

Re: [PATCH] sched: cleanup adjusting sched_class

2007-09-24 Thread Dmitry Adamushko
ady existing > "do not leak PI boosting priority to the child" at the sched_fork(). > This patch moves the adjusting sched_class from wake_up_new_task() > to sched_fork(). > > Signed-off-by: Hiroshi Shimamoto <[EMAIL PROTECTED]> Signed-off-by: Dmitry Adamushko

Re: [PATCH] sched: fix to use invalid sched_class

2007-09-19 Thread Dmitry Adamushko
> > Hi Ingo, > > > > I found an issue about the scheduler. > > If you need a test case, please let me know. > > Here is a patch. > > [ ... ] > > The new thread should be valid scheduler class before queuing. > > This patch fixes to set the suitable scheduler class. > > Nice fix! It's a 2.6.23 mus

Re: [git] CFS-devel, group scheduler, fixes

2007-09-18 Thread Dmitry Adamushko
(3) rework enqueue/dequeue_entity() to get rid of sched_class::set_curr_task(). This simplifies sched_setscheduler(), rt_mutex_setprio() and sched_move_tasks(). Signed-off-by : Dmitry Adamushko <[EMAIL PROTECTED]> Signed-off-by : Srivatsa Vaddagiri <[EMAIL PROTECTED]> --- diff --g

[git] CFS-devel, group scheduler, fixes

2007-09-18 Thread Dmitry Adamushko
(2) the 'p' (task_struct) parameter in the sched_class :: yield_task() is redundant as the caller is always the 'current'. Get rid of it. Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]> --- diff --git a/include/linux/sched.h b/include/linux/sched.h index 9fd

Re: [PATCH] Hookup group-scheduler with task container infrastructure

2007-09-12 Thread Dmitry Adamushko
se->load.weight = shares; > + se->load.inv_weight = div64_64((1ULL<<32), shares); A bit of nit-picking... are you sure there is no need for the non-'__' versions of dequeue/enqueue() here (at least, for the sake of update_curr())? Although, I don't have -mm at hand a

Re: [PATCH] Hookup group-scheduler with task container infrastructure

2007-09-11 Thread Dmitry Adamushko
enqueue_task(rq, tsk, 0); if (unlikely(running) && tsk->sched_class == &fair_sched_class) tsk->sched_class->set_curr_task(rq); } task_rq_unlock(rq, &flags); } > -- > Regards, > vatsa > -- Best regards,

Re: [PATCH] Hookup group-scheduler with task container infrastructure

2007-09-10 Thread Dmitry Adamushko
extra typing is worth it ;) > > Ok! Here's the modified patch (against 2.6.23-rc4-mm1). as everyone seems to be in a quest for a better name... I think, the obvious one would be just 'group_sched'. -- Best regards, Dmitry Adamushko

Re: [PATCH] Hookup group-scheduler with task container infrastructure

2007-09-10 Thread Dmitry Adamushko
ask_cfs_rq(tsk); > + > + if (on_rq) > +activate_task(rq, tsk, 0); > + > + if (unlikely(rq->curr == tsk) && tsk->sched_class == > &fair_sched_class) > + tsk->sched_class->set_curr_task(rq); > + > +

Re: Question: sched_rt.c : is RT check needed within a RT func? dequeue_task_rt() calls update_curr_rt() which checks for priority of RR or FIFO :

2007-08-09 Thread Dmitry Adamushko
uess, a patch would be welcomed :-) > > Mitchell Erblich > -- Best regards, Dmitry Adamushko

Re: Question: RT schedular : task_tick_rt(struct rq *rq, struct task_struct *p) : decreases overhead when rq->nr_running == 1

2007-08-09 Thread Dmitry Adamushko
On 09/08/07, Ingo Molnar <[EMAIL PROTECTED]> wrote: > > FYI, that's the patch i applied: Thanks. Added my SOB below. > > ---> > Subject: sched: optimize task_tick_rt() a bit > From: Dmitry Adamushko <[EMAIL PROTECTED]> > >

Re: Question : sched_rt.c : Loss of stats?? requeue_task_rt() does not call update_curr_rt() which updates stats

2007-08-09 Thread Dmitry Adamushko
hat may make accounting a bit less precise for (a). -- Best regards, Dmitry Adamushko

Re: Question: RT schedular : task_tick_rt(struct rq *rq, struct task_struct *p) : decreases overhead when rq->nr_running == 1

2007-08-09 Thread Dmitry Adamushko
ice(p->static_prio); + + /* We are the only element on the queue. */ + if (p->run_list.prev == p->run_list.next) + return; + set_tsk_need_resched(p); /* put it at the end of the queue: */ -- Best regards, Dmitry Adamushko
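
Pieced together from the fragments above, the function would read roughly as follows (helper names assumed from the 2.6.23-era code; a sketch, not the applied patch):

    static void task_tick_rt(struct rq *rq, struct task_struct *p)
    {
        /* RR tasks need time-slice management; FIFO tasks have no time slices */
        if (p->policy != SCHED_RR)
            return;

        if (--p->time_slice)
            return;

        p->time_slice = static_prio_timeslice(p->static_prio);

        /* We are the only element on the queue -- nothing to round-robin. */
        if (p->run_list.prev == p->run_list.next)
            return;

        set_tsk_need_resched(p);

        /* put it at the end of the queue: */
        requeue_task_rt(rq, p);
    }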

Re: Volanomark slows by 80% under CFS

2007-07-28 Thread Dmitry Adamushko
' just from the thin air. e.g. sleepers get an additional bonus to their 'wait_runtime' upon a wakeup _but_ the amount of "wait_runtime" == "a given bonus" will be additionally subtracted from tasks which happen to run later on (grep for "sleeper_bonus

Re: Volanomark slows by 80% under CFS

2007-07-28 Thread Dmitry Adamushko
f 80%) down from old scheduler > without CFS. 40 or 80 % is still a huge regression. > > Regards, > Tim > -- Best regards, Dmitry Adamushko

Re: [patch] CFS scheduler, -v18

2007-07-02 Thread Dmitry Adamushko
application, 10 sec. after it's got started, 1 minute, a few minutes... http://people.redhat.com/mingo/cfs-scheduler/tools/cfs-debug-info.sh then send us the resulting files. TIA, -- Best regards, Dmitry Adamushko

Re: [RFC][PATCH 5/6] core changes for group fairness

2007-06-13 Thread Dmitry Adamushko
ove) '(long)rem_load_move <= 0' and I think somewhere else in the code. -- Regards, vatsa -- Best regards, Dmitry Adamushko

Re: [RFC][PATCH 4/6] Fix (bad?) interactions between SCHED_RT and SCHED_NORMAL tasks

2007-06-12 Thread Dmitry Adamushko
t say, user's task becomes finally active after _a lot_ of inactive ticks (the user came back).. now it's in the rq and waiting for its turn (which can be easily > 1 tick).. in the mean time 'load balancing' is triggered.. and it considers the old lrq :: cpu_load[] ... P.S. ju

Re: [RFC][PATCH 4/6] Fix (bad?) interactions between SCHED_RT and SCHED_NORMAL tasks

2007-06-12 Thread Dmitry Adamushko
On 12/06/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote: On Tue, Jun 12, 2007 at 11:03:36AM +0200, Dmitry Adamushko wrote: > I had an idea of per-sched-class 'load balance' calculator. So that > update_load() (as in your patch) would look smth like : > > ...

Re: [RFC][PATCH 4/6] Fix (bad?) interactions between SCHED_RT and SCHED_NORMAL tasks

2007-06-12 Thread Dmitry Adamushko
internally in update_load_fair()) ... but again, I'll come up with some code for further discussion. -- Regards, vatsa -- Best regards, Dmitry Adamushko

Re: [patch] CFS scheduler, -v15

2007-06-06 Thread Dmitry Adamushko
_prev_task() from __setscheduler()? But it's not supposed to be called from here, logically-wise. You just rely on its current behavior (which is only about updating 'exec_start' and 'exec_sum') -- that's just bad. Maybe I misunderstood your intention though.. -- Reg

Re: [patch] CFS scheduler, -v15

2007-06-06 Thread Dmitry Adamushko
What do you think? sched_setscheduler() { ... on_rq = p->on_rq; if (on_rq) deactivate_task(rq, p, 0); oldprio = p->prio; __setscheduler(rq, p, policy, param->sched_priority); if (on_rq) { activate_task(rq, p, 0); ... -- R

Re: Interesting interaction between lguest and CFS

2007-06-05 Thread Dmitry Adamushko
On 05/06/07, Dmitry Adamushko <[EMAIL PROTECTED]> wrote: > now at 257428593818894 nsecs > > cpu: 0 > .nr_running: 3 > .raw_weighted_load : 2063 > .nr_switches : 242830075 > .nr_load_updates : 30172063 > .nr_unint

Re: Interesting interaction between lguest and CFS

2007-06-05 Thread Dmitry Adamushko
k_max" reported earlier.. I guess, sched_clock() is tsc-based in your case? Any way to get it switched to jiffies-based one and repeat the test? -- Best regards, Dmitry Adamushko

Re: [patch] CFS scheduler, -v12

2007-05-22 Thread Dmitry Adamushko
if (interval > HZ*NR_CPUS/10) interval = HZ*NR_CPUS/10; so it can't be > 0.2 HZ in your case (== once in 200 ms at max with HZ=1000).. am I missing something? TIA Peter -- Best regards, Dmitry Adamushko

Re: [patch] CFS scheduler, -v12

2007-05-21 Thread Dmitry Adamushko
to the recent kernel) and so far for me it's mainly about getting sure I see things sanely. -- Best regards, Dmitry Adamushko

Re: [patch] CFS scheduler, -v12

2007-05-19 Thread Dmitry Adamushko
's stupid? Peter -- Peter Williams [EMAIL PROTECTED] -- Best regards, Dmitry Adamushko

Re: Definition of fairness (was Re: [patch] CFS scheduler, -v11)

2007-05-09 Thread Dmitry Adamushko
's different in your case (besides I don't claim that everything is ok with SD and CFS) but I guess this could be one of the reasons. At least, fork()-ing the second process (with different run_time/sleep_time characteristics) from the first one would ensure both have the same "loop_per_ms

[PATCH] [DEBUG] sd-sched: monitor dynamic priority levels of a running task

2007-04-12 Thread Dmitry Adamushko
tly in the "active" array and its time_slice != 0 -- the old p->prio is not changed So the task is queued taking into account the old_prio, although this slot can be prohibited by a new p->static_prio. It's only for the very first slot so one may call it err.. a feature (?)

Re: rsdl:mouse freezing while doing git-gc

2007-04-12 Thread Dmitry Adamushko
the "rules" are never broken. Most likely Con did it already but as he is off-line we can't know. Did another test with hackbench and git-gc both at same priorities and captured the top output, in here all of them seem to have same priority.. Yep, this one looks good. thanks. s

Re: rsdl:mouse freezing while doing git-gc

2007-04-11 Thread Dmitry Adamushko
surya 20 19 3684 484 484 S 0.0 0.1 0:00.00 git-gc 12357 surya 20 19 4440 684 684 S 0.0 0.1 0:00.06 git-repack 12366 surya 20 19 4440 228 228 S 0.0 0.0 0:00.00 git-repack -- Best regards, Dmitry Adamushko

Re: rsdl:mouse freezing while doing git-gc

2007-04-11 Thread Dmitry Adamushko
Do you get { pretty same / worse / much worse } feeling of interactivity with git-gc running at the default (0) and, say, 5 or 10 nice levels? TIA, -- Best regards, Dmitry Adamushko

[PATCH 2.6.21-rc6-mm1] SD sched: avoid redundant reschedule in set_user_nice()

2007-04-10 Thread Dmitry Adamushko
set_user_nice() only if the task may preempt the current one. Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]> --- --- linux-2.6.21-rc6-mm1/kernel/sched-orig.c2007-04-09 10:10:52.0 +0200 +++ linux-2.6.21-rc6-mm1/kernel/sched.c 2007-04-09 17:07:12.0 +0200 @@ -4022,14

[PATCH 2.6.21-rc6] sched: modification of TASK_PREEMPTS_CURRENT to avoid redundant reschedules (e.g. in set_user_nice)

2007-04-09 Thread Dmitry Adamushko
From: Dmitry Adamushko <[EMAIL PROTECTED]> o Fixed a mail client (shouldn't be white-space damaged now); o Andrew, a patch against SD will follow. --- o Make TASK_PREEMPTS_CURR(task, rq) return "true" only if the task's prio is higher than the current's one
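
For reference, the 2.6.21 O(1) scheduler defined the macro as shown first below; the description above is cut off in this preview, so the tightened form shown second is only an assumed reconstruction of the intent, not the applied patch:

    /* before (2.6.21): */
    /* #define TASK_PREEMPTS_CURR(p, rq)  ((p)->prio < (rq)->curr->prio) */

    /* assumed shape of the change: also require the woken task to sit in
     * the runqueue's active prio_array */
    #define TASK_PREEMPTS_CURR(p, rq) \
        ((p)->prio < (rq)->curr->prio && (p)->array == (rq)->active)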

Re: SD scheduler testing hitch

2007-04-09 Thread Dmitry Adamushko
d SD (besides a 2.6.13 mainline, but I have to re-run all the tests letting tenp run longer for the "sched_time" to be accumulated)... -- Best regards, Dmitry Adamushko

Re: SD scheduler testing hitch

2007-04-08 Thread Dmitry Adamushko
is also called from timer_ISR -> update_process_times() like scheduler_tick(). So if task's running intervals are shorter than 1/HZ, it's not always accounted --> so cpu% may be wrong for such a task... -- Best regards, Dmitry Adamushko

Re: SD scheduler testing hitch

2007-04-08 Thread Dmitry Adamushko
n" time interval related issue. Like described in (*). Or maybe we both observe similar situations but have different reasons behind them. -Mike -- Best regards, Dmitry Adamushko

Re: [PATCH] [sched] redundant reschedule when set_user_nice() boosts a prio of a task from the "expired" array

2007-04-07 Thread Dmitry Adamushko
On 07/04/07, Andrew Morton <[EMAIL PROTECTED]> wrote: On Wed, 4 Apr 2007 22:05:40 +0200 "Dmitry Adamushko" > [...] > > o Make TASK_PREEMPTS_CURR(task, rq) return "true" only if the task's > prio is higher than the current's one and the task is

[PATCH] [sched] redundant reschedule when set_user_nice() boosts a prio of a task from the "expired" array

2007-04-04 Thread Dmitry Adamushko
r_nice(), rt_mutex_setprio() and sched_setscheduler() Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]> -- --- linux-2.6.21-rc5/kernel/sched-orig.c2007-04-04 18:26:19.0 +0200 +++ linux-2.6.21-rc5/kernel/sched.c 2007-04-04 18:26:43.0 +0200 @@ -168,7 +168,7 @@

Re: [sched] redundant reschedule when set_user_nice() boosts a prio of a task from the "expired" array

2007-04-04 Thread Dmitry Adamushko
On 04/04/07, Ingo Molnar <[EMAIL PROTECTED]> wrote: * Dmitry Adamushko <[EMAIL PROTECTED]> wrote: > [...] > > The same is applicable to rt_mutex_setprio(). > > Of course, not a big deal, but it's easily avoidable, e.g. (delta < 0 > && array == r
