r very few spare cycles in the last
> schedule period would be a good candidate for load-balancing. Latency
> would be affected as mentioned earlier.
>
Exactly. idle_time == spare_cpu_cycles == less cpu_utilization. I hope I
am not wrong in drawing this equivalence. If that's the case, then the same
explanation as above holds good here too.
the busy and capable cpus within a small range try to
handle the existing load.
Regards
Preeti
>busiest due to the following loads
> that it calculates.
>
> SCHED_GRP1:2048
> SCHED_GRP2:4096
>
> Load calculator would probably qualify SCHED_GRP1 as the candidate
> for sd->busiest due to the following loads that it calculates
>
> SCHED_GRP1:3200
> SCHED_GRP2:1156
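To make the contrast concrete, a small illustrative sketch (not kernel code;
the names runnable_avg_sum/runnable_avg_period and the LOAD_AVG_MAX bound
follow the per-entity load tracking patches, the counter values are made up):

#include <stdio.h>

#define NICE_0_LOAD	1024	/* weight of a nice-0 task */
#define LOAD_AVG_MAX	47742	/* upper bound on runnable_avg_sum */

/* PJT's metric scales a task's prio weight by the fraction of the
 * tracked window it was actually runnable */
static unsigned long pjt_contrib(unsigned long weight,
				 unsigned long avg_sum,
				 unsigned long avg_period)
{
	return weight * avg_sum / avg_period;
}

int main(void)
{
	/* two nice-0 tasks: the weight-based group load is a flat 2048 */
	printf("weight based: %lu\n", 2 * (unsigned long)NICE_0_LOAD);

	/* if the tasks were runnable for only ~30% of the window, PJT's
	 * metric reports a far smaller load for the same group */
	printf("PJT based:    %lu\n",
	       2 * pjt_contrib(NICE_0_LOAD, 14336, LOAD_AVG_MAX));
	return 0;
}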
loaded
sched groups does not mean too few tasks.
Thank you
Regards
Preeti
eview and build
>> testing this went through (the above should produce warnings, since they
>> are non-void-returning functions with no return statements).
>
> Thanks for reporting this, I tried to fix a build issue in the original patch
I apologise for not having taken care
stderr, "Error joining thread %d\n", i);
exit(1);
}
}
printf("%u records/s\n",
(unsigned int) (((double) records_read)/diff_time));
}
int main()
{
start_threads();
return 0;
}
Regards
Preeti U Murthy
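The workload fragment above is cut off before start_threads(); a minimal,
self-contained reconstruction, assuming a hypothetical worker loop and a
hypothetical thread count (only records_read, diff_time and the printing
come from the fragment itself):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

#define NUM_THREADS	8		/* hypothetical */
#define RECORDS		(1UL << 20)	/* hypothetical records per thread */

static unsigned long records_read;

static void *worker(void *arg)
{
	/* stand-in for the real record-reading loop */
	__sync_fetch_and_add(&records_read, RECORDS);
	return NULL;
}

static void start_threads(void)
{
	pthread_t threads[NUM_THREADS];
	struct timeval start, end;
	double diff_time;
	int i;

	gettimeofday(&start, NULL);
	for (i = 0; i < NUM_THREADS; i++) {
		if (pthread_create(&threads[i], NULL, worker, NULL)) {
			fprintf(stderr, "Error creating thread %d\n", i);
			exit(1);
		}
	}
	for (i = 0; i < NUM_THREADS; i++) {
		if (pthread_join(threads[i], NULL)) {
			fprintf(stderr, "Error joining thread %d\n", i);
			exit(1);
		}
	}
	gettimeofday(&end, NULL);
	diff_time = (end.tv_sec - start.tv_sec) +
		    (end.tv_usec - start.tv_usec) / 1e6;
	printf("%u records/s\n",
	       (unsigned int)(((double) records_read) / diff_time));
}

int main()
{
	start_threads();
	return 0;
}

(build with gcc -pthread)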
differently if it is >1.
Thank you
Regards
Preeti U Murthy
On Thu, Aug 23, 2012 at 7:44 PM, wrote:
> From: Ben Segall
>
> Since runqueues do not have a corresponding sched_entity we instead embed a
> sched_avg structure directly.
>
> Signed-off-by: Ben Segall
>
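For context, the structure being embedded looks roughly like this in that
series (field names as in the 2012 per-entity load tracking patches,
simplified here):

struct sched_avg {
	/* decayed time the entity was runnable, and the total window
	 * over which that was accumulated */
	u32 runnable_avg_sum, runnable_avg_period;
	u64 last_runnable_update;
	/* cached contribution to the parent cfs_rq's load */
	unsigned long load_avg_contrib;
};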
can opine about this issue if possible and needed.
Reviewed-by: Preeti U Murthy
Regards
Preeti U Murthy
ur patchset
to evacuate nearly idle towards nearly busy groups, but by using PJT's metric
to make the decision.
What do you think?
Regards
Preeti U Murthy
On Tue, Nov 6, 2012 at 6:39 PM, Alex Shi wrote:
> This patch enabled the power aware consideration in load balance.
>
> As mentioned
Hi Alex
I apologise for the delay in replying.
On Wed, Nov 7, 2012 at 6:57 PM, Alex Shi wrote:
> On 11/07/2012 12:37 PM, Preeti Murthy wrote:
>> Hi Alex,
>>
>> What I am concerned about in this patchset as Peter also
>> mentioned in the previous discussion of your ap
e, we plan to
move more security-related kernel assets to this page to enhance
protection.
Signed-off-by: Preeti Nagar
---
The RFC patch reviewed is available at:
https://lore.kernel.org/linux-security-module/1610099389-28329-1-git-send-email-pna...@codeaurora.org/
---
include/asm-generic/vmlinux.lds.h
_ARGS(call_site, ptr)
> + TP_ARGS(call_site, ptr),
> +
> + TP_CONDITION(cpu_online(smp_processor_id()))
> );
>
> TRACE_EVENT(mm_page_free,
Reviewed-by: Preeti U Murthy
Regards
Preeti U Murthy
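For readers following the diff: TRACE_EVENT_CONDITION() is the tracing macro
that takes an extra TP_CONDITION() predicate, evaluated before the event
fires. A sketch of its shape for this event (abbreviated, not the full
mm_page_free definition):

TRACE_EVENT_CONDITION(mm_page_free,

	TP_PROTO(struct page *page, unsigned int order),

	TP_ARGS(page, order),

	/* suppress the tracepoint on offline cpus, where RCU must not
	 * be used */
	TP_CONDITION(cpu_online(smp_processor_id())),

	TP_STRUCT__entry(
		__field(unsigned long, pfn)
		__field(unsigned int, order)
	),

	TP_fast_assign(
		__entry->pfn = page_to_pfn(page);
		__entry->order = order;
	),

	TP_printk("page=%p pfn=%lu order=%d",
		  pfn_to_page(__entry->pfn), __entry->pfn, __entry->order)
);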
VENT(mm_page_free,
> +TRACE_EVENT_CONDITION(mm_page_free,
>
> TP_PROTO(struct page *page, unsigned int order),
>
> TP_ARGS(page, order),
>
> + TP_CONDITION(cpu_online(smp_processor_id())),
> +
> TP_STRUCT__entry(
> __field(uns
On Wed, Apr 29, 2015 at 2:36 PM, Preeti Murthy wrote:
> Ccing Paul,
>
> On Tue, Apr 28, 2015 at 9:21 PM, Shreyas B. Prabhu
> wrote:
>> Since tracepoints use RCU for protection, they must not be called on
>> offline cpus. trace_mm_page_free can be called on an offline
__field(int, migratetype)
> + ),
> +
> + TP_fast_assign(
> + __entry->pfn = page ? page_to_pfn(page) : -1UL;
> + __entry->order = order;
> + __entry->migratetype = m
>> Let's do this. Push the current changes as is, and when I get around to
>> adding a DEFINE_EVENT_PRINT_CONDITION(), we can modify that code to use
>> it.
>>
> Okay, sure.
Looks good then.
Reviewed-by: Preeti U Murthy
>
> Thanks,
> Shreyas
>
s and comments on the idea and the changes
in the patch.
Signed-off-by: Preeti Nagar
---
include/asm-generic/vmlinux.lds.h | 10 ++
include/linux/init.h | 4
security/Kconfig | 10 ++
security/selinux/hooks.c | 4
4 files change
for every rq in the hierarchy. But you would
never dequeue a sched_entity if it has more than 1 task in it. The
granularity of enqueue and dequeue of sched_entities is one task
at a time. You can extend this to enqueue and dequeue of a sched_entity
only if it has just one task in its queue.
Regards
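A sketch of that rule, in the shape of the fair-class dequeue path
(simplified from kernel/sched/fair.c): the walk up the hierarchy stops at
the first entity whose group still holds other runnable tasks, so a
sched_entity is only dequeued once its queue empties out.

	for_each_sched_entity(se) {
		cfs_rq = cfs_rq_of(se);
		dequeue_entity(cfs_rq, se, flags);

		/* the group entity still represents runnable tasks: stop */
		if (cfs_rq->load.weight)
			break;
	}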
On Sat, Mar 15, 2014 at 3:45 AM, Kirill Tkhai wrote:
> This reverts commit 4c6c4e38c4e9 [sched/core: Fix endless loop in
> pick_next_task()], which is not necessary after [sched/rt: Substract number
> of tasks of throttled queues from rq->nr_running]
Reviewed-by: Preeti U Murthy
&g
below restored check will be relevant.
Without the below check the difference in the loads of the wake affine
CPU and the
prev_cpu can get messed up.
Thanks
Regards
Preeti U Murthy
> task_numa_compare since commit fb13c7ee (sched/numa: Use a system-wide
> search to find swap/migrati
the hrtimer list, since it's part and parcel of the timer wheel
events.
Regards
Preeti U Murthy
>
> When hres_active isn't set, we run hrtimer handlers from timer
> handlers, which means that timers would be sufficient in finding
> the next event and we don't need to check fo
Hi Kirill,
Which tree is this patch based on? __migrate_task() does a
double_rq_lock/unlock() today in mainline, doesn't it? I don't
however see that in your patch.
Regards
Preeti U Murthy
On Fri, Sep 12, 2014 at 4:33 PM, Kirill Tkhai wrote:
>
> If a task is queued but not runn
Shouldn't it be:

	if (time_after(jiffies, this_rq->next_balance) ||
	    time_after(this_rq->next_balance, next_balance))
		this_rq->next_balance = next_balance;
Besides this:
Reviewed-by: Preeti U Murthy
Regards
Preeti U Murthy
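For reference, time_after() is the wraparound-safe jiffies comparison from
include/linux/jiffies.h; stripped of its typechecks it is roughly:

	/* true iff time a is after time b, even across a jiffies wrap,
	 * by doing the comparison on the signed difference */
	#define time_after(a, b)	((long)((b) - (a)) < 0)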
On Sat, Apr 26, 2014 at 1:24 AM, Jason L
Hi Nicolas,
You might want to change the subject.
s/sched: remove remaining power to the CPU/
sched: remove remaining usage of cpu *power*/
The subject should explicitly indicate in some way
that this is a change to the terminology.
Regards
Preeti U Murthy
On Thu, May 15, 2014 at 2:27 AM
sk of the destination rq
during migration be better? This would be the closest we could
come to estimating the amount of time the task has run on this new
cpu while deciding task_hot or not, no?
Regards
Preeti U Murthy
>
> Thanks!
CAST_ENTER again, the
pending mask is cleared
and hence should not trigger a WARN_ON().
Thanks
Regards
Preeti U Murthy
On Sun, Feb 16, 2014 at 12:51 AM, Thomas Gleixner wrote:
> Linus,
>
> please pull the latest timers-urgent-for-linus git tree from:
>
> git://git.kernel.org/
on a different NUMA node.
Looks to me that the problem lies here and not in the wake_affine()
and select_idle_siblings().
Regards
Preeti U Murthy
>
> --
> All rights reversed
for my understanding.
Thanks!
Regards
Preeti U Murthy
On 3/24/14, Hidetoshi Seto wrote:
> + * Known bug: Return value is not monotonic in case if @last_update_time
> + * is NULL and therefore update is not performed. Because it includes
> + * cputime which is not determined idle or i
given duration will be of use.
Having said that, a tool that gives the running power efficiency
image of my system would be more useful in the long run.
Regards
Preeti U Murthy
On Tue, Mar 25, 2014 at 1:35 AM, Zoran Markovic
wrote:
> Conclusions from Energy Aware Scheduling sessions at the lat
tick_sched_timer
dies along with the hotplugged out CPU since there is no need for it any more.
Regards
Preeti U Murthy
d update it to reflect the right cpu load average?
Regards
Preeti U Murthy
t domain which does not seem like
the right thing to do.
Having said the above, the fix that Viresh has proposed, along with the
nohz_full condition that Frederic added, looks to solve this problem.
But just a thought on if there is scope to improve this part of the
cpufreq code.
What do you all thin
well be *non-deferrable*
timers in the list"
s/non-deferrable/deferrable.
Thanks
Regards
Preeti U Murthy
On Thu, Jan 30, 2014 at 5:09 AM, Paul E. McKenney
wrote:
> Hello, Ingo,
>
> This pull request contains latency bandaids^Woptimizations to the
> timer-wheel code that are u
e idle states in the higher indexed
states, although it should have halted if the idle states were ordered
according to their target residency. The same holds for exit_latency.
Hence I think this patch would make sense only with additional information
like exit_latency or target_residency is
Hi Yijing,
For the powerpc part:
Acked-by: Preeti U Murthy
On Mon, Feb 10, 2014 at 7:28 AM, Yijing Wang wrote:
> Currently, clocksource_register() and __clocksource_register_scale()
> functions always return 0, it's pointless, make functions void.
> And remove the dead code
_mask right?
Any other case would trigger load balancing on the same cpu, but
we are preempt-disabled and have interrupts disabled at this point.
Thanks
Regards
Preeti U Murthy
On Fri, Feb 7, 2014 at 4:40 AM, Daniel Lezcano
wrote:
> The scheduler main function 'schedule()' checks if there
ance() as idle
> time.
Should not this be "such that we *do not* measure the duration of idle_balance()
as idle time?"
Thanks
Regards
Preeti U Murthy
in addition to your check on the
latency_req == 0.
If not, you can fall through to the regular path of calling into the
cpuidle driver.
The scheduler can query the cpuidle_driver structure anyway.
What do you think?
Regards
Preeti U Murthy
You might want to include this change in the previous patch itself.
> + * @next_timer_event: the duration until the timer expires
> *
> * Returns the index of the idle state.
> */
Regards
Preeti U Murthy
- data->last_state_idx = index;
> - if (index >= 0)
> - data->needs_update = 1;
> + data->needs_update = 1;
Why is the last_state_idx not getting updated?
Regards
Preeti U Murthy
Hi Thomas,
On Tue, Dec 16, 2014 at 6:19 PM, Thomas Gleixner wrote:
> On Tue, 16 Dec 2014, Preeti U Murthy wrote:
>> As far as I can see, the primary purpose of tick_nohz_irq_enter()/exit()
>> paths was to take care of *tick stopped* cases.
>>
>> Before handling inter
The differences between the above two scenarios include:
1. Reduced latency for Task1 in CASE2, which is the right task to be moved
in the above scenario.
2. Even though in the former case CPU2 is relieved of one task, it's of no
use if Task3 is going to sleep most of the time. This might result in
Hi,
On 11/27/2012 11:44 AM, Alex Shi wrote:
> On 11/27/2012 11:08 AM, Preeti U Murthy wrote:
>> Hi everyone,
>>
>> On 11/27/2012 12:33 AM, Benjamin Segall wrote:
>>> So, I've been trying out using the runnable averages for load balance in
>>> a few ways,
er even though they degrade with time
and sgs->utils accounts for them. Therefore,
for core1 and core2, sgs->utils will be slightly above 100 and the
above condition will fail, disqualifying them as candidates for
group_leader, since threshold_util will be 200.
This phenomenon is seen for bal
Hi Alex,
On 03/21/2013 01:13 PM, Alex Shi wrote:
> On 03/20/2013 12:57 PM, Preeti U Murthy wrote:
>> Neither core will be able to pull the task from the other to consolidate
>> the load because the rq->util of t2 and t4, on which no process is
>> running, continue to show
On 03/21/2013 02:57 PM, Alex Shi wrote:
> On 03/21/2013 04:41 PM, Preeti U Murthy wrote:
>>>>
>> Yes, I did find this behaviour on a 2 socket, 8 core machine very
>> consistently.
>>
>> rq->util cannot go to 0 after it has begun accumulating load, right?
ription.
Ok, take the example of a runqueue with 2 task groups, each with 10
tasks, same as your previous example. Can you explain how your patch
ensures that all 20 tasks get to run at least once in a sched_period?
Regards
Preeti U Murthy
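For reference, the function in question; the period stretches only with the
task count passed in, which is why it matters whether that count sees all
20 tasks or just the 2 group entities (shape as in kernel/sched/fair.c of
that time):

static u64 __sched_period(unsigned long nr_running)
{
	u64 period = sysctl_sched_latency;
	unsigned long nr_latency = sched_nr_latency;

	/* stretch the period so each task can still get one slice */
	if (unlikely(nr_running > nr_latency)) {
		period = sysctl_sched_min_granularity;
		period *= nr_running;
	}

	return period;
}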
Hi Joonsoo,
On 04/04/2013 06:12 AM, Joonsoo Kim wrote:
> Hello, Preeti.
>
> So, how about extending a sched_period with rq->nr_running, instead of
> cfs_rq->nr_running? It is my quick thought and I think that we can ensure
> to run atleast once in this extending sched_per
up task. The first time
the forked task gets a chance to update the load itself, it needs to
reflect full utilization. In __update_entity_runnable_avg() both
runnable_avg_period and runnable_avg_sum get equally incremented for a
forked task, since it is runnable. Hence where is the chance for the l
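The accounting in question, in a simplified sketch (the real
__update_entity_runnable_avg() also decays both counters in ~1ms segments):
since a freshly forked task is runnable from the start, sum and period grow
in lockstep and their ratio already reads as full utilization.

	static void update_runnable_avg(struct sched_avg *sa, u64 now,
					int runnable)
	{
		u64 delta = now - sa->last_runnable_update;

		sa->last_runnable_update = now;
		sa->runnable_avg_period += delta;	/* always advances */
		if (runnable)
			sa->runnable_avg_sum += delta;	/* only while runnable */
	}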
s and migrated wake ups have load updates to
do. Forked tasks just got created; they have no load to "update" but only
to "create". This I feel is rightly done in sched_fork by this patch.
So ideally I don't think we should have any comment here. It does not
sound relevant.
>*
he utilisation is
accumulated faster by making the update window smaller.
2. Balance on nr_running only if you detect burst wakeups.
Alex, you had released a patch earlier which could detect this, right?
Instead of balancing on nr_running all the time, why not balance on it
only if burst wakeups are detected? By doing so you ensure that
nr_running as a metric for load balancing is used when it is right to do
so, and the reason to use it also gets well documented.
Regards
Preeti U Murthy
ed tasks.
>> enqueue_task_fair->update_entity_load_avg() during the second
>> iteration.But __update_entity_load_avg() in update_entity_load_avg()
>>
>
> When it goes through 'enqueue_task_fair->update_entity_load_avg()' during
> the second iteration, the se is changed.
> That
		se, 0);
	}
-	/*
-	 * set the initial load avg of new task same as its load
-	 * in order to avoid brust fork make few cpu too heavier
-	 */
-	if (flags & ENQUEUE_NEWTASK)
-		se->avg.load_avg_contrib = se->load.weight;
	cfs_rq->
heduler and the usage of per entity load tracking can
be done without considering the real time tasks?
Regards
Preeti U Murthy
with what is the right
metric to use here.
Refer to this discussion: https://lkml.org/lkml/2012/10/29/448
Regards
Preeti U Murthy
                                With_Patchset    Without_Patchset
------------------------------------------------------------------
Average_number_of_migrations          0                 46
Average_number_of_records/s       9,71,114           9,45,158
With more memory intensive workloads, a higher difference in the number of
migrations is seen without any
Additional parameters for deciding a sched group's imbalance status,
calculated using per entity load tracking, are used.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 22 --
1 file changed, 20 insertions(+), 2 deletions(-)
diff --git a/kernel/
Additional parameters, calculated using PJT's metric, decide the busiest cpu
in the chosen sched group.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/
Modify certain decisions in load_balance to use the imbalance
amount as calculated by PJT's metric.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bd
Additional parameters, calculated using PJT's metric and its helpers, are
introduced to perform this function.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 14 ++
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/
e a balance between the loads of the
group and the number of tasks running on the group to decide the
busiest group in the sched_domain.
This means we will need to use PJT's metric, but with an
additional constraint.
Signed-off-by: Preeti U Murthy
---
kernel/sch
Additional parameters, calculated using PJT's metric, aid the decisions
taken in fix_small_imbalance.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 54 +++
1 file changed, 33 insertions(+), 21 dele
Make appropriate modifications in check_asym_packing to reflect PJT's
metric.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 68a6b1d..3b18f5f 100644
--- a/kernel/sched/fair.c
Additional parameters, calculated using PJT's metric and its helpers, are
introduced to perform this function.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/f
Additional parameters, calculated using PJT's metric and its helpers, are
introduced to perform this function.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 34 +++---
1 file changed, 15 insertions(+), 19 deletions(-)
diff --git a/kernel/
Make decisions, based on PJT's metric and its dependent metrics,
about which tasks to move to reduce the imbalance.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 14 +-
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/
rent sched group is capable of pulling tasks upon
itself.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 33 +
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aafa3c1..67a916d 100644
--- a/ke
Additional parameters, calculated using PJT's metric, decide the amount of
imbalance in the sched domain.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 36 +++-
1 file changed, 23 insertions(+), 13 deletions(-)
diff --git a/kernel/
eshold. The call should be taken if the tasks can afford to be throttled.
This is why an additional metric has been included, which can determine how
long we can tolerate tasks not being moved even if the load is low.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 16 ++
ffected although there is less load on GP1. If yes, it is a better *busy*
gp.
*End Result: Better candidates for lb*
Rest of the patches: now that we have our busy sched group, let us load
balance with the aid of the new metric.
*End Result: Hopefully a more sensible movement of loads*
This is how I build the picture.
Regards
Preeti
On 10/26/2012 05:59 PM, Peter Zijlstra wrote:
> On Thu, 2012-10-25 at 23:42 +0530, Preeti U Murthy wrote:
> firstly, cfs_rq is the wrong place for a per-cpu load measure, secondly
> why add another load field instead of fixing the one we have?
Hmm.. rq->load.weight is the place.
>
rmance of
> those.
>
> Having two parallel load metrics is really not something that we
> should tolerate for too long.
>
> Thanks,
>
> Ingo
>
Right Ingo. I will incorporate this approach and post out very soon.
Thank you
Regards
Preeti
Secondly, I think we should spend more time on when to make a call to
the frequency driver in your patchset regarding the change in the
frequency of the CPU, the scheduler wishes to request. The reason being,
the whole effort of integrating the knowledge of cpu frequency
statistics into the sched
Hi Soren,
On 09/13/2013 09:53 PM, Sören Brinkmann wrote:
> Hi Preeti,
> Thanks for the explanation but now I'm a little confused. That's a lot of
> details and I'm lacking the in depth knowledge to fully understand
> everything.
>
> Is it correct to say, that
vatsa S. Bhat and
Vaidyanathan Srinivasan for all their comments and suggestions so far.
---
Preeti U Murthy (4):
cpuidle/ppc: Split timer_interrupt() into timer handling and interrupt
handling routines
cpuidle/ppc: Add basic infrastructure to support the broadcast framework
on ppc
c
available).
So, implement the functionality of PPC_MSG_CALL_FUNC using
PPC_MSG_CALL_FUNC_SINGLE itself and release its IPI message slot, so that it
can be used for something else in the future, if desired.
Signed-off-by: Srivatsa S. Bhat
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include
routines performed during regular
interrupt handling and __timer_interrupt(), which takes care of running local
timers and collecting time related stats. Now on a broadcast ipi, call
__timer_interrupt().
Signed-off-by: Preeti U Murthy
---
arch/powerpc/kernel/time.c | 69
sa S. Bhat
[Changelog modified by pre...@linux.vnet.ibm.com]
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include/asm/smp.h |3 ++-
arch/powerpc/include/asm/time.h |1 +
arch/powerpc/kernel/smp.c | 19 +++
arch/powerpc/kernel/time.c
being woken up from the broadcast ipi, set the decrementers_next_tb
to now before calling __timer_interrupt().
Signed-off-by: Preeti U Murthy
---
arch/powerpc/Kconfig|1 +
arch/powerpc/include/asm/time.h |1 +
arch/powerpc/kernel/time.c | 69 +++
above cycle repeats.
Protect the region of nomination, de-nomination and the check for existence
of the broadcast
cpu with a lock to ensure synchronization between them.
[1] tick_handle_oneshot_broadcast() or tick_handle_periodic_broadcast().
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include/asm/t
was about to fire on it. Therefore the newly nominated broadcast cpu
should set the broadcast hrtimer on itself to expire immediately so as to not
miss wakeups under such scenarios.
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include/asm/time.h |1 +
arch/powerpc/kernel/time.c
Hi Soren,
On 09/13/2013 03:50 PM, Preeti Murthy wrote:
> Hi,
>
> So the patch that Daniel points out http://lwn.net/Articles/566270/ ,
> enables broadcast functionality
> without using an external global clock device. It uses one of the per cpu
> clock devices to en
sched: Use Per-Entity-Load-Tracking metric for load balancing
From: Preeti U Murthy
Currently the load balancer weighs a task based upon its priority, and this
weight consequently gets added up to the weight of the run queue that it is
on. It is this weight of the runqueue that sums up to a
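The runqueue weight being referred to is, in the pre-PELT balancer, just the
sum of the queued tasks' priority weights, along the lines of:

	/* the old per-cpu load: a pure prio-weight sum, blind to how
	 * much cpu time the tasks actually consume */
	static unsigned long weighted_cpuload(const int cpu)
	{
		return cpu_rq(cpu)->load.weight;
	}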
printf("%u records/s\n",
(unsigned int) (((double) records_read)/diff_time));
}
int main()
{
start_threads();
return 0;
}
END WORKLOAD
Regards
Preeti U Murthy
good idea.
It is true that we need to bring in nr_running somewhere. Let me now go
through your suggestions on where to include nr_running and get back on
this. I had planned on including nr_running while selecting the busy
group in update_sd_lb_stats, but select_task_rq_fair is yet another place
to do t
}
>
> /* Now try balancing at a lower domain level of new_cpu */
> cpu = new_cpu;
>
Regards
Preeti U Murthy
to, at that
level of sched domain, which is fair enough.
So now the question is, under such a circumstance, which is the idlest
group so far? It is the group containing this_cpu, i.e. this_group. After
this, sd->child is chosen, which is nothing but this_group (the sd hierarchy
moves towards the cpu it belongs
Hi Alex,
On 12/11/2012 10:59 AM, Alex Shi wrote:
> On 12/11/2012 01:08 PM, Preeti U Murthy wrote:
>> Hi Alex,
>>
>> On 12/10/2012 01:52 PM, Alex Shi wrote:
>>> There is 4 situations in the function:
>>> 1, no task allowed group;
>>> so min_load
On 12/11/2012 10:58 AM, Alex Shi wrote:
> On 12/11/2012 12:23 PM, Preeti U Murthy wrote:
>> Hi Alex,
>>
>> On 12/10/2012 01:52 PM, Alex Shi wrote:
>>> It is impossible to miss a task allowed cpu in a eligible group.
>>
>> The one thing I am concerned
>cfs.runnable_load_avg / nr_running;
rq->cfs.runnable_load_avg is u64 type. You will need to typecast it here
also, right? How does this division work? Because the return type is
unsigned long.
>
> return 0;
> }
>
Regards
Preeti U Murthy
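On the typecast question: in-kernel u64 division generally has to go through
div_u64() from linux/math64.h anyway, since a plain '/' on a u64 dividend
pulls in libgcc helpers on 32-bit builds. A sketch of the shape being
reviewed, with a hypothetical helper name:

	static unsigned long cpu_avg_task_load(struct rq *rq)
	{
		unsigned long nr_running = rq->nr_running;

		if (!nr_running)
			return 0;

		/* div_u64() handles the u64 dividend on 32-bit too, and
		 * the result then fits the unsigned long return type */
		return (unsigned long)div_u64(rq->cfs.runnable_load_avg,
					      nr_running);
	}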
= task_h_load(p);
> + load = task_h_load_avg(p);
>
> if (sched_feat(LB_MIN) && load < 16 &&
> !env->sd->nr_balance_failed)
> goto next;
>
Regards
Preeti U Murthy
On 12/11/2012 05:23 PM, Alex Shi wrote:
> On 12/11/2012 02:30 PM, Preeti U Murthy wrote:
>> On 12/11/2012 10:58 AM, Alex Shi wrote:
>>> On 12/11/2012 12:23 PM, Preeti U Murthy wrote:
>>>> Hi Alex,
>>>>
>>>> On 12/10/2012 01:52 PM, Alex Shi wr
On 10/29/2012 11:08 PM, Benjamin Segall wrote:
> Preeti Murthy writes:
>
>> Hi Paul, Ben,
>>
>> A few queries regarding this patch:
>>
>> 1.What exactly is the significance of introducing sched_avg structure
>> for a runqueue? If I have
>>un
ur suggestions. This will greatly help in taking the
right steps from here on, in achieving the correct integration.
Thank you
Regards
Preeti U Murthy
Hi Mike,
Thank you very much for your feedback. Considering your suggestions, I have
posted out a
proposed solution to prevent select_idle_sibling() from becoming a
disadvantage to normal
load balancing, and rather have it aid load balancing.
**This patch is *without* the enablement of the per entity load tracking
m
r Zijlstra and Ingo Molnar for their valuable feedback on v1
of the RFC which was the foundation for this version.
PATCH[1/2] Aims at enabling usage of Per-Entity-Load-Tracking for load balancing
PATCH[2/2] The crux of the patchset lies here.
---
Preeti U Murthy (2):
sched: Revert
Now that we need the per-entity load tracking for load balancing,
trivially revert the patch which introduced the FAIR_GROUP_SCHED
dependence for load tracking.
Signed-off-by: Preeti U Murthy
---
include/linux/sched.h |7 +--
kernel/sched/core.c |7 +--
kernel/sched/fair.c