ng cfs's runnable_load_avg + blocked_load_avg in
>> weighted_cpuload() with my v3 patchset, aim9 shared workfile testing
>> shows the performance dropped 70% more on the NHM EP machine. :(
>>
>
> Oops, the performance is still worse than just counting runnable_load_avg.
> But drop
. I guess we didn't do that before.
>
>>
>> It should have some help on burst wake-up benchmarks like aim7.
>>
>> Original-patch-by: Preeti U Murthy
>> Signed-off-by: Alex Shi
>> ---
>> kernel/sched/fair.c | 40 +++--
Hi Alex,
On 01/16/2013 07:38 PM, Alex Shi wrote:
> On 01/08/2013 04:41 PM, Preeti U Murthy wrote:
>> Hi Mike,
>>
>> Thank you very much for such a clear and comprehensive explanation.
>> So when I put together the problem and the proposed solution pieces in t
ders blocked load as being a part of the load of cpu2.
>>>
>>> Hi Preeti,
>>>
>>> I'm not sure that we want such steady state at cores level because we
>>> take advantage of migrating wake up tasks between cores that share
>>> their cache as Ma
pefully be overcome by this flowchart.
I have tried to tackle STEP 3. STEP 3 will not prevent bouncing, but a good
STEP 2 could tell us if the bounce is worth it.
The STEP 3 patch is given below:
**START PATCH**
sched: Reduce the overhead of se
On 01/07/2013 09:18 PM, Vincent Guittot wrote:
> On 2 January 2013 05:22, Preeti U Murthy wrote:
>> Hi everyone,
>> I have been looking at how different workloads react when the per entity
>> load tracking metric is integrated into the load balancer and what are
>>
will also try to run tbench and a few other benchmarks to
find out why the results are like below. Will update you very soon on this.
Thank you
Regards
Preeti U Murthy
On 01/06/2013 10:02 PM, Mike Galbraith wrote:
> On Sat, 2013-01-05 at 09:13 +0100, Mike Galbraith wrote:
>
>> I stil
Hi Mike,
Thank you very much for your feedback. Considering your suggestions, I have
posted out a proposed solution to prevent select_idle_sibling() from becoming
a disadvantage to normal load balancing, and instead to aid it.
**This patch is *without* the enablement of the per entity load tracking
m
ur suggestions. This will greatly help take the
right steps from here on, in achieving the correct integration.
Thank you
Regards
Preeti U Murthy
___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev
sched: Use Per-Entity-Load-Tracking metric for load balancing
From: Preeti U Murthy
Currently the load balancer weighs a task based upon its priority, and this
weight consequently gets added up to the weight of the run queue that it is
on. It is this weight of the runqueue that sums up to a
fprintf(stderr, "Error joining thread %d\n", i);
			exit(1);
		}
	}
	printf("%u records/s\n",
	       (unsigned int)(((double) records_read) / diff_time));
}

int main()
{
	start_threads();
	return 0;
}
END WORKLOAD
ing.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 51 +++
1 file changed, 31 insertions(+), 20 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f8f3a29..7cd3096 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sc
Hi Vincent,
Thank you for your review.
On 11/15/2012 11:43 PM, Vincent Guittot wrote:
> Hi Preeti,
>
> On 15 November 2012 17:54, Preeti U Murthy wrote:
>> Currently the load balancer weighs a task based upon its priority, and this
>> weight consequently gets added up to
r Zijlstra and Ingo Molnar for their valuable feedback on v1
of the RFC which was the foundation for this version.
PATCH[1/2] Aims at enabling usage of Per-Entity-Load-Tracking for load balancing
PATCH[2/2] The crux of the patchset lies here.
---
Preeti U Murthy (2):
sched: Revert
load often and accurately.
The following patch does not consider CONFIG_FAIR_GROUP_SCHED and
CONFIG_SCHED_NUMA. This is done so as to evaluate this approach starting from
the simplest scenario. Earlier discussions can be found in the link below.
Link: https://lkml.org/lkml/2012/10/25/162
Signed-off-by: Pre
Now that we need the per-entity load tracking for load balancing,
trivially revert the patch which introduced the FAIR_GROUP_SCHED
dependence for load tracking.
Signed-off-by: Preeti U Murthy
---
include/linux/sched.h |7 +--
kernel/sched/core.c |7 +--
kernel/sched/fair.c
can opine about this issue if possible and needed.
Reviewed-by: Preeti U Murthy
Regards
Preeti U Murthy
- per-entity-load-tracking-with-core-sched-v1: 15 by Preeti
>
> My understanding was that this patchset by Preeti wasn't well received
> by the maintainers and is being reworked. Do we have an ETA from
> Preeti for the updates? I'm a little concerned that since
rmance of
> those.
>
> Having two parallel load metrics is really not something that we
> should tolerate for too long.
>
> Thanks,
>
> Ingo
>
Right, Ingo. I will incorporate this approach and post it out very soon.
Thank you
Regards
Preeti
On 10/26/2012 05:59 PM, Peter Zijlstra wrote:
> On Thu, 2012-10-25 at 23:42 +0530, Preeti U Murthy wrote:
> firstly, cfs_rq is the wrong place for a per-cpu load measure, secondly
> why add another load field instead of fixing the one we have?
Hmm.. rq->load.weight is the place.
>
ffected although there is less load on GP1. If yes, it
is a better *busy* gp.
*End Result: Better candidates for lb*
Rest of the patches: now that we have our busy sched group, let us load
balance with the aid of the new metric.
*End Result: Hopefully a more sensible movement of loads*
This is how I build the picture.
Regards
Preeti
fprintf(stderr, "Error joining thread %d\n", i);
			exit(1);
		}
	}
	printf("%u records/s\n",
	       (unsigned int)(((double) records_read) / diff_time));
}

int main()
{
	start_threads();
	return 0;
}
Regards
Preeti U Murthy
eshold. The call should be taken if the tasks can afford to be throttled.
This is why an additional metric has been included, which can determine how
long we can tolerate tasks not being moved even if the load is low.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 16 ++
Additional parameters introduced to perform this function which are
calculated using PJT's metrics and its helpers.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 34 +++---
1 file changed, 15 insertions(+), 19 deletions(-)
diff --git a/kernel/
Additional parameters introduced to perform this function which are
calculated using PJT's metrics and its helpers.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/f
Modify certain decisions in load_balance to use the imbalance
amount as calculated by PJT's metric.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bd
Additional parameters for deciding a sched group's imbalance status
which are calculated using the per entity load tracking are used.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 22 --
1 file changed, 20 insertions(+), 2 deletions(-)
diff --git a/kernel/
Additional parameters introduced to perform this function which are
calculated using PJT's metrics and its helpers.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 14 ++
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/
rent sched group is capable of pulling tasks upon
itself.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 33 +
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aafa3c1..67a916d 100644
--- a/ke
e a balance between the loads of the
group and the number of tasks running on the group to decide the
busiest group in the sched_domain.
This means we will need to use the PJT's metrics but with an
additional constraint.
Signed-off-by: Preeti U Murthy
---
kernel/sch
Additional parameters which aid in taking the decisions in
fix_small_imbalance, calculated using PJT's metric, are used.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 54 +++
1 file changed, 33 insertions(+), 21 dele
Additional parameters which decide the amount of imbalance in the sched domain
calculated using PJT's metric are used.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 36 +++-
1 file changed, 23 insertions(+), 13 deletions(-)
diff --git a/kernel/
Make appropriate modifications in check_asym_packing to reflect PJT's
metric.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 68a6b1d..3b18f5f 100644
--- a/kernel/sched/fair.c
Additional parameters which decide the busiest cpu in the chosen sched group,
calculated using PJT's metric, are used.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/
Make decisions based on PJT's metrics and the dependent metrics
about which tasks to move to reduce the imbalance.
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 14 +-
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/
                                 With_Patchset   Without_patchset
-----------------------------------------------------------------
Average_number_of_migrations                 0                 46
Average_number_of_records/s           9,71,114           9,45,158
With more memory intensive workloads, a higher difference in the number of
migrations is seen without any
r very few spare cycles in the last
> schedule period would be a good candidate for load-balancing. Latency
> would be affected as mentioned earlier.
>
Exactly. idle_time == spare_cpu_cycles == less cpu_utilization. I hope I
am not wrong in drawing this equivalence. If that's the case, then the same
explanation as above holds good here too.
> Morten
Thank you
Regards
Preeti
oups are compared between them. If we were to use PJT's metric, a
> higher load does not necessarily mean a greater number of tasks. This
> patch addresses this issue.
>
> 3.The next step towards integration should be in using the PJT's metric for
> comparison bet