On Fri, 2013-02-22 at 15:30 +0100, Mike Galbraith wrote:
> On Fri, 2013-02-22 at 14:06 +0100, Ingo Molnar wrote:
> > I think it might be better to measure the scheduling rate all
> > the time, and save the _shortest_ cross-cpu-wakeup and
> > same-cpu-wakeup latencies (since bootup) as a reference
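A minimal sketch of that idea, purely illustrative and not from any posted
patch (the names are invented): the wakeup path keeps two running minimums
that later serve as the reference values.

	static u64 min_same_cpu_wakeup_ns = ~0ULL;
	static u64 min_cross_cpu_wakeup_ns = ~0ULL;

	/* called from the wakeup path with the measured latency */
	static void record_wakeup_latency(int waker_cpu, int wakee_cpu, u64 lat_ns)
	{
		u64 *min = (waker_cpu == wakee_cpu) ? &min_same_cpu_wakeup_ns
						    : &min_cross_cpu_wakeup_ns;

		if (lat_ns < *min)
			*min = lat_ns;	/* shortest since boot = the reference */
	}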
On Fri, 2013-02-22 at 14:06 +0100, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > > > No, that's too high, you lose too much of the pretty
> > > > face. [...]
> > >
> > > Then a logical proportion of it - such as half of it?
> >
> > Hm. Better would maybe be a quick boot time benchmark, and
> > use some multiple of your cross core pipe ping-pong time?
* Mike Galbraith wrote:
> > > No, that's too high, you lose too much of the pretty
> > > face. [...]
> >
> > Then a logical proportion of it - such as half of it?
>
> Hm. Better would maybe be a quick boot time benchmark, and
> use some multiple of your cross core pipe ping-pong time?
>
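A user-space sketch of the suggested cross-core pipe ping-pong measurement,
in the spirit of pipe-test and perf bench sched pipe; not code from this
thread, and the CPU numbers, loop count and missing error handling are all
illustrative:

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>

	#define LOOPS 100000

	static void pin_to_cpu(int cpu)
	{
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(cpu, &set);
		sched_setaffinity(0, sizeof(set), &set);
	}

	int main(void)
	{
		int ab[2], ba[2];	/* parent->child and child->parent pipes */
		char c = 0;
		struct timespec t0, t1;
		int i;

		pipe(ab);
		pipe(ba);

		if (fork() == 0) {	/* child: echo every byte straight back */
			pin_to_cpu(1);	/* the "cross core" side */
			for (i = 0; i < LOOPS; i++) {
				read(ab[0], &c, 1);
				write(ba[1], &c, 1);
			}
			return 0;
		}

		pin_to_cpu(0);
		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (i = 0; i < LOOPS; i++) {
			write(ab[1], &c, 1);
			read(ba[0], &c, 1);
		}
		clock_gettime(CLOCK_MONOTONIC, &t1);

		printf("%.0f ns per round trip\n",
		       ((t1.tv_sec - t0.tv_sec) * 1e9 +
			(t1.tv_nsec - t0.tv_nsec)) / LOOPS);
		return 0;
	}

Pinning both tasks to the same CPU instead gives the same-cpu-wakeup figure,
so the ratio between the two runs is the multiple being talked about.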
On Fri, 2013-02-22 at 13:11 +0100, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > On Fri, 2013-02-22 at 10:54 +0100, Ingo Molnar wrote:
> > > * Mike Galbraith wrote:
> > >
> > > > On Fri, 2013-02-22 at 09:36 +0100, Peter Zijlstra wrote:
> > > > > On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
* Mike Galbraith wrote:
> On Fri, 2013-02-22 at 10:54 +0100, Ingo Molnar wrote:
> > * Mike Galbraith wrote:
> >
> > > On Fri, 2013-02-22 at 09:36 +0100, Peter Zijlstra wrote:
> > > > On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
> > > > > But that's really a benefit that's hard to estimate, especially
> > > > > when the workload is heavy; the cost of wake_affine(), calculating
> > > > > each se one by one, is very high. Is that worth it for a benefit we
> > > > > can't promise?
On 02/22/2013 05:57 PM, Peter Zijlstra wrote:
> On Fri, 2013-02-22 at 17:11 +0800, Michael Wang wrote:
>
>> Ok, it does look like wake_affine() lost its value...
>
> I'm not sure we can say that based on this one benchmark; there's a
> preemption advantage to running on a single cpu for pipe-test as well.
On Fri, 2013-02-22 at 10:54 +0100, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > On Fri, 2013-02-22 at 09:36 +0100, Peter Zijlstra wrote:
> > > On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
> > > > But that's really a benefit that's hard to estimate, especially when
> > > > the workload is heavy; the cost of wake_affine(), calculating each se
> > > > one by one, is very high. Is that worth it for a benefit we can't
> > > > promise?
On 02/22/2013 05:39 PM, Peter Zijlstra wrote:
> On Fri, 2013-02-22 at 17:10 +0800, Michael Wang wrote:
>> On 02/22/2013 04:21 PM, Peter Zijlstra wrote:
>>> On Fri, 2013-02-22 at 10:36 +0800, Michael Wang wrote:
According to my understanding, in the old world wake_affine() will only
be used if curr_cpu and prev_cpu share cache, which means they are in
one package.
On Fri, 2013-02-22 at 17:11 +0800, Michael Wang wrote:
> Ok, it does look like wake_affine() lost its value...
I'm not sure we can say that based on this one benchmark; there's a
preemption advantage to running on a single cpu for pipe-test as well.
We'd need to create a better benchmark to test this,
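For what it's worth, the in-tree perf bench sched pipe tool implements this
kind of pipe-test pair; comparing a run confined to one CPU (e.g.
taskset -c 0 perf bench sched pipe) with an unpinned run is a quick, if
crude, way to observe the single-cpu preemption advantage described above.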
* Mike Galbraith wrote:
> On Fri, 2013-02-22 at 09:36 +0100, Peter Zijlstra wrote:
> > On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
> > > But that's really a benefit that's hard to estimate, especially when
> > > the workload is heavy; the cost of wake_affine(), calculating each se
> > > one by one, is very high. Is that worth it for a benefit we can't
> > > promise?
On Fri, 2013-02-22 at 09:36 +0100, Peter Zijlstra wrote:
> On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
> > But that's really a benefit that's hard to estimate, especially when
> > the workload is heavy; the cost of wake_affine(), calculating each se
> > one by one, is very high. Is that worth it for a benefit we can't
> > promise?
On Fri, 2013-02-22 at 17:10 +0800, Michael Wang wrote:
> On 02/22/2013 04:21 PM, Peter Zijlstra wrote:
> > On Fri, 2013-02-22 at 10:36 +0800, Michael Wang wrote:
> >> According to my understanding, in the old world wake_affine() will only
> >> be used if curr_cpu and prev_cpu share cache, which means they are in
> >> one package.
On 02/22/2013 04:36 PM, Peter Zijlstra wrote:
> On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
>> But that's really a benefit that's hard to estimate, especially when
>> the workload is heavy; the cost of wake_affine(), calculating each se
>> one by one, is very high. Is that worth it for a benefit we can't
>> promise?
On 02/22/2013 04:21 PM, Peter Zijlstra wrote:
> On Fri, 2013-02-22 at 10:36 +0800, Michael Wang wrote:
>> According to my understanding, in the old world wake_affine() will only
>> be used if curr_cpu and prev_cpu share cache, which means they are in
>> one package; whether we search in the llc sd of curr_cpu or of
>> prev_cpu, we won't have the chance
On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
> But that's really a benefit that's hard to estimate, especially when
> the workload is heavy; the cost of wake_affine(), calculating each se
> one by one, is very high. Is that worth it for a benefit we can't
> promise?
Look at something like
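The per-se cost being complained about comes from wake_affine()'s use of
effective_load(), which walks the sched_entity hierarchy for the CPUs
involved; a heavily simplified sketch of its shape, where
recompute_level_weight() is a placeholder, not a real kernel function:

	static long effective_load_sketch(struct task_group *tg, int cpu, long wl)
	{
		struct sched_entity *se = tg->se[cpu];

		/* one iteration per task-group nesting level: with deep
		 * cgroup hierarchies and a high wakeup rate this walk is
		 * where the cycles go */
		for_each_sched_entity(se)
			wl = recompute_level_weight(se, wl);

		return wl;
	}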
On 02/22/2013 04:17 PM, Mike Galbraith wrote:
> On Fri, 2013-02-22 at 14:42 +0800, Michael Wang wrote:
>
>> So this is trying to take care of the case where curr_cpu (local) and
>> prev_cpu (remote) are on different nodes, a case where, in the old
>> world, wake_affine() wouldn't be invoked, correct?
>
> It'll be called any time this_cpu and prev_cpu aren't one and the same.
On Fri, 2013-02-22 at 14:42 +0800, Michael Wang wrote:
> So this is trying to take care of the case where curr_cpu (local) and
> prev_cpu (remote) are on different nodes, a case where, in the old
> world, wake_affine() wouldn't be invoked, correct?
It'll be called any time this_cpu and prev_cpu aren't one and the same.
On Fri, 2013-02-22 at 10:36 +0800, Michael Wang wrote:
> According to my understanding, in the old world wake_affine() will only
> be used if curr_cpu and prev_cpu share cache, which means they are in
> one package; whether we search in the llc sd of curr_cpu or of
> prev_cpu, we won't have the chance
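In code terms, the "old world" condition described above is the affine_sd
lookup in select_task_rq_fair(); a trimmed sketch of the 3.8-era loop, not
a verbatim excerpt:

	for_each_domain(cpu, tmp) {
		/* affine_sd is only set when a domain of the waking cpu
		 * that has SD_WAKE_AFFINE also spans prev_cpu, i.e. the
		 * two cpus share that domain (typically one LLC/package) */
		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
			affine_sd = tmp;
			break;
		}
	}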
On 02/22/2013 01:02 PM, Mike Galbraith wrote:
> On Fri, 2013-02-22 at 10:36 +0800, Michael Wang wrote:
>> On 02/21/2013 05:43 PM, Mike Galbraith wrote:
>>> On Thu, 2013-02-21 at 17:08 +0800, Michael Wang wrote:
>>>
But does this patch set really cause a regression on your Q6600? It may
sacrifice some things, but I still think it will benefit far more,
especially on huge systems.
On Fri, 2013-02-22 at 14:06 +0800, Michael Wang wrote:
> On 02/22/2013 01:08 PM, Mike Galbraith wrote:
> > On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
> >
> >> According to the testing result, I could not agree that this purpose of
> >> wake_affine() benefits us, but I'm sure that wake_affine() is a terrible
> >> performance killer when the system is busy.
On Fri, 2013-02-22 at 13:26 +0800, Michael Wang wrote:
> Just to confirm that I'm not on the wrong track: does the 1:N mode here
> mean 1 task forked N threads, and the children always talk with the parent?
Yes, one server, many clients.
-Mike
On 02/22/2013 01:08 PM, Mike Galbraith wrote:
> On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
>
>> According to the testing result, I could not agree that this purpose of
>> wake_affine() benefits us, but I'm sure that wake_affine() is a terrible
>> performance killer when the system is busy.
>
>
On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
> According to the testing result, I could not agree that this purpose of
> wake_affine() benefits us, but I'm sure that wake_affine() is a terrible
> performance killer when the system is busy.
(hm, result is singular.. pgbench in 1:N mode only?)
On Fri, 2013-02-22 at 10:36 +0800, Michael Wang wrote:
> On 02/21/2013 05:43 PM, Mike Galbraith wrote:
> > On Thu, 2013-02-21 at 17:08 +0800, Michael Wang wrote:
> >
> >> But does this patch set really cause a regression on your Q6600? It may
> >> sacrifice some things, but I still think it will benefit far more,
> >> especially on huge systems.
On 02/21/2013 06:20 PM, Peter Zijlstra wrote:
> On Thu, 2013-02-21 at 12:51 +0800, Michael Wang wrote:
>> The old logic when locating affine_sd is:
>>
>> if prev_cpu != curr_cpu
>>         if wake_affine()
>>                 prev_cpu = curr_cpu
>> new_cpu = select_idle_sibling(prev_cpu)
>> return new_cpu
On 02/21/2013 05:43 PM, Mike Galbraith wrote:
> On Thu, 2013-02-21 at 17:08 +0800, Michael Wang wrote:
>
>> But does this patch set really cause a regression on your Q6600? It may
>> sacrifice some things, but I still think it will benefit far more,
>> especially on huge systems.
>
> We spread on FORK/EXEC, and will no longer pull communicating tasks
On Thu, 2013-02-21 at 12:51 +0800, Michael Wang wrote:
> The old logic when locating affine_sd is:
>
> if prev_cpu != curr_cpu
>         if wake_affine()
>                 prev_cpu = curr_cpu
> new_cpu = select_idle_sibling(prev_cpu)
> return new_cpu
>
> Th
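Rendered as C, the old flow quoted above corresponds to roughly this
fragment of 3.8-era select_task_rq_fair() (trimmed; a sketch, not a
verbatim excerpt):

	if (affine_sd) {
		/* wake_affine() decides whether to pull the wakee over to
		 * the waking cpu; either way the idle-sibling search then
		 * starts from prev_cpu (possibly just overwritten) */
		if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
			prev_cpu = cpu;

		new_cpu = select_idle_sibling(p, prev_cpu);
		goto unlock;
	}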
On Thu, 2013-02-21 at 17:08 +0800, Michael Wang wrote:
> But does this patch set really cause a regression on your Q6600? It may
> sacrifice some things, but I still think it will benefit far more,
> especially on huge systems.
We spread on FORK/EXEC, and will no longer pull communicating tasks
On 02/21/2013 04:10 PM, Mike Galbraith wrote:
> On Thu, 2013-02-21 at 15:00 +0800, Michael Wang wrote:
>> On 02/21/2013 02:11 PM, Mike Galbraith wrote:
>>> On Thu, 2013-02-21 at 12:51 +0800, Michael Wang wrote:
On 02/20/2013 06:49 PM, Ingo Molnar wrote:
[snip]
>> [snip]
if
On Thu, 2013-02-21 at 15:00 +0800, Michael Wang wrote:
> On 02/21/2013 02:11 PM, Mike Galbraith wrote:
> > On Thu, 2013-02-21 at 12:51 +0800, Michael Wang wrote:
> >> On 02/20/2013 06:49 PM, Ingo Molnar wrote:
> >> [snip]
> [snip]
> >>
> >> if wake_affine()
> >>         new_cpu = select_idle_sibling(curr_cpu)
> >> else
> >>         new_cpu = select_idle_sibling(prev_cpu)
On 02/21/2013 02:11 PM, Mike Galbraith wrote:
> On Thu, 2013-02-21 at 12:51 +0800, Michael Wang wrote:
>> On 02/20/2013 06:49 PM, Ingo Molnar wrote:
>> [snip]
[snip]
>>
>> if wake_affine()
>>         new_cpu = select_idle_sibling(curr_cpu)
>> else
>>         new_cpu = select_idle_sibling(prev_cpu)
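In C, the quoted proposal amounts to letting wake_affine() pick only the
cpu that seeds the idle-sibling search; a sketch using the names from the
mail:

	if (wake_affine(affine_sd, p, sync))
		new_cpu = select_idle_sibling(p, curr_cpu);
	else
		new_cpu = select_idle_sibling(p, prev_cpu);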
On Thu, 2013-02-21 at 12:51 +0800, Michael Wang wrote:
> On 02/20/2013 06:49 PM, Ingo Molnar wrote:
> [snip]
> >
> > The changes look clean and reasonable, any ideas exactly *why* it
> > speeds up?
> >
> > I.e. are there one or two key changes in the before/after logic
> > and scheduling patterns that you can identify as causing the
> > speedup?
On 02/20/2013 10:05 PM, Mike Galbraith wrote:
> On Wed, 2013-02-20 at 14:32 +0100, Peter Zijlstra wrote:
>> On Wed, 2013-02-20 at 11:49 +0100, Ingo Molnar wrote:
>>
>>> The changes look clean and reasonable,
>>
>> I don't necessarily agree, note that O(n^2) storage requirement that
>> Michael failed to highlight ;-)
On 02/20/2013 09:32 PM, Peter Zijlstra wrote:
> On Wed, 2013-02-20 at 11:49 +0100, Ingo Molnar wrote:
>
>> The changes look clean and reasonable,
>
> I don't necessarily agree, note that O(n^2) storage requirement that
> Michael failed to highlight ;-)
Forgive me for not explaining this point in co
On 02/20/2013 06:49 PM, Ingo Molnar wrote:
[snip]
>
> The changes look clean and reasonable, any ideas exactly *why* it
> speeds up?
>
> I.e. are there one or two key changes in the before/after logic
> and scheduling patterns that you can identify as causing the
> speedup?
Hi, Ingo
Thanks for
On Wed, 2013-02-20 at 14:32 +0100, Peter Zijlstra wrote:
> On Wed, 2013-02-20 at 11:49 +0100, Ingo Molnar wrote:
>
> > The changes look clean and reasonable,
>
> I don't necessarily agree, note that O(n^2) storage requirement that
> Michael failed to highlight ;-)
(yeah, I mentioned that needs
On Wed, 2013-02-20 at 11:49 +0100, Ingo Molnar wrote:
> The changes look clean and reasonable,
I don't necessarily agree, note that O(n^2) storage requirement that
Michael failed to highlight ;-)
> any ideas exactly *why* it speeds up?
That is indeed the most interesting part.. There's two parts
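The O(n^2) refers to the schedule balance map keeping, on every cpu, a
table indexed by every other cpu; a rough sketch of the shape, with field
names and sizes approximating the posted series rather than quoting it:

	struct sched_balance_map {
		struct sched_domain *sd[SBM_MAX_TYPE][SBM_MAX_LEVEL];
		/* per-cpu affine target map: this NR_CPUS-sized array,
		 * instantiated once per cpu below, is what makes total
		 * storage grow as nr_cpus * nr_cpus */
		struct sched_domain *affine_map[NR_CPUS];
	};

	DEFINE_PER_CPU(struct sched_balance_map, sbm_array);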
* Michael Wang wrote:
> v3 change log:
> Fix small logical issues (Thanks to Mike Galbraith).
> Change the way of handling WAKE.
>
> This patch set is trying to simplify select_task_rq_fair()
> with a schedule balance map.
>
> After getting rid of the complex code and reorganizing the
On 01/29/2013 05:08 PM, Michael Wang wrote:
> v3 change log:
> Fix small logical issues (Thanks to Mike Galbraith).
> Change the way of handling WAKE.
>
> This patch set is trying to simplify select_task_rq_fair() with a
> schedule balance map.
>
> After getting rid of the complex code