On Wed, May 15, 2024 at 01:06:13PM +0100 Qais Yousef wrote:
> On 05/15/24 07:20, Phil Auld wrote:
> > On Wed, May 15, 2024 at 10:32:38AM +0200 Peter Zijlstra wrote:
> > > On Tue, May 14, 2024 at 07:58:51PM -0400, Phil Auld wrote:
> > > >
> > > > Hi Qais
On Wed, May 15, 2024 at 10:32:38AM +0200 Peter Zijlstra wrote:
> On Tue, May 14, 2024 at 07:58:51PM -0400, Phil Auld wrote:
> >
> > Hi Qais,
> >
> > On Wed, May 15, 2024 at 12:41:12AM +0100 Qais Yousef wrote:
> > > rt_task() checks if a task has RT priority.
Hi Qais,
On Wed, May 15, 2024 at 12:41:12AM +0100 Qais Yousef wrote:
> rt_task() checks if a task has RT priority. But depending on your
> dictionary, this could mean it belongs to the RT class, or is a 'realtime'
> task, which includes the RT and DL classes.
>
> Since this has caused some confusion already
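For illustration, a minimal kernel-style sketch of the two readings, assuming the usual priority layout where deadline tasks use negative priorities below the 0..MAX_RT_PRIO-1 RT range; the helper names are illustrative, not the actual kernel definitions:

static inline bool task_is_rt_class(const struct task_struct *p)
{
	/* strict reading: only SCHED_FIFO/SCHED_RR belong to the RT class */
	return p->policy == SCHED_FIFO || p->policy == SCHED_RR;
}

static inline bool task_is_realtime(const struct task_struct *p)
{
	/*
	 * loose reading: any prio below MAX_RT_PRIO, which also catches
	 * deadline tasks because their prio is negative
	 */
	return p->prio < MAX_RT_PRIO;
}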
On Mon, Apr 19, 2021 at 06:17:47PM +0100 Valentin Schneider wrote:
> On 19/04/21 08:59, Phil Auld wrote:
> > On Fri, Apr 16, 2021 at 10:43:38AM +0100 Valentin Schneider wrote:
> >> On 15/04/21 16:39, Rik van Riel wrote:
> >> > On Thu, 2021-04-15 at 18:58 +0100, Valentin Schneider wrote:
On Fri, Apr 16, 2021 at 10:43:38AM +0100 Valentin Schneider wrote:
> On 15/04/21 16:39, Rik van Riel wrote:
> > On Thu, 2021-04-15 at 18:58 +0100, Valentin Schneider wrote:
> >> Consider the following topology:
> >>
> >> Long story short, preempted misfit tasks are affected by task_hot(),
> >> while
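The excerpt is cut short, but the point is that the cache-hot check does not special-case misfit tasks. A simplified, kernel-style sketch of that gate; the real task_hot() in kernel/sched/fair.c takes more inputs, so treat the signature as illustrative:

static bool sketch_task_hot(u64 now_ns, u64 exec_start_ns, u64 migration_cost_ns)
{
	/*
	 * A task that ran recently is considered cache-hot and is left in
	 * place, even if it is a misfit task that a bigger CPU would serve
	 * better.
	 */
	return (s64)(now_ns - exec_start_ns) < (s64)migration_cost_ns;
}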
On Thu, Mar 18, 2021 at 09:26:58AM +0800 changhuaixin wrote:
>
>
> > On Mar 17, 2021, at 4:06 PM, Peter Zijlstra wrote:
> >
> > On Wed, Mar 17, 2021 at 03:16:18PM +0800, changhuaixin wrote:
> >
> >>> Why do you allow such a large burst? I would expect something like:
> >>>
> >>> if (burst >
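The quoted suggestion is truncated and the exact bound Peter proposed is not visible here; as a hedged guess at its shape, capping the burst against the quota would look like:

	/* illustrative only: reject a burst larger than the quota itself */
	if (burst > quota)
		return -EINVAL;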
On Mon, Nov 09, 2020 at 03:38:15PM +0000 Mel Gorman wrote:
> On Mon, Nov 09, 2020 at 10:24:11AM -0500, Phil Auld wrote:
> > Hi,
> >
> > On Fri, Nov 06, 2020 at 04:00:10PM +0000 Mel Gorman wrote:
> > > On Fri, Nov 06, 2020 at 02:33:56PM +0100, Vincent Guittot wrote:
Hi,
On Fri, Nov 06, 2020 at 04:00:10PM +0000 Mel Gorman wrote:
> On Fri, Nov 06, 2020 at 02:33:56PM +0100, Vincent Guittot wrote:
> > On Fri, 6 Nov 2020 at 13:03, Mel Gorman wrote:
> > >
> > > On Wed, Nov 04, 2020 at 09:42:05AM +0000, Mel Gorman wrote:
> > > > While it's possible that some other
Hi,
On Mon, Nov 02, 2020 at 12:06:21PM +0100 Vincent Guittot wrote:
> On Mon, 2 Nov 2020 at 11:50, Mel Gorman wrote:
> >
> > On Tue, Jul 14, 2020 at 08:59:41AM -0400, peter.pu...@linaro.org wrote:
> > > From: Peter Puhov
> > >
> > > v0: https://lkml.org/lkml/2020/6/16/1286
> > >
> > > Changes in
On Fri, Oct 30, 2020 at 10:16:29PM +0000 David Laight wrote:
> From: Benjamin Segall
> > Sent: 30 October 2020 18:48
> >
> > Hui Su writes:
> >
> > > Since 'ab93a4bc955b ("sched/fair: Remove
> > > distribute_running from CFS bandwidth")', there is
> > > nothing to protect between raw_spin_lock_irq
> @@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
> return;
>
> distribute_cfs_runtime(cfs_b);
> -
> - raw_spin_lock_irqsave(&cfs_b->lock, flags);
> - raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
> }
>
> /*
> --
> 2.29.0
>
>
Nice :)
Reviewed-by: Phil Auld
--
Hi John,
On Wed, Oct 28, 2020 at 05:19:09AM -0700 John B. Wyatt IV wrote:
> Patchset of style and small fixes for the 8th iteration of the
> Core-Scheduling feature.
>
> Style fixes include changing spaces to tabs, inserting new lines before
> declarations, removing unused braces, and spelling.
>
On Thu, Oct 22, 2020 at 09:32:55PM +0100 Mel Gorman wrote:
> On Thu, Oct 22, 2020 at 07:59:43PM +0200, Rafael J. Wysocki wrote:
> > > > Agreed. I'd like the option to switch back if we make the default
> > > > change.
> > > > It's on the table and I'd like to be able to go that way.
> > > >
> > >
On Thu, Oct 22, 2020 at 03:58:13PM +0100 Colin Ian King wrote:
> On 22/10/2020 15:52, Mel Gorman wrote:
> > On Thu, Oct 22, 2020 at 02:29:49PM +0200, Peter Zijlstra wrote:
> >> On Thu, Oct 22, 2020 at 02:19:29PM +0200, Rafael J. Wysocki wrote:
> However I do want to retire ondemand, conservative
> - update_tg_load_avg(cfs_rq, false);
> + update_tg_load_avg(cfs_rq);
> propagate_entity_cfs_rq(se);
> }
>
> @@ -10805,7 +10804,7 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
> /* Synchronize entity with its cfs_rq */
> update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
> attach_entity_load_avg(cfs_rq, se);
> - update_tg_load_avg(cfs_rq, false);
> + update_tg_load_avg(cfs_rq);
> propagate_entity_cfs_rq(se);
> }
>
> --
> 2.17.1
>
LGTM,
Reviewed-by: Phil Auld
--
On Thu, Sep 24, 2020 at 10:43:12AM -0700 Tim Chen wrote:
>
>
> On 9/24/20 10:13 AM, Phil Auld wrote:
> > On Thu, Sep 24, 2020 at 09:37:33AM -0700 Tim Chen wrote:
> >>
> >>
> >> On 9/22/20 12:14 AM, Vincent Guittot wrote:
> >>
> >>>
On Thu, Sep 24, 2020 at 09:37:33AM -0700 Tim Chen wrote:
>
>
> On 9/22/20 12:14 AM, Vincent Guittot wrote:
>
> >>
>
> And a quick test with hackbench on my octo cores arm64 gives for 12
>
> Vincent,
>
> Is it octo (=10) or octa (=8) cores on a single socket for your system?
In what
Hi,
On Tue, Sep 22, 2020 at 02:54:01PM +0800 Huang Ying wrote:
> Now, AutoNUMA can only optimize the page placement among the NUMA nodes if the
> default memory policy is used, because a memory policy specified explicitly
> should take precedence. But this seems too strict in some situations.
On Fri, Sep 18, 2020 at 12:39:28PM -0400 Phil Auld wrote:
> Hi Peter,
>
> On Mon, Sep 14, 2020 at 01:42:02PM +0200 pet...@infradead.org wrote:
> > On Mon, Sep 14, 2020 at 12:03:36PM +0200, Vincent Guittot wrote:
> > > Vincent Guittot (4):
> > > sched/fair: relax constraint on task's load during load balance
Hi Peter,
On Mon, Sep 14, 2020 at 01:42:02PM +0200 pet...@infradead.org wrote:
> On Mon, Sep 14, 2020 at 12:03:36PM +0200, Vincent Guittot wrote:
> > Vincent Guittot (4):
> > sched/fair: relax constraint on task's load during load balance
> > sched/fair: reduce minimal imbalance threshold
> >
On Mon, Sep 14, 2020 at 01:42:02PM +0200 pet...@infradead.org wrote:
> On Mon, Sep 14, 2020 at 12:03:36PM +0200, Vincent Guittot wrote:
> > Vincent Guittot (4):
> > sched/fair: relax constraint on task's load during load balance
> > sched/fair: reduce minimal imbalance threshold
> > sched/fai
Hi Qais,
On Mon, Sep 07, 2020 at 12:02:24PM +0100 Qais Yousef wrote:
> On 09/02/20 09:54, Phil Auld wrote:
> > >
> > > I think this decoupling is not necessary. The natural place for those
> > > scheduler trace_event based on trace_points extension files is
On Thu, Sep 03, 2020 at 03:30:15PM -0300 Marcelo Tosatti wrote:
> On Thu, Sep 03, 2020 at 03:23:59PM -0300, Marcelo Tosatti wrote:
> > On Tue, Sep 01, 2020 at 12:46:41PM +0200, Frederic Weisbecker wrote:
> > > Hi,
> >
> > Hi Frederic,
> >
> > Thanks for the summary! Looking forward to your commen
On Wed, Sep 02, 2020 at 12:44:42PM +0200 Dietmar Eggemann wrote:
> + Phil Auld
>
Thanks Dietmar.
> On 28/08/2020 19:26, Qais Yousef wrote:
> > On 08/28/20 19:10, Dietmar Eggemann wrote:
> >> On 28/08/2020 12:27, Qais Yousef wrote:
> >>> On 08/28/20 10
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: a1bd06853ee478d37fae9435c5521e301de94c67
Gitweb: https://git.kernel.org/tip/a1bd06853ee478d37fae9435c5521e301de94c67
Author: Phil Auld
AuthorDate: Wed, 05 Aug 2020 16:31:38 -04:00
Committer
The count field is meant to tell if an update to nr_running
is an add or a subtract. Make it do so by adding the missing
minus sign.
Fixes: 9d246053a691 ("sched: Add a tracepoint to track rq->nr_running")
Signed-off-by: Phil Auld
---
kernel/sched/sched.h | 2 +-
1 file changed
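A sketch of the one-character fix described above, simplified from kernel/sched/sched.h; the wrapper name matches the tracepoint series quoted elsewhere in this listing, but treat the body as illustrative:

static inline void sub_nr_running(struct rq *rq, unsigned count)
{
	rq->nr_running -= count;
	/*
	 * Pass a negative count so consumers can tell subtracts from adds;
	 * previously this passed count without the minus sign.
	 */
	call_trace_sched_update_nr_running(rq, -count);
}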
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 9d246053a69196c7c27068870e9b4b66ac536f68
Gitweb: https://git.kernel.org/tip/9d246053a69196c7c27068870e9b4b66ac536f68
Author: Phil Auld
AuthorDate: Mon, 29 Jun 2020 15:23:03 -04:00
Committer
Hi Peter,
On Thu, Jul 02, 2020 at 02:52:11PM +0200 Peter Zijlstra wrote:
>
> Dave hit the problem fixed by commit:
>
> b6e13e85829f ("sched/core: Fix ttwu() race")
>
> and failed to understand much of the code involved. Per his request a
> few comments to (hopefully) clarify things.
>
> Re
On Fri, Jun 26, 2020 at 11:10:28AM -0400 Joel Fernandes wrote:
> On Fri, Jun 26, 2020 at 10:36:01AM -0400, Vineeth Remanan Pillai wrote:
> > On Thu, Jun 25, 2020 at 9:47 PM Joel Fernandes
> > wrote:
> > >
> > > On Thu, Jun 25, 2020 at 4:12 PM Vineeth Remanan Pillai
> > > wrote:
> > > [...]
> > >
Tracepoints are added to add_nr_running() and sub_nr_running(), which
are in kernel/sched/sched.h. In order to avoid CREATE_TRACE_POINTS in
the header, a wrapper call is used and the trace/events/sched.h include
is moved before sched.h in kernel/sched/core.c.
Signed-off-by: Phil Auld
CC: Qais Yousef
CC
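A sketch of the wrapper pattern described above: the header only sees a plain function declaration, while the definition lives where CREATE_TRACE_POINTS is safe to define. Simplified, not the verbatim patch:

/* kernel/sched/sched.h: no tracepoint machinery needed here */
extern void call_trace_sched_update_nr_running(struct rq *rq, int count);

/*
 * kernel/sched/core.c: trace/events/sched.h was included earlier, with
 * CREATE_TRACE_POINTS defined, so the tracepoint symbol exists here.
 */
void call_trace_sched_update_nr_running(struct rq *rq, int count)
{
	trace_sched_update_nr_running_tp(rq, count);
}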
Hi Qais,
On Mon, Jun 22, 2020 at 01:17:47PM +0100 Qais Yousef wrote:
> On 06/19/20 10:11, Phil Auld wrote:
> > Add a bare tracepoint trace_sched_update_nr_running_tp which tracks
> > ->nr_running CPU's rq. This is used to accurately trace this data and
> > provide
On Fri, Jun 19, 2020 at 12:46:41PM -0400 Steven Rostedt wrote:
> On Fri, 19 Jun 2020 10:11:20 -0400
> Phil Auld wrote:
>
> >
> > diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
> > index ed168b0e2c53..a6d9fe5a68cf 100644
> > --- a/inclu
Tracepoints are added to add_nr_running() and sub_nr_running(), which
are in kernel/sched/sched.h. Since sched.h includes trace/events/tlb.h
via mmu_context.h, we had to limit when CREATE_TRACE_POINTS is defined.
Signed-off-by: Phil Auld
CC: Qais Yousef
CC: Ingo Molnar
CC: Peter Zijlstra
CC: Vincen
On Tue, Jun 09, 2020 at 07:05:38AM +0800 Tao Zhou wrote:
> Hi Phil,
>
> On Mon, Jun 08, 2020 at 10:53:04AM -0400, Phil Auld wrote:
> > On Sun, Jun 07, 2020 at 09:25:58AM +0800 Tao Zhou wrote:
> > > Hi,
> > >
> > > On Fri, May 01, 2020 at 06:
> > don't start a distribution while one is already running. However, even
> > in the event that this race occurs, it is fine to have two distributions
> > running (especially now that distribute grabs the cfs_b->lock to
> > determine remaining quota before assigning).
On Thu, May 28, 2020 at 02:17:19PM -0400 Phil Auld wrote:
> On Thu, May 28, 2020 at 07:01:28PM +0200 Peter Zijlstra wrote:
> > On Sun, May 24, 2020 at 10:00:46AM -0400, Phil Auld wrote:
> > > On Fri, May 22, 2020 at 05:35:24PM -0400 Joel Fernandes wrote:
> > > > On F
On Thu, May 28, 2020 at 07:01:28PM +0200 Peter Zijlstra wrote:
> On Sun, May 24, 2020 at 10:00:46AM -0400, Phil Auld wrote:
> > On Fri, May 22, 2020 at 05:35:24PM -0400 Joel Fernandes wrote:
> > > On Fri, May 22, 2020 at 02:59:05PM +0200, Peter Zijlstra wrote:
> > >
On Fri, May 22, 2020 at 05:35:24PM -0400 Joel Fernandes wrote:
> On Fri, May 22, 2020 at 02:59:05PM +0200, Peter Zijlstra wrote:
> [..]
> > > > It doesn't allow tasks to form their own groups (by for example setting
> > > > the key to that of another task).
> > >
> > > So for this, I was thinking
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: b34cb07dde7c2346dec73d053ce926aeaa087303
Gitweb: https://git.kernel.org/tip/b34cb07dde7c2346dec73d053ce926aeaa087303
Author: Phil Auld
AuthorDate: Tue, 12 May 2020 09:52:22 -04:00
Committer
On Wed, May 13, 2020 at 03:25:29PM +0200 Vincent Guittot wrote:
> On Wed, 13 May 2020 at 15:18, Phil Auld wrote:
> >
> > On Wed, May 13, 2020 at 03:15:53PM +0200 Vincent Guittot wrote:
> > > On Wed, 13 May 2020 at 15:13, Phil Auld wrote:
> > > >
On Wed, May 13, 2020 at 03:15:53PM +0200 Vincent Guittot wrote:
> On Wed, 13 May 2020 at 15:13, Phil Auld wrote:
> >
> > On Wed, May 13, 2020 at 03:10:28PM +0200 Vincent Guittot wrote:
> > > On Wed, 13 May 2020 at 14:45, Phil Auld wrote:
> > > >
> > >
On Wed, May 13, 2020 at 03:10:28PM +0200 Vincent Guittot wrote:
> On Wed, 13 May 2020 at 14:45, Phil Auld wrote:
> >
> > Hi Vincent,
> >
> > On Wed, May 13, 2020 at 02:33:35PM +0200 Vincent Guittot wrote:
> > > enqueue_task_fair jumps to enqueue_
the same pattern as
> enqueue_task_fair(). This fixes a problem already faced with the latter and
> adds an optimization in the last for_each_sched_entity loop.
>
> Reported-by: Tao Zhou
> Reviewed-by: Phil Auld
> Signed-off-by: Vincent Guittot
> ---
>
> v2 changes:
> - R
If it doesn't jump to the label, then se must be NULL for
the loop to terminate. The final loop is a NOP if se is NULL. The check
wasn't protecting that.
Otherwise still
> Reviewed-by: Phil Auld
Cheers,
Phil
> Signed-off-by: Vincent Guittot
> ---
>
> v2 changes:
> -
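A standalone sketch of the control flow reasoned about above, with the CFS details stripped out (simplified types, illustrative only):

struct sched_entity { struct sched_entity *parent; int throttled; };

static void dequeue_sketch(struct sched_entity *se)
{
	for (; se; se = se->parent) {
		if (se->throttled)
			goto dequeue_throttle;	/* se is non-NULL here */
	}
	/* no jump taken: the first loop only ends once se == NULL ... */
	for (; se; se = se->parent) {
		/* ... so this final loop is a NOP */
	}
dequeue_throttle:
	return;
}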
with this one as well. As expected, since
the first patch fixed the issue I was seeing and I wasn't hitting
the assert here anyway, I didn't hit the assert.
But I also didn't hit any other issues, new or old.
It makes sense to use the same logic flow here as enqueue_task_fair.
Reviewed-by: Phil Auld
Cheers,
Phil
--
uct *p, int flags)
>
> }
>
> +enqueue_throttle:
> if (cfs_bandwidth_used()) {
> /*
> * When bandwidth control is enabled; the cfs_rq_throttled()
> --
> 2.17.1
>
Reviewed-by: Phil Auld
--
On Tue, May 12, 2020 at 04:10:48PM +0200 Peter Zijlstra wrote:
> On Tue, May 12, 2020 at 09:52:22AM -0400, Phil Auld wrote:
> > sched/fair: Fix enqueue_task_fair warning some more
> >
> > The recent patch, fe61468b2cb (sched/fair: Fix enqueue_task_fair warning)
> >
add fixes and review tags.
Suggested-by: Vincent Guittot
Signed-off-by: Phil Auld
Cc: Peter Zijlstra (Intel)
Cc: Vincent Guittot
Cc: Ingo Molnar
Cc: Juri Lelli
Reviewed-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
Fixes: fe61468b2cb (sched/fair: Fix enqueue_task_fair warning)
---
ke
Hi Dietmar,
On Tue, May 12, 2020 at 11:00:16AM +0200 Dietmar Eggemann wrote:
> On 11/05/2020 22:44, Phil Auld wrote:
> > On Mon, May 11, 2020 at 09:25:43PM +0200 Vincent Guittot wrote:
> >> On Thu, 7 May 2020 at 22:36, Phil Auld wrote:
> >>>
> >>> sche
On Mon, May 11, 2020 at 09:25:43PM +0200 Vincent Guittot wrote:
> On Thu, 7 May 2020 at 22:36, Phil Auld wrote:
> >
> > sched/fair: Fix enqueue_task_fair warning some more
> >
> > The recent patch, fe61468b2cb (sched/fair: Fix enqueue_task_fair warning)
> > did
first loop.
Address this by calling list_add_leaf_cfs_rq() if there are throttled parents
while doing the second for_each_sched_entity loop.
Suggested-by: Vincent Guittot
Signed-off-by: Phil Auld
Cc: Peter Zijlstra (Intel)
Cc: Vincent Guittot
Cc: Ingo Molnar
Cc: Juri Lelli
---
kernel/sched/fair.c | 7
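The shape of that fix, simplified from the hunk merged as commit b34cb07dde7c (its tip-bot notice appears earlier in this listing); in the second for_each_sched_entity loop of enqueue_task_fair(), the cfs_rq is re-added to the leaf list:

for_each_sched_entity(se) {
	cfs_rq = cfs_rq_of(se);

	update_load_avg(cfs_rq, se, UPDATE_TG);
	update_cfs_group(se);

	/*
	 * A throttled parent removed this cfs_rq from the leaf list;
	 * add it back so the leaf cfs_rq list is not left broken.
	 */
	if (throttled_hierarchy(cfs_rq))
		list_add_leaf_cfs_rq(cfs_rq);
}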
Hi Vincent,
On Thu, May 07, 2020 at 05:06:29PM +0200 Vincent Guittot wrote:
> Hi Phil,
>
> On Wed, 6 May 2020 at 20:05, Phil Auld wrote:
> >
> > Hi Vincent,
> >
> > Thanks for taking a look. More below...
> >
> > On Wed, May 06, 2020 at 06:36:45
On Thu, May 07, 2020 at 06:29:44PM +0200 Jirka Hladky wrote:
> Hi Mel,
>
> we are not targeting just OMP applications. We see the performance
> degradation also for other workloads, like SPECjbb2005 and
> SPECjvm2008. Even worse, it also affects a higher number of threads.
> For example, comparing
Hi Vincent,
Thanks for taking a look. More below...
On Wed, May 06, 2020 at 06:36:45PM +0200 Vincent Guittot wrote:
> Hi Phil,
>
> - reply to all this time
>
> On Wed, 6 May 2020 at 16:18, Phil Auld wrote:
> >
> > sched/fair: Fix enqueue_task_fair warning some mo
first loop.
Address this issue by saving the se pointer when the first loop exits and
resetting it before doing the fix up, if needed.
Signed-off-by: Phil Auld
Cc: Peter Zijlstra (Intel)
Cc: Vincent Guittot
Cc: Ingo Molnar
Cc: Juri Lelli
---
kernel/sched/fair.c | 4
1 file changed, 4 insertion
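A sketch of the save-and-reset approach this earlier version describes; illustrative shape only, not the actual hunk (the fix eventually merged took the list_add_leaf_cfs_rq() route instead):

struct sched_entity *saved_se = NULL;

for_each_sched_entity(se) {
	cfs_rq = cfs_rq_of(se);
	enqueue_entity(cfs_rq, se, flags);
	if (cfs_rq_throttled(cfs_rq)) {
		saved_se = se;		/* remember where the first loop exited */
		break;
	}
}
/* ... second loop runs here ... */
if (saved_se)
	se = saved_se;			/* reset before doing the fix up */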
On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> On Mon, 21 Oct 2019 at 09:50, Ingo Molnar wrote:
> >
> >
> > * Vincent Guittot wrote:
> >
> > > Several wrong task placements have been raised with the current load
> > > balance algorithm but their fixes are not always straightforward
On Tue, Oct 08, 2019 at 05:53:11PM +0200 Vincent Guittot wrote:
> Hi Phil,
>
...
> While preparing v4, I have noticed that I have probably oversimplified
> the end of find_idlest_group() in patch "sched/fair: optimize
> find_idlest_group" when it compares local vs the idlest other group.
> Especially
Hi Vincent,
On Thu, Sep 19, 2019 at 09:33:31AM +0200 Vincent Guittot wrote:
> Several wrong task placements have been raised with the current load
> balance algorithm but their fixes are not always straightforward and
> end up using biased values to force migrations. A cleanup and rework
> of
20, cfs_quota_us = 3200)
[ 1393.965140] cfs_period_timer[cpu11]: period too short, but cannot scale up without losing precision (cfs_period_us = 20, cfs_quota_us = 3200)
I suspect going higher could cause the original lockup, but that'd be the case
with the old code as well.
An
Hi Xuewei,
On Fri, Oct 04, 2019 at 05:28:15PM -0700 Xuewei Zhang wrote:
> On Fri, Oct 4, 2019 at 6:14 AM Phil Auld wrote:
> >
> > On Thu, Oct 03, 2019 at 07:05:56PM -0700 Xuewei Zhang wrote:
> > > +cc neeln...@google.com and hao...@google.com, they helped a lot
>
On Thu, Oct 03, 2019 at 07:05:56PM -0700 Xuewei Zhang wrote:
> +cc neeln...@google.com and hao...@google.com, they helped a lot
> for this issue. Sorry I forgot to include them when sending out the patch.
>
> On Thu, Oct 3, 2019 at 5:55 PM Phil Auld wrote:
> >
> > Hi
Hi,
On Thu, Oct 03, 2019 at 05:12:43PM -0700 Xuewei Zhang wrote:
> quota/period ratio is used to ensure a child task group won't get more
> bandwidth than the parent task group, and is calculated as:
> normalized_cfs_quota() = [(quota_us << 20) / period_us]
>
> If the quota/period ratio was changed
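A standalone illustration of the quoted formula, using the values from the log lines above (cfs_period_us = 20, cfs_quota_us = 3200):

#include <stdio.h>

/* normalized_cfs_quota() = [(quota_us << 20) / period_us] */
static unsigned long long normalized_quota(unsigned long long quota_us,
					   unsigned long long period_us)
{
	return (quota_us << 20) / period_us;
}

int main(void)
{
	/* 3200/20 is 160 CPUs' worth of quota, kept as 20-bit fixed point */
	printf("%llu\n", normalized_quota(3200, 20));	/* prints 167772160 */
	return 0;
}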
wrong
group in find_busiest_group due to using the average load. The second was in
fix_small_imbalance(). The "load" of the lu.C tasks was so low it often failed
to move anything even when it did find a group that was overloaded
(nr_running > width). I have two small patches which fix this but since
Vincent was embarking on a re-work which also addressed this I dropped them.
We've also run a series of performance tests we use to check for regressions
and did not find any bad results on our workloads and systems.
So...
Tested-by: Phil Auld
Cheers,
Phil
--
On Wed, Aug 28, 2019 at 06:01:14PM +0200 Peter Zijlstra wrote:
> On Wed, Aug 28, 2019 at 11:30:34AM -0400, Phil Auld wrote:
> > On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote:
>
> > > And given MDS, I'm still not entirely convinced it all makes sense.
On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote:
> On Tue, Aug 27, 2019 at 10:14:17PM +0100, Matthew Garrett wrote:
> > Apple have provided a sysctl that allows applications to indicate that
> > specific threads should make use of core isolation while allowing
> > the rest of the sy
On Fri, Aug 23, 2019 at 10:28:02AM -0700 bseg...@google.com wrote:
> Dave Chiluk writes:
>
> > On Wed, Aug 21, 2019 at 12:36 PM wrote:
> >>
> >> Qian Cai writes:
> >>
> >> > The linux-next commit "sched/fair: Fix low cpu usage with high
> >> > throttling by removing expiration of cpu-local slices
online_fair_sched_group() does:
raw_spin_lock(&rq->lock);
update_rq_clock(rq);
which triggers the warning because of not using the rq_lock wrappers.
So, use the wrappers.
Signed-off-by: Phil Auld
Cc: Peter Zijlstra (Intel)
Cc: Ingo Molnar
Cc: Valentin Schneider
Cc: Dietmar Eggemann
---
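A sketch of the replacement described above, simplified from the fix to online_fair_sched_group(); treat the surrounding calls as illustrative context:

struct rq_flags rf;

rq_lock_irq(rq, &rf);		/* was: raw_spin_lock(&rq->lock) */
update_rq_clock(rq);
attach_entity_cfs_rq(se);
rq_unlock_irq(rq, &rf);		/* was: raw_spin_unlock(&rq->lock) */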
On Fri, Aug 09, 2019 at 06:43:09PM +0100 Valentin Schneider wrote:
> On 09/08/2019 14:33, Phil Auld wrote:
> > On Tue, Aug 06, 2019 at 03:03:34PM +0200 Peter Zijlstra wrote:
> >> On Thu, Aug 01, 2019 at 09:37:49AM -0400, Phil Auld wrote:
> >>> Enabling WARN_DOUBLE_CLOCK in /sys/kernel/debug/sched_features causes
On Mon, Aug 12, 2019 at 05:52:04AM -0700 tip-bot for Phil Auld wrote:
> Commit-ID: a46d14eca7b75fffe35603aa8b81df654353d80f
> Gitweb: https://git.kernel.org/tip/a46d14eca7b75fffe35603aa8b81df654353d80f
> Author: Phil Auld
> AuthorDate: Thu, 1 Aug 2019 09:37:49 -0400
Commit-ID: a46d14eca7b75fffe35603aa8b81df654353d80f
Gitweb: https://git.kernel.org/tip/a46d14eca7b75fffe35603aa8b81df654353d80f
Author: Phil Auld
AuthorDate: Thu, 1 Aug 2019 09:37:49 -0400
Committer: Thomas Gleixner
CommitDate: Mon, 12 Aug 2019 14:45:34 +0200
sched/fair: Use rq_lock() in online_fair_sched_group()
On Fri, Aug 09, 2019 at 06:21:22PM +0200 Dietmar Eggemann wrote:
> On 8/8/19 1:01 PM, tip-bot for Phil Auld wrote:
>
> [...]
>
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 19c58599e967..d9407517dae9 100644
> > --- a/kernel/sched/fair.c
On Tue, Aug 06, 2019 at 03:03:34PM +0200 Peter Zijlstra wrote:
> On Thu, Aug 01, 2019 at 09:37:49AM -0400, Phil Auld wrote:
> > Enabling WARN_DOUBLE_CLOCK in /sys/kernel/debug/sched_features causes
>
> ISTR there were more issues; but it sure is good to start picking them
> off
Commit-ID: 6b8fd01b21f5f2701b407a7118f236ba4c41226d
Gitweb: https://git.kernel.org/tip/6b8fd01b21f5f2701b407a7118f236ba4c41226d
Author: Phil Auld
AuthorDate: Thu, 1 Aug 2019 09:37:49 -0400
Committer: Peter Zijlstra
CommitDate: Thu, 8 Aug 2019 09:09:31 +0200
sched/fair: Use rq_lock() in online_fair_sched_group()
On Tue, Aug 06, 2019 at 10:41:25PM +0800 Aaron Lu wrote:
> On 2019/8/6 22:17, Phil Auld wrote:
> > On Tue, Aug 06, 2019 at 09:54:01PM +0800 Aaron Lu wrote:
> >> On Mon, Aug 05, 2019 at 04:09:15PM -0400, Phil Auld wrote:
> >>> Hi,
> >>>
On Tue, Aug 06, 2019 at 09:54:01PM +0800 Aaron Lu wrote:
> On Mon, Aug 05, 2019 at 04:09:15PM -0400, Phil Auld wrote:
> > Hi,
> >
> > On Fri, Aug 02, 2019 at 11:37:15AM -0400 Julien Desfossez wrote:
> > > We tested both Aaron's and Tim's patches and here
On Tue, Aug 06, 2019 at 02:04:16PM +0800 Hillf Danton wrote:
>
> On Mon, 5 Aug 2019 22:07:05 +0800 Phil Auld wrote:
> >
> > If we're to clear that flag right there, outside of the lock pinning code,
> > then I think we might as well just remove the flag and all as
Hi,
On Fri, Aug 02, 2019 at 11:37:15AM -0400 Julien Desfossez wrote:
> We tested both Aaron's and Tim's patches and here are our results.
>
> Test setup:
> - 2 1-thread sysbench, one running the cpu benchmark, the other one the
> mem benchmark
> - both started at the same time
> - both are pinn
On Fri, Aug 02, 2019 at 05:20:38PM +0800 Hillf Danton wrote:
>
> On Thu, 1 Aug 2019 09:37:49 -0400 Phil Auld wrote:
> >
> > Enabling WARN_DOUBLE_CLOCK in /sys/kernel/debug/sched_features causes
> > warning to fire in update_rq_clock. This seems to be caused by onlining
Using the wrappers in online_fair_sched_group() instead of
the raw locking removes this warning.
Signed-off-by: Phil Auld
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Vincent Guittot
---
Resend with PATCH instead of CHANGE in subject, and more recent upstream x86
backtrace.
kernel/sched/fair.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
On Fri, Jul 26, 2019 at 04:54:11PM +0200 Peter Zijlstra wrote:
> Make sure the entire for loop has stop_cpus_in_progress set.
>
> Cc: Valentin Schneider
> Cc: Aaron Lu
> Cc: keesc...@chromium.org
> Cc: mi...@kernel.org
> Cc: Pawan Gupta
> Cc: Phil Auld
> Cc: torva..
0/0x130
[ 612.546585] online_fair_sched_group+0x70/0x140
[ 612.551092] sched_online_group+0xd0/0xf0
[ 612.555082] sched_autogroup_create_attach+0xd0/0x198
[ 612.560108] sys_setsid+0x140/0x160
[ 612.563579] el0_svc_naked+0x44/0x48
Signed-off-by: Phil Auld
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc:
the kernel just fine in cgroup v2. A user who wishes
> for the previous affinity mask to be restored in this fallback case can use
> that mechanism instead.
>
> This patch modifies scheduler behavior by instead resetting the mask to
> task_cs(tsk)->cpus_allowed by default, and cpu_possible_mask in legacy mode.
On Tue, Jun 11, 2019 at 04:24:43PM +0200 Peter Zijlstra wrote:
> On Tue, Jun 11, 2019 at 10:12:19AM -0400, Phil Auld wrote:
>
> > That looks reasonable to me.
> >
> > Out of curiosity, why not bool? Is sizeof bool architecture dependent?
>
> Yeah, sizeof(_Bool)
On Tue, Jun 11, 2019 at 03:53:25PM +0200 Peter Zijlstra wrote:
> On Thu, Jun 06, 2019 at 10:21:01AM -0700, bseg...@google.com wrote:
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index efa686eeff26..60219acda94b 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
>
> bool distribute_running;
> + bool slack_started;
> #endif
> };
>
> --
> 2.22.0.rc1.257.g3120a18244-goog
>
I think this looks good. I like not delaying that further even if it
does not fix Dave's use case.
It does make it glaring that I should have used false/true for setting
distribute_running though :)
Acked-by: Phil Auld
--
On Fri, May 24, 2019 at 10:14:36AM -0500 Dave Chiluk wrote:
> On Fri, May 24, 2019 at 9:32 AM Phil Auld wrote:
> > On Thu, May 23, 2019 at 02:01:58PM -0700 Peter Oskolkov wrote:
>
> > > If the machine runs at/close to capacity, won't the overallocation
> >
On Sat, May 18, 2019 at 11:37:56PM +0800 Aubrey Li wrote:
> On Wed, Apr 24, 2019 at 12:18 AM Vineeth Remanan Pillai
> wrote:
> >
> > From: Peter Zijlstra (Intel)
> >
> > Instead of only selecting a local task, select a task for all SMT
> > siblings for every reschedule on the core (irrespective w
On Mon, Apr 29, 2019 at 09:25:35PM +0800 Li, Aubrey wrote:
> On 2019/4/29 14:14, Ingo Molnar wrote:
> >
> > * Li, Aubrey wrote:
> >
> >>> I suspect it's pretty low, below 1% for all rows?
> >>
> >> Hope my this mail box works for this...
> >>
On Fri, Apr 26, 2019 at 04:13:07PM +0200 Peter Zijlstra wrote:
> On Thu, Apr 25, 2019 at 10:26:53AM -0400, Phil Auld wrote:
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index e8e5f26db052..b312ea1e28a4 100644
> > --- a/kernel/sched/core.c
> >
On Thu, Apr 25, 2019 at 08:53:43PM +0200 Ingo Molnar wrote:
> Interesting. This strongly suggests sub-optimal SMT-scheduling in the
> non-saturated HT case, i.e. a scheduler balancing bug.
>
> As long as loads are clearly below the physical cores count (which they
> are in the early phases of yo
On Wed, Apr 24, 2019 at 08:43:36PM + Vineeth Remanan Pillai wrote:
> > A minor nitpick. I find keeping the vruntime base readjustment in
> > core_prio_less probably is more straight forward rather than pass a
> > core_cmp bool around.
>
> The reason I moved the vruntime base adjustment to __p
On Tue, Apr 23, 2019 at 04:18:17PM + Vineeth Remanan Pillai wrote:
> From: Peter Zijlstra (Intel)
>
> Marks all tasks in a cgroup as matching for core-scheduling.
>
> Signed-off-by: Peter Zijlstra (Intel)
> ---
> kernel/sched/core.c | 62
> ker
Hi,
On Tue, Apr 23, 2019 at 04:18:05PM + Vineeth Remanan Pillai wrote:
> Second iteration of the core-scheduling feature.
Thanks for spinning V2 of this.
>
> This version fixes apparent bugs and performance issues in v1. This
> doesn't fully address the issue of core sharing between process
Hi Sasha,
On Tue, Apr 16, 2019 at 08:32:09AM -0700 tip-bot for Phil Auld wrote:
> Commit-ID: 2e8e19226398db8265a8e675fcc0118b9e80c9e8
> Gitweb: https://git.kernel.org/tip/2e8e19226398db8265a8e675fcc0118b9e80c9e8
> Author: Phil Auld
> AuthorDate: Tue, 19 Mar 2019
On Tue, Apr 09, 2019 at 03:05:27PM +0200 Peter Zijlstra wrote:
> On Tue, Apr 09, 2019 at 08:48:16AM -0400, Phil Auld wrote:
> > Hi Ingo, Peter,
> >
> > On Wed, Apr 03, 2019 at 01:38:39AM -0700 tip-bot for Phil Auld wrote:
> > > Commit-ID: 06ec5d30e8d57b820d44df6
Commit-ID: 2e8e19226398db8265a8e675fcc0118b9e80c9e8
Gitweb: https://git.kernel.org/tip/2e8e19226398db8265a8e675fcc0118b9e80c9e8
Author: Phil Auld
AuthorDate: Tue, 19 Mar 2019 09:00:05 -0400
Committer: Ingo Molnar
CommitDate: Tue, 16 Apr 2019 16:50:05 +0200
sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup
sed if
> + * _every_ other avenue has been traveled.
> + **/
> +
> void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
> {
> rcu_read_lock();
> - do_set_cpus_allowed(tsk, task_cs(tsk)->effective_cpus);
> + do_set_cpus_allowed(tsk, is_in_v2_mode() ?
> + task_cs(tsk)->cpus_allowed : cpu_possible_mask);
> rcu_read_unlock();
>
> /*
> --
> 2.18.1
>
Fwiw,
Acked-by: Phil Auld
--