On Mon, 12 Oct 2020, Peter Zijlstra wrote:
> On Sat, Oct 10, 2020 at 06:14:23PM +0200, Julia Lawall wrote:
> > Prior to v5.8 on my machine this was a rare event, because there were not
> > many of these background processes. But in v5.8, the default governor for
> > Intel machines without the HWP feature was changed from intel_pstate to
> > intel_cpufreq [...]
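For reference, the driver and governor actually in effect are visible
through the standard cpufreq sysfs ABI. A minimal sketch (stock sysfs
paths, trivial error handling); on a non-HWP Intel machine this would
typically show intel_cpufreq + schedutil on v5.8 where earlier kernels
showed intel_pstate + powersave:

	/*
	 * Print the cpufreq scaling driver and governor for CPU0.
	 * Sketch only; paths are the standard cpufreq sysfs interface.
	 */
	#include <stdio.h>

	static void show(const char *path)
	{
		char buf[64];
		FILE *f = fopen(path, "r");

		if (f && fgets(buf, sizeof(buf), f))
			printf("%s: %s", path, buf);
		if (f)
			fclose(f);
	}

	int main(void)
	{
		show("/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver");
		show("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor");
		return 0;
	}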
On Mon, 12 Oct 2020, Vincent Guittot wrote:
> On Mon, 12 Oct 2020 at 12:34, Julia Lawall wrote:
> >
> > > > Would it be useful to always check whether prev is idle, perhaps in
> > > > wake_affine_idle or perhaps in select_idle_sibling?
> > >
> > > Yes, that would make sense to add a condition in wake_affine_idle to
> > > return prev if this cpu is not idle (or about to become idle)
> >
> > The case where this cpu is idle [...]
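A sketch of the condition being discussed, against the v5.8-era
wake_affine_idle() in kernel/sched/fair.c. The placement of the new
check and the exact test are illustrative, not a tested patch; in
particular, "about to become idle" is not modeled here:

	static int
	wake_affine_idle(int this_cpu, int prev_cpu, int sync)
	{
		/*
		 * Existing logic: an idle, cache-affine this_cpu wins,
		 * unless prev_cpu is also idle.
		 */
		if (available_idle_cpu(this_cpu) &&
		    cpus_share_cache(this_cpu, prev_cpu))
			return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;

		if (sync && cpu_rq(this_cpu)->nr_running == 1)
			return this_cpu;

		/*
		 * Condition discussed above (illustrative): this cpu is
		 * busy but prev is idle, so leave the task where its
		 * cache-hot data lives instead of migrating it.
		 */
		if (!available_idle_cpu(this_cpu) && available_idle_cpu(prev_cpu))
			return prev_cpu;

		return nr_cpumask_bits;
	}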
Hi Julia,

On Sat, 10 Oct 2020 at 18:14, Julia Lawall wrote:
>
> Hello,
>
> Previously, I was wondering why, starting in Linux v5.8, my unblocking
> threads were moving to different sockets more frequently than in previous
> releases. Now, I think that I have found the reason.
>
> The first issue is the change from runnable load average to load average
> in computing w[...]
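The snippet is cut off above; assuming it refers to the load comparison
on the wakeup path (e.g. wake_affine_weight()), the distinction between
the two PELT signals is roughly the following. This is a sketch only;
the field names follow the kernel's PELT fields as they existed before
the runnable load average was removed, and the helper is hypothetical:

	static unsigned long prev_cpu_weight(struct cfs_rq *cfs_rq,
					     bool use_load_avg)
	{
		/*
		 * runnable_load_avg: only tasks currently enqueued
		 * contribute, so it drops to ~0 the moment prev_cpu's
		 * tasks block. load_avg: also carries a slowly decaying
		 * contribution from blocked tasks, so a newly idle
		 * prev_cpu can still look busy, lose the comparison,
		 * and have its waking thread pulled elsewhere.
		 */
		return use_load_avg ? cfs_rq->avg.load_avg
				    : cfs_rq->avg.runnable_load_avg;
	}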
Hi Julia,

On 03/09/20 15:09, Julia Lawall wrote:
> Uses of SD_LOAD_BALANCE were removed in commit e669ac8ab952 (first
> released in v5.8), with the comment:
>
>     The SD_LOAD_BALANCE flag is set unconditionally for all domains in
>     sd_init().
>
> I have the impression that this was not quite true. The NUMA domain was
> not initialized with sd_init, and didn't [...]
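For context, the checks that e669ac8ab952 removed had this shape
(abridged from the pre-v5.8 kernel/sched/fair.c; illustrative, not the
complete list of call sites):

	/* rebalance_domains(), pre-v5.8 (abridged): */
	for_each_domain(cpu, sd) {
		if (!(sd->flags & SD_LOAD_BALANCE))
			continue;	/* skip domains excluded from balancing */
		/* ... balance this domain ... */
	}

	/* select_task_rq_fair(), pre-v5.8 (abridged): the domain walk
	 * stopped at the first level without the flag. */
	for_each_domain(cpu, tmp) {
		if (!(tmp->flags & SD_LOAD_BALANCE))
			break;
		/* ... */
	}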
Committer: Peter Zijlstra
CommitterDate: Thu, 30 Apr 2020 20:14:39 +02:00

sched: Remove checks against SD_LOAD_BALANCE

The SD_LOAD_BALANCE flag is set unconditionally for all domains in
sd_init(). By making the sched_domain->flags sysctl interface read-only, we
have removed the last piece of code that could clear that flag [...]
Committer: Peter Zijlstra
CommitterDate: Thu, 30 Apr 2020 20:14:39 +02:00

sched/topology: Kill SD_LOAD_BALANCE

That flag is set unconditionally in sd_init(), and no one checks for it
anymore. Remove it.

Signed-off-by: Valentin Schneider
Signed-off-by: Peter Zijlstra (Intel)
Link: https[...]
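The "set unconditionally" part refers to the initializer in sd_init()
in kernel/sched/topology.c, which before this commit read roughly as
follows (abridged; the full initializer sets many more flags and
fields):

	*sd = (struct sched_domain){
		/* ... */
		.flags = 1*SD_LOAD_BALANCE	/* set for every domain, hence dead */
		       | 1*SD_BALANCE_NEWIDLE
		       | 1*SD_BALANCE_EXEC
		       | 1*SD_BALANCE_FORK
		       | 1*SD_WAKE_AFFINE,
		/* ... */
	};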