On Sat, Oct 24, 2020 at 08:27:16AM -0400, Vineeth Pillai wrote:
> 
> 
> On 10/24/20 7:10 AM, Vineeth Pillai wrote:
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 93a3b874077d..4cae5ac48b60 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -4428,12 +4428,14 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> >                         se = second;
> >         }
> > 
> > -       if (cfs_rq->next && wakeup_preempt_entity(cfs_rq->next, left) < 1) {
> > +       if (left && cfs_rq->next &&
> > +                       wakeup_preempt_entity(cfs_rq->next, left) < 1) {
> >                 /*
> >                  * Someone really wants this to run. If it's not unfair, run it.
> >                  */
> >                 se = cfs_rq->next;
> > -       } else if (cfs_rq->last && wakeup_preempt_entity(cfs_rq->last, left) < 1) {
> > +       } else if (left && cfs_rq->last &&
> > +                       wakeup_preempt_entity(cfs_rq->last, left) < 1) {
> >                 /*
> >                  * Prefer last buddy, try to return the CPU to a preempted task.
> > 
> > 
> > The reason for left being NULL needs to be investigated. It has been
> > there since v1 and we have not yet gotten to it. I shall try to debug it
> > later this week.
> 
> Thinking more about it and looking at the crash, I think that
> 'left == NULL' can happen in pick_next_entity() with core scheduling.
> If a cfs_rq has only one task and that task is running, it has been
> dequeued from the rbtree, so 'left = __pick_first_entity()' will be
> NULL because the cfs_rq is empty. This would not happen outside of
> coresched because we never call pick_task() before put_prev_task(),
> which enqueues the task back.
> 
> With core scheduling, a CPU can call pick_task() for its sibling while
> the sibling is still running the active task and put_prev_task() has
> not yet been called. This can result in 'left == NULL'.

Quite correct. Hurmph.. the reason we do this is because... we do the
update_curr() the wrong way around. And I can't seem to remember why we
do that (it was in my original patches).
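
To make the ordering concrete, here is a tiny userspace sketch (my own
simplification, not the real cfs_rq/rbtree code; all names are stand-ins)
of why the sibling's pick can see an empty tree:

/* Userspace sketch only: simplified stand-ins, not kernel structures. */
#include <stdio.h>

struct entity { const char *name; struct entity *next; };

struct rq_sim {
	struct entity *first;	/* leftmost queued entity; NULL when the tree is empty */
	struct entity *curr;	/* running entity, kept off the tree like CFS does */
};

/* __pick_first_entity() analogue: NULL on an empty tree */
static struct entity *pick_first(struct rq_sim *rq)
{
	return rq->first;
}

/* put_prev_task() analogue: put the running entity back on the tree */
static void put_prev(struct rq_sim *rq)
{
	if (rq->curr) {
		rq->curr->next = rq->first;
		rq->first = rq->curr;
		rq->curr = NULL;
	}
}

int main(void)
{
	struct entity only = { "task-A", NULL };
	struct rq_sim rq = { NULL, &only };	/* one task, currently running */

	/* coresched-like order: sibling picks before put_prev_task() */
	struct entity *left = pick_first(&rq);
	printf("pick before put_prev: %s\n", left ? left->name : "(NULL)");

	/* regular order: put_prev_task() runs first, so the pick sees curr */
	put_prev(&rq);
	left = pick_first(&rq);
	printf("pick after put_prev:  %s\n", left ? left->name : "(NULL)");

	return 0;
}

It prints (NULL) for the first pick and task-A for the second, which is
exactly the difference between the sibling pick and the regular
pick_next_task() path.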

Something like so seems the obvious thing to do, but I can't seem to
remember why we're not doing it :-(

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6950,15 +6950,10 @@ static struct task_struct *pick_task_fair(struct rq *rq)
        do {
                struct sched_entity *curr = cfs_rq->curr;
 
-               se = pick_next_entity(cfs_rq, NULL);
+               if (curr && curr->on_rq)
+                       update_curr(cfs_rq);
 
-               if (curr) {
-                       if (se && curr->on_rq)
-                               update_curr(cfs_rq);
-
-                       if (!se || entity_before(curr, se))
-                               se = curr;
-               }
+               se = pick_next_entity(cfs_rq, curr);
 
                cfs_rq = group_cfs_rq(se);
        } while (cfs_rq);
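
If I read pick_next_entity() right, passing curr down also covers the
empty-tree case, because its curr handling (paraphrased from fair.c, not
an exact copy) falls back to curr when nothing is queued:

	struct sched_entity *left = __pick_first_entity(cfs_rq);

	/*
	 * If curr is set, compare it against the leftmost queued entity,
	 * and fall back to curr when the tree is empty.
	 */
	if (!left || (curr && entity_before(curr, left)))
		left = curr;

so with this ordering the 'left && ...' guard from the earlier hunk should
not be needed on the pick_task() path.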
