On 3/19/19 7:29 PM, Subhra Mazumdar wrote:
> 
> On 3/18/19 8:41 AM, Julien Desfossez wrote:
>> The case where we try to acquire the locks on 2 runqueues belonging to 2
>> different cores requires the rq_lockp wrapper as well; otherwise we
>> frequently deadlock in there.
>>
>> This fixes the crash reported in
>> 1552577311-8218-1-git-send-email-jdesfos...@digitalocean.com
>>
>> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
>> index 76fee56..71bb71f 100644
>> --- a/kernel/sched/sched.h
>> +++ b/kernel/sched/sched.h
>> @@ -2078,7 +2078,7 @@ static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
>>           raw_spin_lock(rq_lockp(rq1));
>>           __acquire(rq2->lock);    /* Fake it out ;) */
>>       } else {
>> -        if (rq1 < rq2) {
>> +        if (rq_lockp(rq1) < rq_lockp(rq2)) {
>>               raw_spin_lock(rq_lockp(rq1));
>>               raw_spin_lock_nested(rq_lockp(rq2), SINGLE_DEPTH_NESTING);
>>           } else {
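
For reference, a minimal userspace sketch (pthreads, hypothetical names, not
the kernel's actual code) of why the ordering has to compare the lock
addresses: once a wrapper like rq_lockp() can map distinct runqueues onto
shared locks, ordering by the rq pointers no longer yields a single global
lock order, so two CPUs can end up taking the same pair of locks in opposite
order (ABBA deadlock). Ordering by the addresses of the locks themselves
restores one consistent order:

#include <pthread.h>

struct rq_demo {
        pthread_mutex_t *lockp;         /* what rq_lockp(rq) would return */
};

static void double_lock_demo(struct rq_demo *rq1, struct rq_demo *rq2)
{
        if (rq1->lockp == rq2->lockp) {
                pthread_mutex_lock(rq1->lockp); /* both map to one shared lock */
        } else if (rq1->lockp < rq2->lockp) {   /* order by lock, not by rq */
                pthread_mutex_lock(rq1->lockp);
                pthread_mutex_lock(rq2->lockp);
        } else {
                pthread_mutex_lock(rq2->lockp);
                pthread_mutex_lock(rq1->lockp);
        }
}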


Pawan was seeing occasional crashes and lockups that are avoided by the
following change. We're doing some more tracing to dig into why
pick_next_entity is returning NULL.

Tim

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5349ebedc645..4c7f353b8900 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7031,6 +7031,8 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
                }
 
                se = pick_next_entity(cfs_rq, curr);
+               if (!se)
+                       return NULL;
                cfs_rq = group_cfs_rq(se);
        } while (cfs_rq);
 
@@ -7070,6 +7072,8 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
 
        do {
                se = pick_next_entity(cfs_rq, NULL);
+               if (!se)
+                       return NULL;
                set_next_entity(cfs_rq, se);
                cfs_rq = group_cfs_rq(se);
        } while (cfs_rq);
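
For what it's worth, here is a minimal self-contained sketch (hypothetical
names and types, not the kernel's actual call chain) of the defensive
pattern these two hunks add: once the inner pick is allowed to come back
empty, the descent loop bails out on NULL instead of handing the result to
group_cfs_rq()/set_next_entity(), and the caller treats the NULL return as
"nothing to pick here":

#include <stddef.h>

struct ent {
        struct ent *group_q;    /* stand-in for group_cfs_rq(se); NULL at a leaf */
        int runnable;           /* whether this queue has anything to pick */
};

/* Stand-in for pick_next_entity(): may legitimately return NULL. */
static struct ent *pick_entity(struct ent *q)
{
        return (q && q->runnable) ? q : NULL;
}

static struct ent *pick_task(struct ent *top)
{
        struct ent *q = top;
        struct ent *se;

        do {
                se = pick_entity(q);
                if (!se)
                        return NULL;    /* bail out instead of crashing below */
                q = se->group_q;        /* descend one level of the hierarchy */
        } while (q);

        return se;
}

The check turns the reported crash into a clean "no task" fallback while the
underlying reason for the empty pick is still being traced.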
