On Tue, 11 Jul 2017 22:39:24 +0800
Alex Shi <alex....@linaro.org> wrote:

> Any comments for this little change? It's passed on 0day testing.

I think the problem was that this was the third patch after two
documentation patches, and people tend to put documentation review at
the bottom of their priority list.

This should have been sent as a separate patch on its own.

> 
> Thanks
> Alex
> 
> On 07/07/2017 10:52 AM, Alex Shi wrote:
> > We don't need to adjust prio before adding a new pi_waiter. The prio
> > only needs updating after a pi_waiter change or a task priority change.
> > 
> > Signed-off-by: Alex Shi <alex....@linaro.org>
> > Cc: Steven Rostedt <rost...@goodmis.org>
> > Cc: Sebastian Siewior <bige...@linutronix.de>
> > Cc: Mathieu Poirier <mathieu.poir...@linaro.org>
> > Cc: Juri Lelli <juri.le...@arm.com>
> > Cc: Thomas Gleixner <t...@linutronix.de>
> > To: linux-ker...@vger.kernel.org
> > To: Ingo Molnar <mi...@redhat.com>
> > To: Peter Zijlstra <pet...@infradead.org>
> > ---
> >  kernel/locking/rtmutex.c | 1 -
> >  1 file changed, 1 deletion(-)
> > 
> > diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
> > index 28cd09e..d1fe41f 100644
> > --- a/kernel/locking/rtmutex.c
> > +++ b/kernel/locking/rtmutex.c
> > @@ -963,7 +963,6 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
> >             return -EDEADLK;
> >  
> >     raw_spin_lock(&task->pi_lock);
> > -   rt_mutex_adjust_prio(task);

Interesting, I did some git mining and this call was added with the
original introduction of rtmutex.c (23f78d4a0). Even in that version, I
don't see the purpose of adjusting the task prio here: it is done
before anything about the task changes.

Reviewed-by: Steven Rostedt (VMware) <rost...@goodmis.org>

-- Steve


> >     waiter->task = task;
> >     waiter->lock = lock;
> >     waiter->prio = task->prio;
> >   
