On 07/05, Johannes Berg wrote:
>
> @@ -257,7 +261,9 @@ static void run_workqueue(struct cpu_wor
>  
>               BUG_ON(get_wq_data(work) != cwq);
>               work_clear_pending(work);
> +             lock_acquire(&cwq->wq->lockdep_map, 0, 0, 0, 2, _THIS_IP_);
>               f(work);
> +             lock_release(&cwq->wq->lockdep_map, 1, _THIS_IP_);

Johannes, my apologies. You were worried about recursion, and you were right,
sorry!

Currently it is legal for work->func() to call flush_workqueue() on its own
workqueue. So we have

        run_workqueue()
                work->func()
                        flush_workqueue()
                                run_workqueue()

All of these except work->func() take wq->lockdep_map, so I guess
check_deadlock() won't be happy.

In your initial patch, wq->lockdep_map was taken in flush_cpu_workqueue() only
when cwq->thread != current, but even that is not enough, because we take the
same lock when flush_workqueue() does flush_cpu_workqueue() on another CPU.

run_workqueue() is easy: it can check cwq->run_depth == 1 before the
lock/unlock.

Does anybody see a simple solution? Perhaps some clever trick with lockdep?


OTOH, perhaps we could forbid such behaviour? Andrew, do you know of any
good example of "keventd trying to flush its own queue"?

In any case, I think both patches are great, thanks for doing this!

Oleg.
