Steven Rostedt <[EMAIL PROTECTED]> wrote:
>
> /*
>  * Try to acquire jbd_lock_bh_state() against the buffer, when j_list_lock is
>  * held.  For ranking reasons we must trylock.  If we lose, schedule away and
>  * return 0.  j_list_lock is dropped in this case.
>  */
>  static int inverted_lock(journal_t *journal, struct buffer_head *bh)
>  {
>       if (!jbd_trylock_bh_state(bh)) {
>               spin_unlock(&journal->j_list_lock);
>               schedule();
>               return 0;
>       }
>       return 1;
>  }
> 

That's very lame code, that.  The old "I don't know what the heck to do now
so I'll schedule" trick.  The task is still runnable, so under contention it
can be rescheduled straight away and just spin.  Sorry.

>  I guess one way to solve this is to add a wait queue here (before the
>  schedule()), and have the lock holder wake up everyone on the waitqueue
>  when it releases the lock.

yup.  A patch against mainline would be appropriate, please.
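
Something like the below, perhaps.  A rough sketch only, not even compiled:
the j_state_wait waitqueue head does not exist in journal_t and would need
adding (a wait_queue_head_t, set up with init_waitqueue_head()), the unlock
side needs a matching wake_up() (a hypothetical wrapper is shown), and it
needs <linux/wait.h>:

static int inverted_lock(journal_t *journal, struct buffer_head *bh)
{
	if (!jbd_trylock_bh_state(bh)) {
		DEFINE_WAIT(wait);

		/*
		 * Queue ourselves before retrying, so a wake_up() from a
		 * concurrent unlock cannot be missed.
		 */
		prepare_to_wait(&journal->j_state_wait, &wait,
				TASK_UNINTERRUPTIBLE);
		if (jbd_trylock_bh_state(bh)) {
			/* Won it on the retry after all */
			finish_wait(&journal->j_state_wait, &wait);
			return 1;
		}
		spin_unlock(&journal->j_list_lock);
		schedule();
		finish_wait(&journal->j_state_wait, &wait);
		return 0;
	}
	return 1;
}

/* Hypothetical unlock wrapper: the waker side of the handshake */
static inline void jbd_unlock_bh_state_wake(journal_t *journal,
					    struct buffer_head *bh)
{
	jbd_unlock_bh_state(bh);
	wake_up(&journal->j_state_wait);
}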