On Fri, Jun 15, 2018 at 02:08:27PM +0200, Thomas Hellstrom wrote:

> @@ -772,6 +856,25 @@ __ww_mutex_add_waiter(struct mutex_waiter *waiter,
>       }
>  
>       list_add_tail(&waiter->list, pos);
> +     if (__mutex_waiter_is_first(lock, waiter))
> +             __mutex_set_flag(lock, MUTEX_FLAG_WAITERS);
> +
> +     /*
> +      * Wound-Wait: if we're blocking on a mutex owned by a younger context,
> +      * wound that such that we might proceed.
> +      */
> +     if (!is_wait_die) {
> +             struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
> +
> +             /*
> +              * See ww_mutex_set_context_fastpath(). Orders setting
> +              * MUTEX_FLAG_WAITERS (atomic operation) vs the ww->ctx load,
> +              * such that either we or the fastpath will wound @ww->ctx.
> +              */
> +             smp_mb__after_atomic();
> +
> +             __ww_mutex_wound(lock, ww_ctx, ww->ctx);
> +     }

I think we want the smp_mb__after_atomic() in the same branch as
__mutex_set_flag(). So something like:

        if (__mutex_waiter_is_first(lock, waiter)) {
                __mutex_set_flag(lock, MUTEX_FLAG_WAITERS);
                if (!is_wait_die)
                        smp_mb__after_atomic();
        }
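
Applied to the hunk above, the tail of __ww_mutex_add_waiter() would
then read roughly like this (untested sketch, using the same helpers
as your patch):

        list_add_tail(&waiter->list, pos);
        if (__mutex_waiter_is_first(lock, waiter)) {
                __mutex_set_flag(lock, MUTEX_FLAG_WAITERS);
                /*
                 * Orders the MUTEX_FLAG_WAITERS store (atomic RMW) vs
                 * the ww->ctx load below; see
                 * ww_mutex_set_context_fastpath().
                 */
                if (!is_wait_die)
                        smp_mb__after_atomic();
        }

        /*
         * Wound-Wait: if we're blocking on a mutex owned by a younger
         * context, wound it such that we might proceed.
         */
        if (!is_wait_die) {
                struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);

                __ww_mutex_wound(lock, ww_ctx, ww->ctx);
        }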

Or possibly even without the !is_wait_die check; see the sketch below.
The rule for the smp_mb__*_atomic() barriers is that they should sit
unconditionally next to the atomic operation they pair with; making
the barrier conditional gets the semantics too fuzzy to reason about.
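
That is, keep the barrier unconditionally tied to the atomic RMW
(again just a sketch):

        if (__mutex_waiter_is_first(lock, waiter)) {
                __mutex_set_flag(lock, MUTEX_FLAG_WAITERS);
                smp_mb__after_atomic(); /* order flag store vs ww->ctx load */
        }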

Alan (rightfully) complained about exactly that a while ago when he
was auditing the smp_mb__*_atomic() users.

