On Wed, May 24, 2017 at 09:38:28AM -0400, Waiman Long wrote:
> 
>   # of threads    w/o patch    with patch      % Change
>   ------------    ---------    ----------      --------
>        4         4053.3 Mop/s  4223.7 Mop/s     +4.2%
>        8         3310.4 Mop/s  3406.0 Mop/s     +2.9%
>       12         2576.4 Mop/s  2674.6 Mop/s     +3.8%

Waiman, could you run those numbers again but with the below 'fixed'?

> @@ -361,6 +361,13 @@ static void pv_kick_node(struct qspinlock *lock, struct mcs_spinlock *node)
>        * observe its next->locked value and advance itself.
>        *
>        * Matches with smp_store_mb() and cmpxchg() in pv_wait_node()
> +      *
> +      * The write to next->locked in arch_mcs_spin_unlock_contended()
> +      * must be ordered before the read of pn->state in the cmpxchg()
> +      * below for the code to work correctly. However, this is not
> +      * guaranteed on all architectures when the cmpxchg() call fails.
> +      * Both x86 and PPC can provide that guarantee, but other
> +      * architectures may not.
>        */

	smp_mb(); /* order the ->locked store above against the ->state load below */

>       if (cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
>               return;
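
To make the requirement concrete, here is a minimal sketch of the two
sides involved (simplified names; not the exact qspinlock code, which
lives in pv_kick_node() and pv_wait_node()):

	/* CPU0 - pv_kick_node(); 'node' is CPU1's MCS node */
	smp_store_release(&node->locked, 1);	/* arch_mcs_spin_unlock_contended() */
	smp_mb();	/* order the ->locked store before the ->state load */
	if (cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
		return;		/* a failed cmpxchg() need not imply a full barrier */

	/* CPU1 - pv_wait_node(), on its own MCS node */
	smp_store_mb(pn->state, vcpu_halted);	/* store + full barrier */
	if (READ_ONCE(node->locked))
		return;		/* lock was handed over; don't halt */
	pv_wait(&pn->state, vcpu_halted);

This is the classic store-buffering shape: the release orders earlier
accesses before the ->locked store, but nothing orders that store
before the later ->state load. Without the smp_mb(), CPU0 can read
the old ->state (the cmpxchg() fails and it bails out) while CPU1
reads the old ->locked and halts, leaving a halted vCPU that nobody
kicks.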

Ideally the Power CPU can optimize back-to-back SYNC instructions, but
who knows...
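
For reference, Power's cmpxchg() at the time looked roughly like the
sketch below (paraphrased from arch/powerpc/include/asm/cmpxchg.h;
the exact barriers depend on kernel version and config). The entry
SYNC is why a failed cmpxchg() is still ordered on PPC, and with the
explicit smp_mb() in front it is also where the back-to-back SYNCs
would come from:

	static inline unsigned long
	__cmpxchg_u32(volatile unsigned int *p, unsigned long old,
		      unsigned long new)
	{
		unsigned int prev;

		__asm__ __volatile__ (
	"	sync\n"			/* PPC_ATOMIC_ENTRY_BARRIER */
	"1:	lwarx	%0,0,%2\n"	/* load-reserve *p */
	"	cmpw	0,%0,%3\n"	/* compare against 'old' */
	"	bne-	2f\n"		/* failure branches PAST the exit barrier */
	"	stwcx.	%4,0,%2\n"	/* store-conditional 'new' */
	"	bne-	1b\n"		/* lost the reservation, retry */
	"	sync\n"			/* PPC_ATOMIC_EXIT_BARRIER, success path only */
	"2:"
		: "=&r" (prev), "+m" (*p)
		: "r" (p), "r" (old), "r" (new)
		: "cc", "memory");

		return prev;
	}

That is, the kicker would execute sync; sync; lwarx; ... back to back.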
