On Fri, Dec 01, 2017 at 11:12:51AM +0000, Konstantin Ananyev wrote:
> On x86 it is possible to use lock-prefixed instructions to get
> a similar effect as mfence.
> As pointed out by the Java folks, on most modern HW that gives
> better performance than using mfence:
> https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
> That patch adopts that technique for rte_smp_mb() implementation.
> On BDW 2.2 mb_autotest on single lcore reports 2X cycle reduction,
> i.e. from ~110 to ~55 cycles per operation.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.anan...@intel.com>
> ---
>  .../common/include/arch/x86/rte_atomic.h | 45 +++++++++++++++++++++-
>  1 file changed, 43 insertions(+), 2 deletions(-)
> 
<snip>
> + * As pointed out by the Java folks, this makes it possible to use
> + * lock-prefixed instructions to get the same effect as mfence, and on
> + * most modern HW that gives better performance than using mfence:
> + * https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
> + * So below we use that technique for rte_smp_mb() implementation.
> + */
> +
> +#ifdef RTE_ARCH_I686
> +#define      RTE_SP  RTE_STR(esp)
> +#else
> +#define      RTE_SP  RTE_STR(rsp)
> +#endif
> +
> +#define RTE_MB_DUMMY_MEMP    "-128(%%" RTE_SP ")"
> +
> +static __rte_always_inline void
> +rte_smp_mb(void)
> +{
> +     asm volatile("lock addl $0," RTE_MB_DUMMY_MEMP "; " ::: "memory");
> +}

Rather than #defining RTE_SP and RTE_MB_DUMMY_MEMP, why not just put the
#ifdef into the rte_smp_mb itself and have two asm volatile lines with
hard-coded register names in them? It would be shorter and I think a lot
easier to read.

/Bruce
