On Fri, Sep 27, 2019 at 7:43 AM Gavin Hu <gavin...@arm.com> wrote:
>
> When acquiring a spinlock, cores repeatedly poll the lock variable.
> This polling is replaced by the rte_wait_until_equal API.
>
> The micro benchmarks and the testpmd and l3fwd traffic tests were run
> on ThunderX2, Ampere eMAG80 and Arm N1SDP; everything went well and no
> notable performance gain or degradation was measured.
>
> Signed-off-by: Gavin Hu <gavin...@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.w...@arm.com>
> Reviewed-by: Phil Yang <phil.y...@arm.com>
> Reviewed-by: Steve Capper <steve.cap...@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljed...@arm.com>
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> Tested-by: Pavan Nikhilesh <pbhagavat...@marvell.com>
> ---
>  .../common/include/arch/arm/rte_spinlock.h         | 26 ++++++++++++++++++++++
>  1 file changed, 26 insertions(+)
>
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_spinlock.h b/lib/librte_eal/common/include/arch/arm/rte_spinlock.h
> index 1a6916b..b61c055 100644
> --- a/lib/librte_eal/common/include/arch/arm/rte_spinlock.h
> +++ b/lib/librte_eal/common/include/arch/arm/rte_spinlock.h
> @@ -16,6 +16,32 @@ extern "C" {
>  #include <rte_common.h>
>  #include "generic/rte_spinlock.h"
>
> +/* armv7a does support WFE, but an explicit wake-up signal using SEV is
> + * required (and it must be preceded by a DSB to drain the store buffer),
> + * which is less performant, so the armv7a implementation is kept unchanged.
> + */
> +#ifndef RTE_FORCE_INTRINSICS

Earlier, in the same file, I can see:
https://git.dpdk.org/dpdk/tree/lib/librte_eal/common/include/arch/arm/rte_spinlock.h?h=v19.08#n8

#ifndef RTE_FORCE_INTRINSICS
#  error Platform must be built with CONFIG_RTE_FORCE_INTRINSICS
#endif

IIUC, this is dead code.
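
If the goal is to special-case aarch64 rather than rely on the
intrinsics-based generic code, an untested sketch of what I would have
expected instead, built on the rte_wait_until_equal_32() helper introduced
earlier in this series (assuming its final form takes the address, the
expected value and a memory ordering, and lives in rte_pause.h), is
something like:

/* Untested sketch: assumes rte_wait_until_equal_32() from this series,
 * i.e. rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
 * int memorder), declared in rte_pause.h.
 */
static inline void
rte_spinlock_lock(rte_spinlock_t *sl)
{
	int exp = 0;

	while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
		/* Block (with wfe on aarch64) until the lock looks free,
		 * then try to take it again.
		 */
		rte_wait_until_equal_32((volatile uint32_t *)&sl->locked,
				0, __ATOMIC_RELAXED);
		exp = 0;
	}
}

That would keep the wfe benefit without duplicating assembly in this header.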

> +static inline void
> +rte_spinlock_lock(rte_spinlock_t *sl)
> +{
> +       unsigned int tmp;
> +       /* http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.
> +        * faqs/ka16809.html
> +        */
> +       asm volatile(
> +               "sevl\n"  /* prime the event register so the first wfe does not stall */
> +               "1:     wfe\n"
> +               "2:     ldaxr   %w[tmp], %[locked]\n"
> +               "cbnz   %w[tmp], 1b\n"  /* lock held: wfe until the owner's store clears the monitor */
> +               "stxr   %w[tmp], %w[one], %[locked]\n"
> +               "cbnz   %w[tmp], 2b\n"  /* lost the race to another core: retry */
> +               : [tmp] "=&r" (tmp), [locked] "+Q"(sl->locked)
> +               : [one] "r" (1)
> +               : "memory");
> +}
> +#endif
> +
>  static inline int rte_tm_supported(void)
>  {
>         return 0;
> --
> 2.7.4
>


-- 
David Marchand
