Enable both the C11 atomic and non-C11 atomic lock-free stack implementations for aarch64.

Signed-off-by: Phil Yang <phil.y...@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
Tested-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
---
 doc/guides/prog_guide/env_abstraction_layer.rst | 4 ++--
 doc/guides/rel_notes/release_19_08.rst          | 3 +++
 lib/librte_stack/rte_stack_lf_c11.h             | 4 ++--
 lib/librte_stack/rte_stack_lf_generic.h         | 4 ++--
 4 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index f15bcd9..d569f95 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -592,8 +592,8 @@ Known Issues
   Alternatively, applications can use the lock-free stack mempool handler. When
   considering this handler, note that:
 
-  - It is currently limited to the x86_64 platform, because it uses an
-    instruction (16-byte compare-and-swap) that is not yet available on other
+  - It is currently limited to the aarch64 and x86_64 platforms, because it uses
+    an instruction (16-byte compare-and-swap) that is not yet available on other
     platforms.
   - It has worse average-case performance than the non-preemptive rte_ring, but
     software caching (e.g. the mempool cache) can mitigate this by reducing the
diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst
index 3da2667..e2e00b9 100644
--- a/doc/guides/rel_notes/release_19_08.rst
+++ b/doc/guides/rel_notes/release_19_08.rst
@@ -99,6 +99,9 @@ New Features
   Updated ``librte_telemetry`` to fetch the global metrics from the
   ``librte_metrics`` library.
 
+* **Added Lock-free Stack for aarch64.**
+
+  The lock-free stack implementation is enabled for aarch64 platforms.
 
 Removed Items
 -------------
diff --git a/lib/librte_stack/rte_stack_lf_c11.h b/lib/librte_stack/rte_stack_lf_c11.h
index 3d677ae..67c21fd 100644
--- a/lib/librte_stack/rte_stack_lf_c11.h
+++ b/lib/librte_stack/rte_stack_lf_c11.h
@@ -36,7 +36,7 @@ __rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
 			  struct rte_stack_lf_elem *last,
 			  unsigned int num)
 {
-#ifndef RTE_ARCH_X86_64
+#if !defined(RTE_ARCH_X86_64) && !defined(RTE_ARCH_ARM64)
 	RTE_SET_USED(first);
 	RTE_SET_USED(last);
 	RTE_SET_USED(list);
@@ -88,7 +88,7 @@ __rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
 			 void **obj_table,
 			 struct rte_stack_lf_elem **last)
 {
-#ifndef RTE_ARCH_X86_64
+#if !defined(RTE_ARCH_X86_64) && !defined(RTE_ARCH_ARM64)
 	RTE_SET_USED(obj_table);
 	RTE_SET_USED(last);
 	RTE_SET_USED(list);
diff --git a/lib/librte_stack/rte_stack_lf_generic.h b/lib/librte_stack/rte_stack_lf_generic.h
index 3182151..488fd9f 100644
--- a/lib/librte_stack/rte_stack_lf_generic.h
+++ b/lib/librte_stack/rte_stack_lf_generic.h
@@ -36,7 +36,7 @@ __rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
 			  struct rte_stack_lf_elem *last,
 			  unsigned int num)
 {
-#ifndef RTE_ARCH_X86_64
+#if !defined(RTE_ARCH_X86_64) && !defined(RTE_ARCH_ARM64)
 	RTE_SET_USED(first);
 	RTE_SET_USED(last);
 	RTE_SET_USED(list);
@@ -84,7 +84,7 @@ __rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
 			 void **obj_table,
 			 struct rte_stack_lf_elem **last)
 {
-#ifndef RTE_ARCH_X86_64
+#if !defined(RTE_ARCH_X86_64) && !defined(RTE_ARCH_ARM64)
 	RTE_SET_USED(obj_table);
 	RTE_SET_USED(last);
 	RTE_SET_USED(list);
-- 
2.7.4
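For context beyond the patch itself: below is a minimal sketch of how an application
could exercise the lock-free stack that this change enables on aarch64, assuming the
public librte_stack API (rte_stack_create with the RTE_STACK_F_LF flag, rte_stack_push,
rte_stack_pop, rte_stack_free). It is an illustration only, not part of this series,
and error handling is abbreviated.

/* Create a lock-free stack and push/pop a few object pointers. */
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_stack.h>

int
main(int argc, char **argv)
{
	void *objs[4] = { (void *)1, (void *)2, (void *)3, (void *)4 };
	void *popped[4];
	struct rte_stack *s;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* RTE_STACK_F_LF selects the lock-free implementation, which relies on a
	 * 16-byte compare-and-swap and is available on x86_64 and, with this
	 * patch, aarch64.
	 */
	s = rte_stack_create("lf_example", 64, rte_socket_id(), RTE_STACK_F_LF);
	if (s == NULL)
		return -1;

	rte_stack_push(s, objs, 4);
	rte_stack_pop(s, popped, 4);

	rte_stack_free(s);
	return 0;
}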