Hello,
On 23/10/2020 11:29, David Marchand wrote:
On Thu, Oct 22, 2020 at 5:12 PM Medvedkin, Vladimir
<vladimir.medved...@intel.com> wrote:
Hi David,
On 22/10/2020 12:52, David Marchand wrote:
On Mon, Oct 19, 2020 at 5:05 PM Vladimir Medvedkin
<vladimir.medved...@intel.com> wrote:
Add a type argument to dir24_8_get_lookup_fn().
It now supports 3 different lookup implementations:
RTE_FIB_DIR24_8_SCALAR_MACRO
RTE_FIB_DIR24_8_SCALAR_INLINE
RTE_FIB_DIR24_8_SCALAR_UNI
Add a new rte_fib_set_lookup_fn() - the user can change the lookup
function type at runtime.
Signed-off-by: Vladimir Medvedkin <vladimir.medved...@intel.com>
Acked-by: Konstantin Ananyev <konstantin.anan...@intel.com>
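For illustration, a minimal usage sketch based on the names in this
patch; the exact signature and return-value semantics of
rte_fib_set_lookup_fn() are assumptions, not verbatim from the patch:

#include <rte_fib.h>

/* Minimal usage sketch. The patch names the function and the lookup
 * types; the int return and its error semantics are assumed here. */
static int
use_scalar_macro(struct rte_fib *fib)
{
	int ret = rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);

	if (ret < 0) /* e.g. the fib was not created as RTE_FIB_DIR24_8 */
		return ret;
	return 0;
}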
We create a fib object with a type: either RTE_FIB_DUMMY or
RTE_FIB_DIR24_8 (separate topic, we probably do not need
RTE_FIB_TYPE_MAX).
RTE_FIB_TYPE_MAX is used for an early sanity check. I can remove it
(relying on init_dataplane() returning an error for an improper type)
if you think it is better to get rid of it.
Applications could start using it.
If you don't need it, don't expose it.
A validation on type <= RTE_FIB_DIR24_8 should be enough.
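A minimal sketch of that check, assuming the public enum rte_fib_type
keeps RTE_FIB_DUMMY and RTE_FIB_DIR24_8 as its only values once the
sentinel is dropped:

#include <stdbool.h>
#include <rte_fib.h>

/* Sketch of the suggested create-time sanity check: with the
 * RTE_FIB_TYPE_MAX sentinel removed, rejecting anything above the
 * last real algorithm is enough. */
static bool
fib_type_is_valid(enum rte_fib_type type)
{
	return type <= RTE_FIB_DIR24_8;
}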
Will remove it in v14, thanks!
This lookup API is dir24_8 specific.
If we are not going to abstract the lookup selection type, why not
make this API dir24_8 specific?
I.e. s/rte_fib_set_lookup_fn/rte_fib_dir24_8_set_lookup_fn/g
The same would apply to the FIB6 trie implementation.
Good point. In the future I want to add more data plane algorithms,
such as DXR or Poptrie. In that case I don't really want to have a
separate function for every supported algorithm, i.e. I think it is
better to have a single rte_fib_set_lookup_fn(). But on the other hand
it then needs to be generic. In future releases I want to get rid of
the different dir24_8 scalar implementations (MACRO/INLINE/UNI). After
that we can change the types to algorithm-agnostic names (sketched
below):
RTE_FIB_SCALAR,
RTE_FIB_VECTOR_AVX512
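As a sketch, the future public enum could then look like this (the
enum name is hypothetical; only the two value names come from this
mail):

/* Hypothetical algorithm-agnostic lookup types; the enum name is
 * invented for illustration, the value names are the ones proposed
 * above. */
enum rte_fib_lookup_type {
	RTE_FIB_SCALAR,
	RTE_FIB_VECTOR_AVX512,
};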
Is there a real benefit from those 3 scalar lookup implementations for dir24_8?
Initially I sent 3 different implementations to get responses from the
community on which implementation I should keep. Test results on
different IA CPUs show that the MACRO-based implementation performs
slightly faster. So I think there is no benefit from keeping the other
implementations.
--
Regards,
Vladimir