On Wed,  6 Sep 2023 10:20:13 -0700
Stephen Hemminger <step...@networkplumber.org> wrote:

>  static __rte_always_inline
>  struct rte_rand_state *__rte_rand_get_state(void)
>  {
> -     unsigned int idx;
> +     struct rte_rand_state *rand_state = &RTE_PER_LCORE(rte_rand_state);
> +     uint64_t seed;
>  
> -     idx = rte_lcore_id();
> +     seed = __atomic_load_n(&rte_rand_seed, __ATOMIC_RELAXED);
> +     if (unlikely(seed != rand_state->seed)) {
> +             rand_state->seed = seed;
>  
> -     /* last instance reserved for unregistered non-EAL threads */
> -     if (unlikely(idx == LCORE_ID_ANY))
> -             idx = RTE_MAX_LCORE;
> +             seed += rte_thread_self().opaque_id;
> +             __rte_srand_lfsr258(seed, rand_state);
> +     }

Not sure about this.
It would change the semantics of rte_srand() so that, if passed the same
value across multiple runs, it would still generate different sequences,
because the thread id is not the same from run to run. Using rte_lcore_id()
instead would give repeatability, but there would still be a bug if two
unregistered non-EAL threads used rte_rand(): both threads would get the
same sequence of numbers. That is already true of the current code, though.
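
For illustration, a minimal sketch of the rte_lcore_id() variant discussed
above, reusing the internal names from the quoted patch (rte_rand_seed,
struct rte_rand_state, __rte_srand_lfsr258()); this is only a sketch of the
trade-off, not a proposed implementation:

	static __rte_always_inline struct rte_rand_state *
	__rte_rand_get_state(void)
	{
		struct rte_rand_state *rand_state = &RTE_PER_LCORE(rte_rand_state);
		uint64_t seed;

		seed = __atomic_load_n(&rte_rand_seed, __ATOMIC_RELAXED);
		if (unlikely(seed != rand_state->seed)) {
			rand_state->seed = seed;

			/*
			 * Mixing in the lcore id keeps rte_srand() repeatable
			 * across runs for EAL threads (the lcore id is stable
			 * for a given core mask). All unregistered non-EAL
			 * threads map to LCORE_ID_ANY, so they would share one
			 * sequence, which is the limitation noted above and in
			 * the current code.
			 */
			seed += rte_lcore_id();
			__rte_srand_lfsr258(seed, rand_state);
		}

		return rand_state;
	}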
