<snip>

> > Subject: [PATCH v3 09/12] service: avoid race condition for MT unsafe
> > service
> >
> > From: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> >
> > It is possible that an MT unsafe service gets configured to run on
> > another core while the service is currently running. This might
> > result in the MT unsafe service running on multiple cores
> > simultaneously. Always use 'execute_lock' when the service is MT
> > unsafe.
> >
> > Fixes: e9139a32f6e8 ("service: add function to run on app lcore")
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> > Reviewed-by: Phil Yang <phil.y...@arm.com>
> > Reviewed-by: Gavin Hu <gavin...@arm.com>
> 
> We should put "fix" in the title, once converged on an implementation.
Ok, will replace 'avoid' with 'fix' (once we agree on the solution)

> 
> Regarding Fixes and stable backport, we should consider whether fixing this
> in stable with a performance degradation, fixing it with a more complex
> solution, or documenting it as a known issue is the better approach.
> 
> 
> This fix (always taking the atomic lock) will have a negative performance
> impact on existing code using services. We should investigate a way to fix it
> without causing datapath performance degradation.
Trying to gauge the impact on the existing applications...
The documentation does not explicitly disallow run-time mapping of cores to
services.
1) If the applications are mapping the cores to services at run time, they are
running with a bug. IMO, a bug fix resulting in a performance drop should be
acceptable.
2) If the service is configured to run on a single core (num_mapped_cores == 1),
but the service is set to MT unsafe - this will have a (possible) performance
impact.
        a) This can be solved by setting the service to MT safe, and this can
be documented (see the sketch after this list). This might be a reasonable
solution for applications which are compiled against future DPDK releases.
        b) We can also solve this using symbol versioning - the old version of
this function will use the old code, the new version of this function will use
the code in this patch. So, if the application is run against future DPDK
releases without recompiling, it will continue to use the old version. If the
application is compiled against future releases, it can use the solution in
2a. We should also consider whether this is an appropriate solution, as it
would force the applications in 1) to recompile to get the fix.
3) If the service is configured to run on multiple cores (num_mapped_cores >
1), then those applications are already taking the lock. These applications
might even see a small improvement, as this patch removes a few instructions.
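
To make 2a concrete, here is a minimal sketch of what an application's service
registration could look like (the service name and callback are hypothetical,
not from this patch; the registration API and capability flag are the existing
ones from rte_service_component.h):

#include <rte_service.h>
#include <rte_service_component.h>

static int32_t
my_service_cb(void *userdata)
{
        (void)userdata;
        /* The service body must itself tolerate running on multiple
         * cores once it is declared MT safe.
         */
        return 0;
}

static int
register_my_service(uint32_t *service_id)
{
        struct rte_service_spec spec = {
                .name = "my_service",   /* hypothetical name */
                .callback = my_service_cb,
                .callback_userdata = NULL,
                /* Declaring MT safe means service_run() will not take
                 * the execute_lock for this service, so the fix costs
                 * nothing on the datapath.
                 */
                .capabilities = RTE_SERVICE_CAP_MT_SAFE,
                .socket_id = 0,
        };
        return rte_service_component_register(&spec, service_id);
}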

> 
> I think there is a way to achieve this by moving more checks/time to the
> control path (lcore updating the map), and not forcing the datapath lcore to
> always take an atomic.
I think 2a above is the solution.

> 
> In this particular case, we have a counter for number of iterations that a
Which counter are you thinking about?
None of the counters I checked are updated with atomic operations currently.
If we are going to use counters, they have to be atomic, which means
additional cycles in the data path.

> service has done. If this increments we know that the lcore running the
> service has re-entered the critical section, so would see an updated "needs
> atomic" flag.
> 
> This approach may introduce a predictable branch on the datapath, however
> the cost of a predictable branch vs always taking an atomic differs by
> order(s?) of magnitude, so a branch is much preferred.
> 
> It must be possible to avoid the datapath overhead using a scheme like this.
> It will likely be more complex than your proposed change below, however if it
> avoids datapath performance drops I feel that a more complex solution is
> worth investigating at least.
I do not completely understand the approach you are proposing, maybe you can
elaborate more. But it seems to be based on a counter approach. Following is
my assessment of what happens if we use a counter. Let us say we keep track of
how many cores are currently running the service. We need an atomic counter
other than 'num_mapped_cores'; let us call that counter 'num_current_cores'.
The code to call the service would look like below.

1) rte_atomic32_inc(&num_current_cores); /* this results in a full memory
   barrier */
2) if (__atomic_load_n(&num_current_cores, __ATOMIC_ACQUIRE) == 1) { /*
   rte_atomic32_read is not enough here as it does not provide the required
   memory barrier on all architectures */
3)      run_service(); /* Call the service */
4) }
5) rte_atomic32_sub(&num_current_cores, 1); /* Calling rte_atomic32_clear is
   not enough as it is not an atomic operation and does not provide the
   required memory barrier */

But the above code has a race condition between lines 1 and 2. It is possible
that none of the cores ever gets to run the service, as they could all
increment the counter simultaneously. Hence lines 1 and 2 together need to be
atomic, which is nothing but a 'compare-exchange' operation.
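
For completeness, a sketch of lines 1 and 2 fused into one compare-exchange
(same hypothetical 'num_current_cores' as above, treated here as a plain
uint32_t):

uint32_t expected = 0;
/* Only the core that wins the compare-exchange runs the service; all
 * others skip it. This is effectively the same pattern as the
 * existing 'execute_lock'.
 */
if (__atomic_compare_exchange_n(&num_current_cores, &expected, 1,
                0 /* strong */, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
        run_service();
        /* Release ordering: the service's stores must complete before
         * other cores can observe the counter back at 0.
         */
        __atomic_store_n(&num_current_cores, 0, __ATOMIC_RELEASE);
}

i.e. once the increment and the check are made atomic, the counter degenerates
into a try-lock, so I do not see it buying anything over 'execute_lock'.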

BTW, the current code has a bug where it calls
'rte_atomic32_clear(&s->execute_lock)': it is missing memory barriers, which
can result in the execute_lock being cleared before the service has completed
running. I suggest changing 'execute_lock' to rte_spinlock_t and using the
rte_spinlock_trylock and rte_spinlock_unlock APIs, as sketched below.
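
A rough sketch of that suggestion (my reading of how service_run() would
change; the callback invocation mirrors the existing code):

/* In struct rte_service_spec_impl: */
rte_spinlock_t execute_lock;

/* In service_run(): */
if (service_mt_safe(s) == 0) {
        if (!rte_spinlock_trylock(&s->execute_lock))
                return -EBUSY;

        s->spec.callback(s->spec.callback_userdata);

        /* Unlike rte_atomic32_clear, rte_spinlock_unlock has release
         * semantics, so the lock cannot appear free before the
         * callback's stores have completed.
         */
        rte_spinlock_unlock(&s->execute_lock);
}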

> 
> A unit test is required to validate a fix like this - although perhaps found
> by inspection/review, a real-world test to validate would give confidence.
Agree, we need to have a test case; a rough sketch of one is below.
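
Something along these lines, perhaps - a sketch under my assumptions, using a
callback that detects concurrent entry (service registration and lcore mapping
boilerplate omitted):

#include <rte_cycles.h>

static uint32_t in_service;  /* cores currently inside the callback */
static uint32_t violations;  /* concurrent entries observed */

static int32_t
mt_unsafe_cb(void *userdata)
{
        (void)userdata;
        /* If the fix is correct, an MT unsafe service is never entered
         * by two cores at once, so this count never exceeds 1.
         */
        uint32_t v = __atomic_add_fetch(&in_service, 1, __ATOMIC_ACQUIRE);
        if (v > 1)
                __atomic_add_fetch(&violations, 1, __ATOMIC_RELAXED);
        rte_delay_us(10); /* widen the window for a race */
        __atomic_sub_fetch(&in_service, 1, __ATOMIC_RELEASE);
        return 0;
}

The test would register this as an MT unsafe service, start it on one lcore,
remap it to another lcore while it is running, and assert that 'violations'
stays 0.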

> 
> 
> Thoughts on such an approach?
> 
> 
> 
> > ---
> >  lib/librte_eal/common/rte_service.c | 11 +++++------
> >  1 file changed, 5 insertions(+), 6 deletions(-)
> >
> > diff --git a/lib/librte_eal/common/rte_service.c
> > b/lib/librte_eal/common/rte_service.c
> > index 557b5a9..32a2f8a 100644
> > --- a/lib/librte_eal/common/rte_service.c
> > +++ b/lib/librte_eal/common/rte_service.c
> > @@ -50,6 +50,10 @@ struct rte_service_spec_impl {
> >     uint8_t internal_flags;
> >
> >     /* per service statistics */
> > +   /* Indicates how many cores the service is mapped to run on.
> > +    * It does not indicate the number of cores the service is running
> > +    * on currently.
> > +    */
> >     rte_atomic32_t num_mapped_cores;
> >     uint64_t calls;
> >     uint64_t cycles_spent;
> > @@ -370,12 +374,7 @@ service_run(uint32_t i, struct core_state *cs,
> > uint64_t service_mask,
> >
> >     cs->service_active_on_lcore[i] = 1;
> >
> > -   /* check do we need cmpset, if MT safe or <= 1 core
> > -    * mapped, atomic ops are not required.
> > -    */
> > -   const int use_atomics = (service_mt_safe(s) == 0) &&
> > -                           (rte_atomic32_read(&s-
> >num_mapped_cores) > 1);
> > -   if (use_atomics) {
> > +   if (service_mt_safe(s) == 0) {
> >             if (!rte_atomic32_cmpset((uint32_t *)&s->execute_lock, 0, 1))
> >                     return -EBUSY;
> >
> > --
> > 2.7.4
