<snip>
> >
> > >
> > > Hi Phil,
> > >
> > > Good catch - thanks for the fix. I've commented in-line:
> > >
> > > > -----Original Message-----
> > > > From: Phil Yang <phil.y...@arm.com>
> > > > Sent: Friday, June 12, 2020 6:20 AM
> > > > To: dev@dpdk.org; Carrillo, Erik G <erik.g.carri...@intel.com>
> > > > Cc: d...@linux.vnet.ibm.com; honnappa.nagaraha...@arm.com;
> > > > ruifeng.w...@arm.com; dharmik.thak...@arm.com; n...@arm.com;
> > > > sta...@dpdk.org
> > > > Subject: [PATCH 1/3] eventdev: fix race condition on timer list
> > > > counter
> > > >
> > > > The n_poll_lcores counter and poll_lcores array are shared between
> > > > lcores, but updates to these variables are not protected by the
> > > > spinlock on each lcore's timer list. The read-modify-write
> > > > operations on the counter are not atomic, so there is a potential
> > > > race condition between lcores.
> > > >
> > > > Use C11 atomics with RELAXED ordering to prevent the race.
> > > >
> > > > Fixes: cc7b73ea9e3b ("eventdev: add new software timer adapter")
> > > > Cc: erik.g.carri...@intel.com
> > > > Cc: sta...@dpdk.org
> > > >
> > > > Signed-off-by: Phil Yang <phil.y...@arm.com>
> > > > Reviewed-by: Dharmik Thakkar <dharmik.thak...@arm.com>
> > > > Reviewed-by: Ruifeng Wang <ruifeng.w...@arm.com>
> > > > ---
> > > >  lib/librte_eventdev/rte_event_timer_adapter.c | 16 ++++++++++++----
> > > >  1 file changed, 12 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/lib/librte_eventdev/rte_event_timer_adapter.c b/lib/librte_eventdev/rte_event_timer_adapter.c
> > > > index 005459f..6a0e283 100644
> > > > --- a/lib/librte_eventdev/rte_event_timer_adapter.c
> > > > +++ b/lib/librte_eventdev/rte_event_timer_adapter.c
> > > > @@ -583,6 +583,7 @@ swtim_callback(struct rte_timer *tim)
> > > >  	uint16_t nb_evs_invalid = 0;
> > > >  	uint64_t opaque;
> > > >  	int ret;
> > > > +	int n_lcores;
> > > >
> > > >  	opaque = evtim->impl_opaque[1];
> > > >  	adapter = (struct rte_event_timer_adapter *)(uintptr_t)opaque;
> > > > @@ -605,8 +606,12 @@ swtim_callback(struct rte_timer *tim)
> > > >  			      "with immediate expiry value");
> > > >  	}
> > > >
> > > > -	if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore].v)))
> > > > -		sw->poll_lcores[sw->n_poll_lcores++] = lcore;
> > > > +	if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore].v))) {
> > > > +		n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,
> > > > +					      __ATOMIC_RELAXED);
> > Since this commit will be backported, we should prefer to use the
> > rte_atomic APIs for it. Otherwise, we will have a mix of
> > rte_atomic and C11 APIs.
> > My suggestion is to fix this bug using rte_atomic so that the backported
> > code will have only rte_atomic APIs. Add another commit (if required)
> > in this series to make the bug fix use C11 APIs (that commit will not be
> > backported).
> 
> Hi Honnappa,
> 
> There is no applicable rte_atomic_XXX API to fix this issue.
> rte_atomic32_inc doesn't return the original value of its input parameter,
> and rte_atomic32_add_return can only return the new value.
Ok, understood.

> 
> Meanwhile, the rte_timer_alt_manage and rte_timer_stop_all APIs do not
> support rte_atomic-typed parameters. We would need to rewrite these two
> APIs if we wanted to use rte_atomic operations for n_poll_lcores and the
> poll_lcores array.
> 
> So, a better solution could be to backport the entire C11 solution to the
> stable releases.
I am ok with the approach.
Erik, are you ok with this?

> 
> Thanks,
> Phil
> 
> >
> > >
> > > Just a nit, but let's align the continued line with the opening
> > > parentheses in this location and below.  With these changes:
> > >
> > > Acked-by: Erik Gabriel Carrillo <erik.g.carri...@intel.com>
> > >
> > > > +		__atomic_store_n(&sw->poll_lcores[n_lcores], lcore,
> > > > +				 __ATOMIC_RELAXED);
> > > > +	}
> > > >  	} else {
> > > >  		EVTIM_BUF_LOG_DBG("buffered an event timer expiry event");
> > > >
> > > > @@ -1011,6 +1016,7 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
> > > >  	uint32_t lcore_id = rte_lcore_id();
> > > >  	struct rte_timer *tim, *tims[nb_evtims];
> > > >  	uint64_t cycles;
> > > > +	int n_lcores;
> > > >
> > > >  #ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> > > >  	/* Check that the service is running. */
> > > > @@ -1033,8 +1039,10 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
> > > >  	if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore_id].v))) {
> > > >  		EVTIM_LOG_DBG("Adding lcore id = %u to list of lcores to poll",
> > > >  			      lcore_id);
> > > > -		sw->poll_lcores[sw->n_poll_lcores] = lcore_id;
> > > > -		++sw->n_poll_lcores;
> > > > +		n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,
> > > > +					      __ATOMIC_RELAXED);
> > > > +		__atomic_store_n(&sw->poll_lcores[n_lcores], lcore_id,
> > > > +				 __ATOMIC_RELAXED);
> > > >  	}
> > > >
> > > >  	ret = rte_mempool_get_bulk(sw->tim_pool, (void **)tims,
> > > > --
> > > > 2.7.4
> >
