> -----Original Message-----
> From: Carrillo, Erik G <erik.g.carri...@intel.com>
> Sent: Thursday, June 18, 2020 11:18 PM
> To: Phil Yang <phil.y...@arm.com>; dev@dpdk.org
> Cc: d...@linux.vnet.ibm.com; Honnappa Nagarahalli
> <honnappa.nagaraha...@arm.com>; Ruifeng Wang
> <ruifeng.w...@arm.com>; Dharmik Thakkar <dharmik.thak...@arm.com>;
> nd <n...@arm.com>; sta...@dpdk.org
> Subject: RE: [PATCH 1/3] eventdev: fix race condition on timer list counter
> 
> Hi Phil,
> 
> Good catch - thanks for the fix.   I've commented in-line:
> 
> > -----Original Message-----
> > From: Phil Yang <phil.y...@arm.com>
> > Sent: Friday, June 12, 2020 6:20 AM
> > To: dev@dpdk.org; Carrillo, Erik G <erik.g.carri...@intel.com>
> > Cc: d...@linux.vnet.ibm.com; honnappa.nagaraha...@arm.com;
> > ruifeng.w...@arm.com; dharmik.thak...@arm.com; n...@arm.com;
> > sta...@dpdk.org
> > Subject: [PATCH 1/3] eventdev: fix race condition on timer list counter
> >
> > The n_poll_lcores counter and poll_lcores array are shared between lcores,
> > but updates to them are not protected by the spinlock on each lcore's
> > timer list. The read-modify-write operations on the counter are not
> > atomic, so there is a potential race condition between lcores.
> >
> > Use C11 atomics with RELAXED ordering to prevent conflicts.
> >
> > Fixes: cc7b73ea9e3b ("eventdev: add new software timer adapter")
> > Cc: erik.g.carri...@intel.com
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Phil Yang <phil.y...@arm.com>
> > Reviewed-by: Dharmik Thakkar <dharmik.thak...@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.w...@arm.com>
> > ---
> >  lib/librte_eventdev/rte_event_timer_adapter.c | 16 ++++++++++++----
> >  1 file changed, 12 insertions(+), 4 deletions(-)
> >
> > diff --git a/lib/librte_eventdev/rte_event_timer_adapter.c b/lib/librte_eventdev/rte_event_timer_adapter.c
> > index 005459f..6a0e283 100644
> > --- a/lib/librte_eventdev/rte_event_timer_adapter.c
> > +++ b/lib/librte_eventdev/rte_event_timer_adapter.c
> > @@ -583,6 +583,7 @@ swtim_callback(struct rte_timer *tim)
> >     uint16_t nb_evs_invalid = 0;
> >     uint64_t opaque;
> >     int ret;
> > +   int n_lcores;
> >
> >     opaque = evtim->impl_opaque[1];
> >     adapter = (struct rte_event_timer_adapter *)(uintptr_t)opaque;
> > @@ -605,8 +606,12 @@ swtim_callback(struct rte_timer *tim)
> >                                   "with immediate expiry value");
> >             }
> >
> > -           if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore].v)))
> > -                   sw->poll_lcores[sw->n_poll_lcores++] = lcore;
> > +           if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore].v))) {
> > +                   n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,
> > +                                           __ATOMIC_RELAXED);
> 
> Just a nit, but let's align the continued line with the opening parentheses in
> this location and below.  With these changes:

Thanks Erik. 
I will do it in the new version.

> 
> Acked-by: Erik Gabriel Carrillo <erik.g.carri...@intel.com>
> 
> > +                   __atomic_store_n(&sw->poll_lcores[n_lcores], lcore,
> > +                                           __ATOMIC_RELAXED);
> > +           }
> >     } else {
> >             EVTIM_BUF_LOG_DBG("buffered an event timer expiry event");
> >
> > @@ -1011,6 +1016,7 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
> >     uint32_t lcore_id = rte_lcore_id();
> >     struct rte_timer *tim, *tims[nb_evtims];
> >     uint64_t cycles;
> > +   int n_lcores;
> >
> >  #ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> >     /* Check that the service is running. */
> > @@ -1033,8 +1039,10 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
> >     if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore_id].v))) {
> >             EVTIM_LOG_DBG("Adding lcore id = %u to list of lcores to poll",
> >                           lcore_id);
> > -           sw->poll_lcores[sw->n_poll_lcores] = lcore_id;
> > -           ++sw->n_poll_lcores;
> > +           n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,
> > +                                           __ATOMIC_RELAXED);
> > +           __atomic_store_n(&sw->poll_lcores[n_lcores], lcore_id,
> > +                                           __ATOMIC_RELAXED);
> >     }
> >
> >     ret = rte_mempool_get_bulk(sw->tim_pool, (void **)tims,
> > --
> > 2.7.4
