Hi Konstantin,

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, August 29, 2017 5:57 AM
> To: Carrillo, Erik G <erik.g.carri...@intel.com>; rsanf...@akamai.com
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 1/3] timer: add per-installer pending lists
> for each lcore
> 
> Hi Gabriel,
> 
> >
> > Instead of each priv_timer struct containing a single skiplist, this
> > commit adds a skiplist for each enabled lcore to priv_timer. In the
> > case that multiple lcores repeatedly install timers on the same target
> > lcore, this change reduces lock contention for the target lcore's
> > skiplists and increases performance.
> 
> I am not an rte_timer expert, but there is one thing that worries me:
> It seems that the complexity of timer_manage() has increased quite a bit
> with this patch:
> now it has to check/process up to RTE_MAX_LCORE skiplists instead of one,
> and it also has to somehow properly sort up to RTE_MAX_LCORE lists of
> retrieved (ready-to-run) timers.
> Wouldn't all that affect its running time?
> 

Yes, it would indeed increase the running time.

> I understand your intention to reduce lock contention, but I suppose at least
> it could be done in a configurable way.
> Let's say we allow the user to specify the dimension of pending_lists[] at
> init phase or so.
> Then a timer from lcore_id=N will end up in
> pending_lists[N % RTE_DIM(pending_lists)].
> 

This is a neat idea, and it would seemingly allow the original performance to
be maintained for applications where contention is not an issue. I'll look
into this change, as it may address other developers' concerns as well -
thanks for the suggestion.

> Another thought - it might be better to divide the pending timers list not
> by client (lcore) id, but by expiration time - some analog of a timer wheel
> or so.
> That, I think, might greatly decrease the probability that timer_manage()
> and timer_add() will try to access the same list.
> On the other hand, timer_manage() would still have to consume the
> skip-lists one by one.
> Though I suppose that's quite a radical change from what we have right now.
> Konstantin
> 

Thanks,
Gabriel
