> -----Original Message-----
> From: Zoltan Kiss [mailto:zoltan.kiss at linaro.org]
> Sent: Monday, May 18, 2015 2:31 PM
> To: Ananyev, Konstantin; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
> 
> 
> 
> On 18/05/15 14:14, Ananyev, Konstantin wrote:
> >
> >
> >> -----Original Message-----
> >> From: Zoltan Kiss [mailto:zoltan.kiss at linaro.org]
> >> Sent: Monday, May 18, 2015 1:50 PM
> >> To: Ananyev, Konstantin; dev at dpdk.org
> >> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
> >>
> >>
> >>
> >> On 18/05/15 13:41, Ananyev, Konstantin wrote:
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Zoltan Kiss
> >>>> Sent: Monday, May 18, 2015 1:28 PM
> >>>> To: dev at dpdk.org
> >>>> Subject: Re: [dpdk-dev] [PATCH] mempool: limit cache_size
> >>>>
> >>>> Hi,
> >>>>
> >>>> Any opinion on this patch?
> >>>>
> >>>> Regards,
> >>>>
> >>>> Zoltan
> >>>>
> >>>> On 13/05/15 19:59, Zoltan Kiss wrote:
> >>>>> Otherwise cache_flushthresh can be bigger than n, and
> >>>>> a consumer can starve others by keeping every element
> >>>>> either in use or in the cache.
> >>>>>
> >>>>> Signed-off-by: Zoltan Kiss <zoltan.kiss at linaro.org>
> >>>>> ---
> >>>>>     lib/librte_mempool/rte_mempool.c | 3 ++-
> >>>>>     lib/librte_mempool/rte_mempool.h | 2 +-
> >>>>>     2 files changed, 3 insertions(+), 2 deletions(-)
> >>>>>
> >>>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> >>>>> index cf7ed76..ca6cd9c 100644
> >>>>> --- a/lib/librte_mempool/rte_mempool.c
> >>>>> +++ b/lib/librte_mempool/rte_mempool.c
> >>>>> @@ -440,7 +440,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
> >>>>>         mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
> >>>>>
> >>>>>         /* asked cache too big */
> >>>>> -       if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
> >>>>> +       if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> >>>>> +           (uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n) {
> >>>>>                 rte_errno = EINVAL;
> >>>>>                 return NULL;
> >>>>>         }
> >>>
> >>> Why not just 'cache_size > n' then?
> >>
> >> The commit message says: "Otherwise cache_flushthresh can be bigger than
> >> n, and a consumer can starve others by keeping every element either in
> >> use or in the cache."
> >
> > Ah yes, you're right - your condition is more restrictive, which is better.
> > Though here you implicitly convert cache_size and n to floating point and
> > compare two floating-point values:
> > (uint32_t) cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n)
> > Shouldn't it be:
> > (uint32_t)(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER) > n)
> > so that we convert back to uint32_t and compare unsigned integers instead?
> > Same as below:
> > mp->cache_flushthresh = (uint32_t)
> >                  (cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);
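
For illustration only, here is a minimal standalone sketch (not mempool code;
the values are arbitrary) of where the two forms can disagree right at the
boundary:

#include <stdint.h>
#include <stdio.h>

#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5

int main(void)
{
	uint32_t cache_size = 85, n = 127;

	/* As in the patch: the product stays a double (127.5), n is
	 * promoted, and the comparison is done in floating point. */
	int reject_fp =
		(uint32_t)cache_size * CACHE_FLUSHTHRESH_MULTIPLIER > n;

	/* Cast the product back first: (uint32_t)127.5 == 127, which is
	 * exactly the value later stored in mp->cache_flushthresh. */
	int reject_int =
		(uint32_t)(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER) > n;

	printf("%d %d\n", reject_fp, reject_int);	/* prints "1 0" */
	return 0;
}

With the cast applied first, the creation-time check agrees exactly with the
threshold value that is actually stored in the mempool.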
> 
> To bring it further: how about ditching the whole cache_flushthresh
> member of the mempool structure and using this:
> 
> #define CACHE_FLUSHTHRESH(mp) (uint32_t)((mp)->cache_size * 1.5)

That's quite expensive and I think it would slow down mempool_put() quite a lot.
So I'd suggest we keep cache_flushthresh as it is.
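
To make the cost point concrete, a rough sketch (simplified, hypothetical
struct and function names - this is not the actual rte_mempool fast path):

#include <stdint.h>

#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5

/* Simplified stand-in for struct rte_mempool. */
struct pool_sketch {
	uint32_t cache_size;
	uint32_t cache_flushthresh;	/* computed once at create time */
};

/* Current scheme: the put path does one integer load and compare. */
static inline int
flush_needed_precomputed(const struct pool_sketch *mp, uint32_t cache_len)
{
	return cache_len >= mp->cache_flushthresh;
}

/* Macro variant: every call pays an int -> double conversion, a
 * floating-point multiply and a double -> int conversion before the
 * compare. */
#define CACHE_FLUSHTHRESH(mp) \
	((uint32_t)((mp)->cache_size * CACHE_FLUSHTHRESH_MULTIPLIER))

static inline int
flush_needed_recomputed(const struct pool_sketch *mp, uint32_t cache_len)
{
	return cache_len >= CACHE_FLUSHTHRESH(mp);
}

That extra work lands on the hottest path, which is why precomputing the
threshold once at create time looks preferable.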

> 
> Furthermore, do we want to expose the flush threshold multiplier through
> the config file?

Hmm, my opinion is no - so far no one has asked for that,
and as a general tendency we are trying to reduce the number of options in the config file.
Do you have a good justification for when the current value is not good enough?
Anyway, that could probably be the subject of another patch/discussion.
Konstantin

> 
> > ?
> >
> > In fact, as we use it more than once, it probably makes sense to create a
> > macro for it, something like:
> > #define CALC_CACHE_FLUSHTHRESH(c)   ((uint32_t)((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
> >
> > Or even
> >
> > #define CALC_CACHE_FLUSHTHRESH(c)   ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
> >
> >
> > Konstantin
> >
> >>
> >>> Konstantin
> >>>
> >>>>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >>>>> index 9001312..a4a9610 100644
> >>>>> --- a/lib/librte_mempool/rte_mempool.h
> >>>>> +++ b/lib/librte_mempool/rte_mempool.h
> >>>>> @@ -468,7 +468,7 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
> >>>>>      *   If cache_size is non-zero, the rte_mempool library will try to
> >>>>>      *   limit the accesses to the common lockless pool, by maintaining a
> >>>>>      *   per-lcore object cache. This argument must be lower or equal to
> >>>>> - *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
> >>>>> + *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
> >>>>>      *   cache_size to have "n modulo cache_size == 0": if this is
> >>>>>      *   not the case, some elements will always stay in the pool and will
> >>>>>      *   never be used. The access to the per-lcore table is of course
> >>>>>
