> On Feb 24, 2023, at 6:56 AM, Honnappa Nagarahalli 
> <honnappa.nagaraha...@arm.com> wrote:
> 
> 
> 
>> -----Original Message-----
>> From: Morten Brørup <m...@smartsharesystems.com>
>> Sent: Friday, February 24, 2023 6:13 AM
>> To: Harris, James R <james.r.har...@intel.com>; dev@dpdk.org
>> Subject: RE: Bug in rte_mempool_do_generic_get?
>> 
>>> 
>> 
>>> If you have a mempool with 2048 objects, shouldn't 4 cores each be able to 
>>> do a 512 buffer bulk get, regardless of the configured cache size?
>> 
>> No, the scenario you described above is the expected behavior. I think it is
>> documented somewhere that objects in the caches are unavailable for other
>> cores, but now I cannot find where this is documented.
>> 

Thanks Morten.

Yeah, I think it is documented somewhere, but I also couldn't find it.  I was 
aware that cores cannot allocate from another core's cache.  My surprise was 
that, in a pristine new mempool, 4 cores could not each do one initial 
512-buffer bulk get.  But I also see that even before the a2833ecc5 patch, the 
cache would get populated on gets smaller than the cache size, in addition to 
the buffers requested by the user.  So if the cache size is 256 and the bulk 
get is for 128 buffers, it pulls 384 buffers from the backing pool - 128 for 
the caller, another 256 to prefill the cache.  Your patch makes this cache 
filling consistent between the less-than-cache-size and 
greater-than-or-equal-to-cache-size cases.

>> Furthermore, since the effective per-core cache size is 1.5 * the configured
>> cache size, a configured cache size of 256 may leave up to 384 objects in
>> each per-core cache.
>> 
>> With 4 cores, you can expect up to 3 * 384 = 1152 objects sitting in the
>> caches of other cores. If you want to be able to pull 512 objects with each
>> core, the pool size should be 4 * 512 + 1152 objects.
> Maybe we should document this in the mempool library?
> 

Maybe.  But the case I described here is a bit wonky - SPDK should never have 
been specifying a non-zero cache size in this case.  We only noticed this 
change in behavior because we were creating the mempool with a cache when we 
shouldn't have.

-Jim
