On Tue, Dec 28, 2021 at 03:28:45PM +0100, Morten Brørup wrote:
> Hi mempool maintainers and DPDK team.
> 
> Does anyone know the reason or history why CACHE_FLUSHTHRESH_MULTIPLIER was 
> chosen to be 1.5? I think it is counterintuitive.
> 
> The mempool cache flush threshold was introduced in DPDK version 1.3; it was 
> not in DPDK version 1.2. The copyright notice for rte_mempool.c says year 
> 2012.
> 
> 
> Here is my analysis:
> 
> With the multiplier of 1.5, a mempool cache is allowed to fill up to 50 %
> above its target size before its excess entries are flushed to the
> mempool (thereby reducing the cache length to the target size).
> 
> In the opposite direction, a mempool cache is allowed to be drained 
> completely, i.e. up to 100 % below its target size.
> 
> My instinct tells me that it would be more natural to let a mempool cache go 
> the same amount above and below its target size, i.e. using a flush 
> multiplier of 2 instead of 1.5.
> 
> Also, the cache should be allowed to fill up to and including the flush 
> threshold, so it is flushed when the threshold is exceeded, instead of when 
> it is reached.
> 
> Here is a simplified example:
> 
> Imagine a cache target size of 32, corresponding to a typical packet burst.
> With a flush multiplier of 2 (and len > threshold instead of len >=
> threshold), the cache could hold 1 +/- 1 packet bursts. With the current
> multiplier it can only hold [0 .. 1.5) packet bursts, not really providing
> a lot of elasticity.
> 
Hi Morten,

Interesting to see this being looked at again. The original idea of adding
in some extra room above the requested value was to avoid the worst-case
scenario of a pool oscillating between full and empty repeatedly due to the
addition/removal of perhaps a single packet. As for why 1.5 was chosen as
the value, I don't recall any particular reason for it myself. The main
objective was to have separate flush and size values so that we could go
a bit above full and, when flushing, not empty the entire cache down to
zero.
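
For reference, this is roughly how the threshold is derived in
rte_mempool.c, as I remember it (the exact form may differ a little
between releases); with a cache size of 32 it works out to a flush
threshold of 48:

#include <rte_mempool.h>

/* Paraphrased from rte_mempool.c, not verbatim. */
#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
#define CALC_CACHE_FLUSHTHRESH(c) \
        ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))

static void
mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size)
{
        cache->size = size;
        cache->flushthresh = CALC_CACHE_FLUSHTHRESH(size);
        cache->len = 0;
}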

In terms of the behavioural points you make above, I wonder if symmetry is
actually necessary or desirable in this case. After all, the ideal case is
probably to keep the mempool cache neither full nor empty, so that both
allocations and frees can be done without having to go to the underlying
shared data structure. To accommodate this, the mempool will only flush when
the number of elements is greater than size * 1.5, and then it only flushes
elements down to size, ensuring that allocations can still take place.
On allocation, new buffers are taken when we don't have enough in the cache
to fulfil the request, and then the cache is filled up to size, not to the
flush threshold.
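
To illustrate, here's a rough sketch of the put path as I recall it (not
the verbatim code, and it skips the corner case where a single burst is
larger than the flush threshold):

#include <string.h>
#include <rte_mempool.h>

/* Sketch only - the real logic lives in rte_mempool_do_generic_put(). */
static inline void
cache_put_sketch(struct rte_mempool *mp, struct rte_mempool_cache *cache,
                void * const *obj_table, unsigned int n)
{
        /* Objects always land in the cache first. */
        memcpy(&cache->objs[cache->len], obj_table, sizeof(void *) * n);
        cache->len += n;

        /* Flush only once the threshold (size * 1.5) is reached - note
         * the >= comparison you mention - and only down to size, so the
         * next allocation burst can still be served from the cache. */
        if (cache->len >= cache->flushthresh) {
                rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
                                cache->len - cache->size);
                cache->len = cache->size;
        }
}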

Now, for the scenario you describe - where the mempool cache size is set to
be the same as the burst size - this scheme probably does break down, in
that we don't really have any burst elasticity. However, I would query
whether that is a configuration that is actually used, since it should be
apparent to the user that it provides no elasticity. Looking at testpmd and
our example apps, the standard there is a burst size of 32 and a mempool
cache of ~256. In OVS code, netdev-dpdk.c seems to initialize the mempool
with a cache size of RTE_MEMPOOL_CACHE_MAX_SIZE (through the MP_CACHE_SZ
define). In all these cases, I think the 1.5 multiplier should work just
fine for us. That said, if you want to bump it up to 2x, I can't say I'd
object strongly, as it should be harmless, I think.
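
For example, something along the lines of what the example apps end up
doing (figures are illustrative rather than copied from testpmd or OVS):

#include <rte_mbuf.h>
#include <rte_lcore.h>

#define NB_MBUF    8192
#define MBUF_CACHE 256     /* per-lcore cache; flushes at 256 * 1.5 = 384 */
#define BURST_SIZE 32      /* typical rx/tx burst */

static struct rte_mempool *
create_pool(void)
{
        return rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, MBUF_CACHE,
                        0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
}

With a 256-element cache and 32-packet bursts there is plenty of room
between the refill level (256) and the flush threshold (384), so the lack
of symmetry shouldn't really be noticeable.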

Just my initial thoughts,

/Bruce
