On Thu, Jan 6, 2022 at 5:54 PM Morten Brørup <m...@smartsharesystems.com> wrote:
>
> A flush threshold for the mempool cache was introduced in DPDK version
> 1.3, but rte_mempool_do_generic_get() was not completely updated back
> then.
>
> The incompleteness did not cause any functional bugs, so this patch
> can be considered a cleanup refactoring.
>
> This patch completes the update of rte_mempool_do_generic_get() as
> follows:
>
> 1. A few comments were misplaced or no longer correct.
> Some comments have been updated, added, or corrected.
>
> 2. The code that initially screens the cache request was not updated.
> The initial screening compared the request length to the cache size,
> which was correct before, but became irrelevant with the introduction of
> the flush threshold. E.g., the cache can hold up to flushthresh objects,
> which is more than its size, so some requests were not served from the
> cache even though they could have been.
> The initial screening has now been corrected to match the initial
> screening in rte_mempool_do_generic_put(), which verifies that a cache
> is present, and that the length of the request does not overflow the
> memory allocated for the cache.
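>
> As a minimal sketch of the change (simplified, not the literal patch;
> RTE_MEMPOOL_CACHE_MAX_SIZE bounds the memory allocated for the cache's
> object array):
>
>     /* Before: reject any request of cache size or larger */
>     if (unlikely(cache == NULL || n >= cache->size))
>         goto ring_dequeue;
>
>     /* After: match rte_mempool_do_generic_put(); reject only if no
>      * cache is present, or the request would overflow the memory
>      * allocated for the cache. */
>     if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
>         goto ring_dequeue;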
>
> 3. The code flow for satisfying the request from the cache was weird.
> The likely code path where the objects are simply served from the cache
> was treated as unlikely; now it is treated as likely.
> And in the code path where the cache was backfilled first, numbers were
> added and subtracted from the cache length; now this code path simply
> sets the cache length to its final value.
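>
> A minimal sketch of the new flow (illustrative, not the literal patch;
> cache_objs points to cache->objs):
>
>     if (likely(n <= cache->len)) {
>         /* Likely path: serve the request straight from the cache */
>         cache->len -= n;
>         rte_memcpy(obj_table, &cache_objs[cache->len],
>                 sizeof(void *) * n);
>     } else {
>         /* Backfill the cache from the backing store, then serve the
>          * top n objects of the refilled cache */
>         if (rte_mempool_ops_dequeue_bulk(mp, &cache_objs[cache->len],
>                 cache->size + n - cache->len) < 0)
>             goto ring_dequeue;
>         rte_memcpy(obj_table, &cache_objs[cache->size],
>                 sizeof(void *) * n);
>         /* Set the cache length directly to its final value, rather
>          * than adding and subtracting along the way */
>         cache->len = cache->size;
>     }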
>
> 4. The objects were returned in reverse order.
> Returning the objects in reverse order is not necessary, so rte_memcpy()
> is now used instead.
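>
> For reference, a sketch of the difference (not the literal diff):
>
>     /* Before: copy out one object at a time, in reverse order */
>     for (index = 0, len = cache->len - 1; index < n;
>             ++index, len--, obj_table++)
>         *obj_table = cache_objs[len];
>     cache->len -= n;
>
>     /* After: a single forward bulk copy */
>     cache->len -= n;
>     rte_memcpy(obj_table, &cache_objs[cache->len], sizeof(void *) * n);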

Have you checked the performance with a network workload?
IMO, reverse order makes sense (LIFO vs FIFO).
LIFO keeps the cache warm, as the same buffers are reused frequently.
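
To make the ordering concern concrete, here is a toy model (hypothetical
buffer names, not DPDK API) showing the burst order each strategy hands
back:

    #include <stdio.h>

    /* Index 3 holds the most recently freed buffer. */
    int main(void)
    {
        const char *cache_objs[] = { "A", "B", "C", "D" };
        unsigned int len = 4, n = 2, i;

        /* Reverse-order loop (old): most recently freed buffer first */
        for (i = 0; i < n; i++)
            printf("%s ", cache_objs[len - 1 - i]);   /* prints: D C */
        printf("\n");

        /* Forward bulk copy (new): most recently freed buffer last */
        for (i = 0; i < n; i++)
            printf("%s ", cache_objs[len - n + i]);   /* prints: C D */
        printf("\n");
        return 0;
    }

Either way the same objects are served from the top of the stack; the
question is whether obj_table[0], the buffer the application typically
touches first, should be the hottest one.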
