Zero-copy access to the mempool cache is beneficial for PMD performance, and 
must be provided by the mempool library to fix [Bug 1052] without a performance 
regression.

[Bug 1052]: https://bugs.dpdk.org/show_bug.cgi?id=1052


This RFC proposes a conceptual zero-copy put function: the application 
promises to store a number of objects, and in return receives the address at 
which to store them.

I would like some early feedback.

Notes:
* Allowing the 'cache' parameter to be NULL, and getting it from the mempool 
instead, was inspired by rte_mempool_cache_flush().
* Asserting that the 'mp' parameter is not NULL is not done by other functions, 
so I omitted it here too.

NB: Please ignore formatting. Also, this code has not even been compile tested.

/**
 * Promise to put objects in a mempool via zero-copy access to a user-owned 
mempool cache.
 *
 * @param cache
 *   A pointer to the mempool cache.
 * @param mp
 *   A pointer to the mempool.
 * @param n
 *   The number of objects to be put in the mempool cache.
 * @return
 *   A pointer to the location in the mempool cache where the objects
 *   are to be stored.
 *   NULL on error, with rte_errno set appropriately.
 */
static __rte_always_inline void *
rte_mempool_cache_put_bulk_promise(struct rte_mempool_cache *cache,
        struct rte_mempool *mp,
        unsigned int n)
{
    void **cache_objs;

    if (cache == NULL)
        cache = rte_mempool_default_cache(mp, rte_lcore_id());
    if (cache == NULL) {
        rte_errno = EINVAL;
        return NULL;
    }

    rte_mempool_trace_cache_put_bulk_promise(cache, mp, n);

    /* The request itself is too big for the cache */
    if (unlikely(n > cache->flushthresh)) {
        rte_errno = EINVAL;
        return NULL;
    }

    /*
     * The cache follows the following algorithm:
     *   1. If the objects cannot be added to the cache without crossing
     *      the flush threshold, flush the cache to the backend.
     *   2. Add the objects to the cache.
     */

    if (cache->len + n <= cache->flushthresh) {
        /* The objects fit; reserve room at the tail of the cache. */
        cache_objs = &cache->objs[cache->len];
        cache->len += n;
    } else {
        /* Flush the entire cache to the backend to make room,
         * then reserve room from the start of the cache.
         */
        cache_objs = &cache->objs[0];
        rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
        cache->len = n;
    }

    RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
    RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);

    return cache_objs;
}


Med venlig hilsen / Kind regards,
-Morten Brørup

