On 5/24/2016 4:35 PM, Jerin Jacob wrote:
> On Thu, May 19, 2016 at 02:44:59PM +0100, David Hunt wrote:
>> +    /*
>> +     * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>> +     * set the correct index into the handler table.
>> +     */
>> +    if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
>> +            rte_mempool_set_handler(mp, "ring_sp_sc");
>> +    else if (flags & MEMPOOL_F_SP_PUT)
>> +            rte_mempool_set_handler(mp, "ring_sp_mc");
>> +    else if (flags & MEMPOOL_F_SC_GET)
>> +            rte_mempool_set_handler(mp, "ring_mp_sc");
>> +    else
>> +            rte_mempool_set_handler(mp, "ring_mp_mc");
> IMO, we should decouple the implementation-specific flags of _a particular_
> external pool manager implementation from the generic rte_mempool_create_empty
> function. Going further, when we introduce new flags for a custom HW-accelerated
> external pool manager, this common code will become bloated.

These flags are only there to maintain backward compatibility for the
default handlers. I would not envisage adding more flags to this; I would
suggest instead adding a new handler using the new API calls. So I would
not expect this code to grow much in the future.
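
For illustration, plugging in a new handler with the new API (rather than
via flags) would look roughly like the sketch below. Apart from
rte_mempool_handler and rte_mempool_set_handler, which appear in the patch,
the callback field names and the registration call are my assumptions about
the API, not the exact code.

#include <errno.h>
#include <rte_mempool.h>

/* Hypothetical callbacks backing a HW-accelerated pool (stubs only). */
static void *my_hw_alloc(struct rte_mempool *mp) { (void)mp; return NULL; }
static void my_hw_free(void *pool) { (void)pool; }
static int my_hw_put(void *pool, void * const *obj_table, unsigned n)
{ (void)pool; (void)obj_table; (void)n; return 0; }
static int my_hw_get(void *pool, void **obj_table, unsigned n)
{ (void)pool; (void)obj_table; (void)n; return -ENOBUFS; }
static unsigned my_hw_get_count(void *pool) { (void)pool; return 0; }

/* Field names here are assumed from the handler struct in this patchset. */
static struct rte_mempool_handler my_hw_handler = {
	.name = "my_hw_pool",
	.alloc = my_hw_alloc,
	.free = my_hw_free,
	.put = my_hw_put,
	.get = my_hw_get,
	.get_count = my_hw_get_count,
};

static void
my_hw_handler_init(struct rte_mempool *mp)
{
	/* Registration call name is an assumption; the patchset may
	 * provide a helper macro for this instead. */
	rte_mempool_handler_register(&my_hw_handler);

	/* Then, at pool creation time, instead of adding a new flag: */
	rte_mempool_set_handler(mp, "my_hw_pool");
}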


>> +/** Structure storing the table of registered handlers. */
>> +struct rte_mempool_handler_table {
>> +    rte_spinlock_t sl;     /**< Spinlock for add/delete. */
>> +    uint32_t num_handlers; /**< Number of handlers in the table. */
>> +    /** Storage for all possible handlers. */
>> +    struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
>> +};
> add __rte_cache_aligned to this structure to avoid the "handler" memory
> cacheline being shared with other variables

Will do.
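
i.e. the declaration would then read:

/** Structure storing the table of registered handlers. */
struct rte_mempool_handler_table {
	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
	uint32_t num_handlers; /**< Number of handlers in the table. */
	/** Storage for all possible handlers. */
	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
} __rte_cache_aligned;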

>> +
>> +/** Array of registered handlers */
>> +extern struct rte_mempool_handler_table rte_mempool_handler_table;
>> +
>> +/**
>> + * @internal Get the mempool handler from its index.
>> + *
>> + * @param handler_idx
>> + *   The index of the handler in the handler table. It must be a valid
>> + *   index: (0 <= idx < num_handlers).
>> + * @return
>> + *   The pointer to the handler in the table.
>> + */
>> +static struct rte_mempool_handler *
> inline?

Will do.
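
i.e. something like the following (the function name is not visible in the
quoted hunk, so the one below is assumed):

static inline struct rte_mempool_handler *
rte_mempool_handler_get(int handler_idx)
{
	/* Caller guarantees 0 <= handler_idx < num_handlers. */
	return &rte_mempool_handler_table.handler[handler_idx];
}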

>>              /* How many do we require i.e. number to fill the cache + the request */
>> -            ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
>> +            ret = rte_mempool_ext_get_bulk(mp,
> This turns an inline function into a function pointer call. Nothing wrong
> with that. However, do you see any performance drop with the "local cache"
> only use case?
>
> http://dpdk.org/dev/patchwork/patch/12993/

With the latest mempool manager patch (without 12993), I see no
performance degradation on my Haswell machine. However, when I apply
patch 12993, I'm seeing a 200-300 kpps drop.

With 12993, mempool_perf_autotest shows 24% more enqueues/dequeues, but
testpmd forwarding traffic between two 40 Gigabit interfaces from a
hardware traffic generator, with one core doing the forwarding, shows a
drop of 200-300 kpps.
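
(For reference, the refill that used to be a direct rte_ring_mc_dequeue_bulk()
call now goes through the handler table, roughly as sketched below. The
mp->handler_idx and mp->pool fields, the accessor name and the get callback
are my assumptions, not the exact patch code.)

/* Assumed shape of the indirect get path discussed above: look up the
 * handler registered for this mempool and call its get callback through
 * the function pointer. */
static inline int
rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
	struct rte_mempool_handler *handler;

	handler = rte_mempool_handler_get(mp->handler_idx);
	return handler->get(mp->pool, obj_table, n);
}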

Regards,
Dave.



---snip---

