On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> 
> New mempool handlers will use rte_mempool_create_empty(),
> rte_mempool_set_handler(),
> then rte_mempool_populate_*(). These three functions are new to this
> release, so no problem

Having separate APIs for external pool-manager create is worrisome from the
application's perspective. Is it possible to have rte_mempool_[xmem]_create
for both the external and the existing SW pool manager, and make
rte_mempool_create_empty and rte_mempool_populate_* internal functions?

IMO, we can do that by selecting the specific rte_mempool_set_handler()
based on _flags_ encoding, something like below:

bit 0 - 15   // generic bits used across all the pool managers
bit 16 - 23  // pool-handler-specific flag bits
bit 24 - 31  // to select the specific pool manager (up to 256 different
flavors of pool managers; for backward compatibility, make '0' in bits
24-31 select the existing SW pool manager)

Applications can then choose the handler by selecting the flag in
rte_mempool_[xmem]_create (a rough sketch follows below). That way it will be
easy for testpmd or any other application to choose the pool handler from the
command line etc. in future.
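Just a rough sketch of the idea (the macro names below are hypothetical;
only MEMPOOL_F_SP_PUT/MEMPOOL_F_SC_GET are existing flags):

#include <rte_mempool.h>

/* Hypothetical encoding for illustration only; these macros are not part
 * of the existing rte_mempool API. */
#define MEMPOOL_F_MANAGER_SHIFT  24
#define MEMPOOL_F_MANAGER_MASK   (0xffu << MEMPOOL_F_MANAGER_SHIFT)
#define MEMPOOL_MANAGER_SW       0u  /* '0' keeps the existing SW pool manager */
#define MEMPOOL_MANAGER_EXT_HW   1u  /* example id of an external HW manager */

/* Inside rte_mempool_[xmem]_create(): decode the manager id and call
 * rte_mempool_set_handler() internally instead of exposing a separate
 * create path. */
static inline unsigned int
mempool_manager_from_flags(unsigned int flags)
{
        return (flags & MEMPOOL_F_MANAGER_MASK) >> MEMPOOL_F_MANAGER_SHIFT;
}

/* Application side: choose the manager in the flags argument already
 * passed to rte_mempool_create(). */
unsigned int flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET |
        (MEMPOOL_MANAGER_EXT_HW << MEMPOOL_F_MANAGER_SHIFT);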

And we can remove the "mbuf: get default mempool handler from configuration"
change-set, OR just add: if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined, then
set the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.

What do you think?

> to add a parameter to one of them for the config data. Also since we're
> adding some new
> elements to the mempool structure, how about we add a new pointer for a void
> pointer to a
> config data structure, as defined by the handler.
> 
> So, new element in rte_mempool struct alongside the *pool
>     void *pool;
>     void *pool_config;
> 
> Then add a param to the rte_mempool_set_handler function:
> int
> rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void
> *pool_config)

IMO, maybe we need to have _set_ and _get_, so I think we can have
two separate callbacks in the external pool manager for that if required.
IMO, for now we can live with the pool-manager-specific 8 bits (bits 16-23)
for the configuration as mentioned above, and add the new callbacks for
set and get when required.
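If we do add them later, a rough sketch could be something like below (all
type and field names here are hypothetical, not part of this patch set):

struct rte_mempool; /* forward declaration */

/* Hypothetical optional callbacks; a handler that needs no runtime
 * configuration would leave them NULL. */
typedef int (*rte_mempool_set_config_t)(struct rte_mempool *mp,
                                        void *pool_config);
typedef int (*rte_mempool_get_config_t)(struct rte_mempool *mp,
                                        void **pool_config);

struct rte_mempool_handler_cfg_ops {
        rte_mempool_set_config_t set_config;
        rte_mempool_get_config_t get_config;
};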

> > 2) IMO, It is better to change void *pool in struct rte_mempool to
> > anonymous union type, something like below, so that mempool
> > implementation can choose the best type.
> >     union {
> >             void *pool;
> >             uint64_t val;
> >     }
> 
> Could we do this by using the union for the *pool_config suggested above,
> would that give
> you what you need?

It would be extra overhead for an external pool manager to _alloc_ memory,
store the allocated pointer in the mempool struct (as *pool) and use pool for
pointing to other data structures, as some implementations need only a
limited number of bytes to store the external-pool-manager-specific context.

In order to fix this problem, we may classify fast-path and slow-path
elements in struct rte_mempool, move all fast-path elements into the first
cache line, and create an empty opaque space in the remaining bytes of that
cache line, so that if the external pool manager needs only limited space
then it is not required to allocate separate memory to save the
per-core cache in the fast path.

something like below,
union {
        void *pool;
        uint64_t val;
        uint8_t extra_mem[16]; /* available free bytes in fast-path cache line */
};
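Just to illustrate the intent (the extra_mem field above and all names below
are hypothetical), a handler whose context fits in those spare bytes could
then skip the separate allocation entirely:

#include <stdint.h>
#include <rte_mempool.h>

/* Hypothetical HW pool context small enough for the spare fast-path bytes. */
struct my_hw_pool_ctx {
        uint32_t pool_id;  /* HW pool identifier */
        uint32_t aura;     /* HW aura/slot number */
};

static int
my_hw_pool_alloc(struct rte_mempool *mp)
{
        /* 'extra_mem' is the proposed in-place space in the first cache
         * line; no rte_malloc() and no extra pointer chase on get/put. */
        struct my_hw_pool_ctx *ctx = (struct my_hw_pool_ctx *)mp->extra_mem;

        ctx->pool_id = 0;
        ctx->aura = 0;
        return 0;
}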

Other points:

1) Is it possible to remove the unused is_mp parameter in the
__mempool_put_bulk function, as it is just an internal function?

2) Considering "get" and "put" are the fast-path callbacks for the
pool manager, is it possible to avoid the extra overhead of the following
_load_ and additional cache line on each call?
rte_mempool_handler_table.handler[handler_idx]

I understand it is for multiprocess support, but I am thinking: can we
introduce something like the ethernet API's multiprocess support and
resolve the "put" and "get" function pointers at init and store them in
struct rte_mempool? Something like

file: drivers/net/ixgbe/ixgbe_ethdev.c
search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
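
Just a rough sketch of the idea, assuming the handler table and handler_idx
names from this patch set (the cached put/get pointers in struct rte_mempool
are hypothetical):

#include <rte_eal.h>
#include <rte_mempool.h>

static void
mempool_cache_handler_ops(struct rte_mempool *mp)
{
        const struct rte_mempool_handler *h =
                &rte_mempool_handler_table.handler[mp->handler_idx];

        /* Resolve once per process at init (both primary and secondary
         * processes run this, the same way ixgbe_ethdev.c re-resolves its
         * RX/TX burst functions when rte_eal_process_type() !=
         * RTE_PROC_PRIMARY), so the fast path no longer touches the
         * handler table's cache line. */
        mp->put = h->put;  /* hypothetical cached function pointers */
        mp->get = h->get;
}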

Jerin
