Hi Olivier,
On 04/09/2017 14:34, Olivier MATZ wrote:
Hi Sergio,
On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote:
On 15/08/2017 09:07, Santosh Shukla wrote:
v3:
- Rebased on top of v17.11-rc0
- Updated version.map entry to v17.11.
v2:
DPDK has support for both HW and SW mempools. Those mempools
can work optimally for specific PMDs.
Example:
SW ring based PMD for Intel NICs.
HW mempool manager dpaa2 for dpaa2 PMD.
HW mempool manager fpa for octeontx PMD.
There could be a use case where NICs from different vendors are used
on the same platform, and the user would like to configure mempools in
such a way that each of those NICs uses its preferred mempool (for
performance reasons).
The current mempool infrastructure doesn't support such a use case.
This patchset tries to address that problem in 2 steps:
0) Allowing the user to dynamically configure the mempool handle by
passing the pool handle as an EAL arg to `--mbuf-pool-ops=<pool-handle>`
(example below).
1) Allowing PMDs to advertise their preferred pool to an application.
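For illustration, step 0) could look like this on the command line
(the invocation is illustrative; "stack" is an existing SW handler, and
the option is the EAL arg proposed in this patchset):

    ./testpmd -c 0x3 -n 4 --mbuf-pool-ops="stack" -- -i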
From an application point of view:
- The application must ask the PMD about its preferred pool.
- The PMD responds with its preferred pool; otherwise
CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD.
* The application programming sequence would be:

    char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
    rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
    mp = rte_mempool_create_empty(...);
    rte_mempool_set_ops_byname(mp, pref_mempool, NULL);
    rte_mempool_populate_default(mp);
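Fleshing that out, a minimal sketch of an application-side helper for an
mbuf pool (assuming the rte_eth_dev_get_preferred_pool_ops() API proposed
in this patchset; the helper name and sizing are illustrative):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    mbuf_pool_create_preferred(uint16_t port_id, const char *name,
                               unsigned int n, unsigned int cache_size,
                               int socket_id)
    {
            char ops_name[RTE_MEMPOOL_OPS_NAMESIZE];
            struct rte_mempool *mp;

            /* ask the PMD for its preferred pool ops */
            if (rte_eth_dev_get_preferred_pool_ops(port_id, ops_name) < 0)
                    return NULL;

            /* same element layout as rte_pktmbuf_pool_create() */
            mp = rte_mempool_create_empty(name, n,
                    sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
                    cache_size, sizeof(struct rte_pktmbuf_pool_private),
                    socket_id, 0);
            if (mp == NULL)
                    return NULL;

            if (rte_mempool_set_ops_byname(mp, ops_name, NULL) != 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }

            /* default mbuf pool private data and per-object init */
            rte_pktmbuf_pool_init(mp, NULL);
            if (rte_mempool_populate_default(mp) < 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }
            rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
            return mp;
    }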
What about introducing an API like:
rte_pktmbuf_pool_create_with_ops(..., ops_name, config_pool);
I think that API would help for the case where the application wants an mbuf
pool with, e.g., the stack handler.
Sure we can do the empty/set_ops/populate sequence, but the only thing we
want to change from the default pktmbuf_pool_create API is the pool handler.
The application just needs to decide the ops handler to use, either the
default or one suggested by the PMD?
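For clarity, a sketch of what such an API might look like (hypothetical
signature, mirroring rte_pktmbuf_pool_create() with two extra parameters;
not an existing DPDK API):

    struct rte_mempool *
    rte_pktmbuf_pool_create_with_ops(const char *name, unsigned int n,
            unsigned int cache_size, uint16_t priv_size,
            uint16_t data_room_size, int socket_id,
            const char *ops_name, void *pool_config);

    /* e.g. an mbuf pool backed by the existing "stack" handler:
     *
     * mp = rte_pktmbuf_pool_create_with_ops("mbuf_pool", 8192, 256, 0,
     *         RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), "stack", NULL);
     */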
I think ideally we would have similar APIs:
- rte_mempool_create_with_ops(...)
- rte_mempool_xmem_create_with_ops(...)
Today, we may only want to change the mempool handler, but if we
need to change something else tomorrow, we would need to add another
parameter again, breaking the ABI.
If we pass a config structure, adding a new field in it would also break the
ABI, except if the structure is opaque, with accessors. These accessors would
be functions (e.g. mempool_cfg_new, mempool_cfg_set_pool_ops, ...). This is
not much different from what we have now.
The advantage I can see of working on a config structure instead of directly
on a mempool is that the API can be reused to build a default config.
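As an illustration, the opaque config could look like the sketch below
(all names are hypothetical, following the accessors mentioned above):

    /* opaque to the application; allocated and owned by the library */
    struct rte_mempool_cfg;

    struct rte_mempool_cfg *rte_mempool_cfg_new(void);
    void rte_mempool_cfg_free(struct rte_mempool_cfg *cfg);
    int rte_mempool_cfg_set_pool_ops(struct rte_mempool_cfg *cfg,
            const char *ops_name, void *pool_config);
    /* a new knob tomorrow only adds an accessor here, without
     * breaking the ABI for existing callers */

    struct rte_mempool *
    rte_pktmbuf_pool_create_with_cfg(const char *name, unsigned int n,
            unsigned int cache_size, uint16_t priv_size,
            uint16_t data_room_size, int socket_id,
            const struct rte_mempool_cfg *cfg);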
That said, I think it's quite orthogonal to this patch since we still require
the ethdev api.
Olivier
Fair enough.
Just to double check that we are on the same page:
- rte_mempool_create is just there for backwards compatibility, and a
sequence of create_empty -> set_ops (optional) -> init ->
populate_default -> obj_iter (optional) is the recommended way of
creating mempools.
- if an application wants to use rte_mempool_xmem_create with a different
pool handler, it needs to replicate the function and add the set_ops step.
Now, if rte_pktmbuf_pool_create is still the preferred mechanism for
applications to create mbuf pools, wouldn't it make sense to offer the
option of either passing the ops_name as suggested before, or letting the
application just set a different pool handler? I understand that it is
either breaking the API or introducing a new API, but is the solution to
basically "copy" the whole function in the application and add an
optional step (set_ops)?
With these patches we can:
- change the default pool handler with the EAL option, which does *not*
allow for pktmbuf_pool_create with different handlers.
- check the PMD's preferred/supported handlers, but then we need to
implement a function with the whole sequence; we cannot use
pktmbuf_pool_create.
It looks to me then that any application that wants to use a pool handler
other than the default/ring needs to implement its own create_pool with
set_ops.
Thanks,
Sergio