On 6/19/2017 6:31 PM, Jerin Jacob wrote:
-----Original Message-----
Date: Mon, 19 Jun 2017 17:22:46 +0530
From: Hemant Agrawal <hemant.agra...@nxp.com>
To: Santosh Shukla <santosh.shu...@caviumnetworks.com>,
 olivier.m...@6wind.com, dev@dpdk.org
CC: jerin.ja...@caviumnetworks.com
Subject: Re: [PATCH 0/2] Allow application set mempool handle
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101
 Thunderbird/45.8.0

On 6/1/2017 1:35 PM, Santosh Shukla wrote:
Some platforms can have two different NICs, for example an external PCI Intel
40G card and an integrated NIC like vNIC/octeontx/dpaa2.

Each NIC would like to use its preferred pool, e.g. the external PCI card's /
vNIC's preferred pool would be the ring-based pool, while octeontx/dpaa2 would
prefer their ext-mempools (HW-backed mempool handlers).
Right now the framework doesn't support such a case; only one pool can be used
across two different NICs. For that, the user has to statically set
CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>.
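For reference, a hedged illustration of the current static approach (the option
spelling here follows the cover letter; in mainline the equivalent build option
appears as CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS in config/common_base):

    # build-time selection of the single mempool handler used by all ports
    CONFIG_RTE_MEMPOOL_DEFAULT_OPS="ring_mp_mc"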

So two approaches are proposed:
Patch 1) Introduce the EAL option --pkt-mempool=<pool-name>.
Patch 2) Introduce an ethdev API, _get_preferred_pool(), through which the PMD
gets a chance to advertise its pool capability to the application. Based on
that hint, the application creates the pool for that driver (a hedged sketch
follows below).
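To make patch 2's flow concrete, here is a hedged sketch in C of how an
application might consume such a hint. The name and signature of
rte_eth_dev_get_preferred_pool() are assumptions (the cover letter only says
"_get_preferred_pool()"); the rest uses existing mempool/mbuf APIs.

    /*
     * Hedged sketch: rte_eth_dev_get_preferred_pool() is the *proposed*
     * API from patch 2 (name/signature assumed); everything else is the
     * existing mempool/mbuf API.
     */
    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    create_pool_for_port(uint16_t port_id, unsigned int nb_mbufs)
    {
            char ops_name[RTE_MEMPOOL_OPS_NAMESIZE];
            char pool_name[RTE_MEMPOOL_NAMESIZE];
            struct rte_mempool *mp;

            /* Ask the PMD which mempool handler it prefers (proposed API). */
            if (rte_eth_dev_get_preferred_pool(port_id, ops_name) != 0)
                    snprintf(ops_name, sizeof(ops_name), "ring_mp_mc");

            snprintf(pool_name, sizeof(pool_name), "mbuf_pool_p%u",
                     (unsigned int)port_id);
            mp = rte_mempool_create_empty(pool_name, nb_mbufs,
                            sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
                            256, sizeof(struct rte_pktmbuf_pool_private),
                            rte_socket_id(), 0);
            if (mp == NULL)
                    return NULL;

            /* Bind the pool to the handler advertised by the driver. */
            if (rte_mempool_set_ops_byname(mp, ops_name, NULL) != 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }
            rte_pktmbuf_pool_init(mp, NULL);
            if (rte_mempool_populate_default(mp) < 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }
            rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
            return mp;
    }

With this, each port gets a pool bound to the handler its PMD prefers, and
ports that report no preference fall back to the software ring handler;
patch 1's --pkt-mempool=<pool-name> option could instead force one handler
globally from the command line.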

If the system has more than one heterogeneous ethernet device, each with a different mempool requirement, the application has to create a different mempool for each ethernet device.

However, let's take a case:
The system has a DPAA2 eth device, which only works with dpaa2 mempools.
The system also detects a standard PCI NIC, which can work with any software mempool (e.g. ring_mp_mc) or with the dpaa2 mempool; given the choice, the PCI NIC prefers the software mempool. How will the application choose between these if it wants to create only one mempool? Or, how will the scheme work if the application wants to create only one mempool?



The idea is good. It will help the vendors with HW mempool support.

On a similar line, I also submitted a patch to check the existence of a
mempool instance.
http://dpdk.org/dev/patchwork/patch/15877/
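For context, here is a minimal sketch (not the linked patch itself) of how an
application can already probe whether a mempool handler with a given name is
registered, by scanning the public rte_mempool_ops_table:

    #include <string.h>
    #include <rte_mempool.h>

    /* Return 1 if a mempool ops (handler) with this name is registered. */
    static int
    mempool_ops_is_registered(const char *name)
    {
            uint32_t i;

            for (i = 0; i < rte_mempool_ops_table.num_ops; i++)
                    if (strcmp(rte_mempool_ops_table.ops[i].name, name) == 0)
                            return 1;
            return 0;
    }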

Option 1) requires manual knowledge of the underlying NIC and different commands
for different machines.

Option 2) will help more, as it allows the application to take the decision
autonomously.

In addition to this, we can also extend the overall MEMPOOL_OPS support:
 3)  Currently we support defining only one "RTE_MBUF_DEFAULT_MEMPOOL_OPS".
     This could be extended to publish a priority list of MEMPOOL_OPS in the
     config; if one is not available, the application can try the next one in
     the priority list, as supported by the platform (a hedged sketch follows
     below).
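A hedged sketch of idea 3): walk a priority list and pick the first handler the
platform actually provides. The list contents are illustrative only, and
mempool_ops_is_registered() is the probe sketched earlier in this thread, not
an existing API:

    #include <rte_common.h>  /* RTE_DIM */

    static const char *
    pick_mempool_ops(void)
    {
            /* Hypothetical priority order, as it might be published in config. */
            static const char * const prio[] = {
                    "octeontx_fpavf", "dpaa2", "ring_mp_mc"
            };
            unsigned int i;

            for (i = 0; i < RTE_DIM(prio); i++)
                    if (mempool_ops_is_registered(prio[i]))
                            return prio[i];
            /* Fall back to the single build-time default
             * (RTE_MBUF_DEFAULT_MEMPOOL_OPS comes from the build config). */
            return RTE_MBUF_DEFAULT_MEMPOOL_OPS;
    }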

4) We can also try something where existing applications can be supported as
well:
        - The default mempool is configured as an alias, initially with empty
          ops.
        - Based on the mempool detection on the bus, the bus configures the
          mempool ops internally with the actual ones.

What if both HW are on the PCIe bus (that is the case for us)? Any scheme to fix
that?

Nothing without a hackish approach. In our case, there is only one mempool type on one type of platform-specific bus.





Santosh Shukla (2):
  eal: Introducing option to set mempool handle
  ether/ethdev: Allow pmd to advertise preferred pool capability

 lib/librte_eal/bsdapp/eal/eal.c                 |  9 +++++++
 lib/librte_eal/bsdapp/eal/rte_eal_version.map   |  7 +++++
 lib/librte_eal/common/eal_common_options.c      |  3 +++
 lib/librte_eal/common/eal_internal_cfg.h        |  2 ++
 lib/librte_eal/common/eal_options.h             |  2 ++
 lib/librte_eal/common/include/rte_eal.h         |  9 +++++++
 lib/librte_eal/linuxapp/eal/eal.c               | 36 +++++++++++++++++++++++++
 lib/librte_eal/linuxapp/eal/rte_eal_version.map |  7 +++++
 lib/librte_ether/rte_ethdev.c                   | 16 +++++++++++
 lib/librte_ether/rte_ethdev.h                   | 21 +++++++++++++++
 lib/librte_ether/rte_ether_version.map          |  7 +++++
 lib/librte_mbuf/rte_mbuf.c                      |  8 ++++--
 12 files changed, 125 insertions(+), 2 deletions(-)





