On Thursday 07 September 2017 03:36 PM, santosh wrote:
> Hi Hemant,
>
>
> On Thursday 07 September 2017 02:51 PM, Hemant Agrawal wrote:
>> On 9/4/2017 6:44 PM, santosh wrote:
>>> Hi Olivier,
>>>
>>>
>>> On Monday 04 September 2017 05:41 PM, Olivier MATZ wrote:
>>>> Hi Santosh,
>>>>
>>>> On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote:
>>>>> Now that DPDK supports more than one mempool driver, and each
>>>>> mempool driver works best with a specific PMD, for example:
>>>>> - SW ring-based mempool for Intel PMD drivers.
>>>>> - dpaa2 HW mempool manager for dpaa2 PMD driver.
>>>>> - fpa HW mempool manager for Octeontx PMD driver.
>>>>>
>>>>> An application would like to know the `preferred mempool vs PMD
>>>>> driver` mapping in advance, before port setup.
>>>>>
>>>>> Introducing the rte_eth_dev_get_preferred_pool_ops() API,
>>>>> which allows a PMD driver to advertise its pool capability to the
>>>>> application.
>>>>>
>>>>> The application-side programming sequence would be (a fuller
>>>>> sketch follows below):
>>>>>
>>>>> char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
>>>>> rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /*out*/);
>>>>> rte_mempool_create_empty();
>>>>> rte_mempool_set_ops_byname( , pref_mempool, );
>>>>> rte_mempool_populate_default();
>>>>>
>>>>> Signed-off-by: Santosh Shukla <santosh.shu...@caviumnetworks.com>
>>>>> ---
>>>>> v2 --> v3:
>>>>> - Updated version.map entry to DPDK_v17.11.
>>>>>
>>>>> v1 --> v2:
>>>>> - Renamed _get_preferred_pool to _get_preferred_pool_ops().
>>>>> Per v1 review feedback, Olivier suggested renaming the API to
>>>>> rte_eth_dev_pool_ops_supported(), with the idea that its 2nd param
>>>>> would return the pool handle 'priority' for that port.
>>>>> However, per v1 [1], we're opting for approach 1), where the
>>>>> ethdev API returns the _preferred_ pool handle to the application,
>>>>> and it is up to the application to decide on policy - whether to
>>>>> create a pool with the received preferred pool handle or not. For
>>>>> more discussion details on this topic, refer to [1].
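
For illustration, the sequence above could be fleshed out along these
lines; the pool name ("port_pool"), element count, and cache size are
placeholders, and the API is the one proposed in this patch:

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Minimal sketch of the proposed sequence; "port_pool" and the sizing
 * values below are illustrative placeholders. */
static struct rte_mempool *
create_pool_for_port(uint8_t port_id)
{
    char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
    struct rte_mempool *mp;

    /* Ask the PMD which mempool ops it prefers for this port. */
    if (rte_eth_dev_get_preferred_pool_ops(port_id, pref_mempool) < 0)
        return NULL;

    /* Create an empty pool, bind the preferred ops, then populate. */
    mp = rte_mempool_create_empty("port_pool", 8192,
            sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
            256, sizeof(struct rte_pktmbuf_pool_private),
            rte_socket_id(), 0);
    if (mp == NULL)
        return NULL;

    if (rte_mempool_set_ops_byname(mp, pref_mempool, NULL) < 0) {
        rte_mempool_free(mp);
        return NULL;
    }

    rte_pktmbuf_pool_init(mp, NULL);
    if (rte_mempool_populate_default(mp) < 0) {
        rte_mempool_free(mp);
        return NULL;
    }

    /* Run the per-mbuf initializer over every object in the pool. */
    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
    return mp;
}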
>>>> Well, I still think it would be more flexible to have an API like
>>>>  rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)
>>>>
>>>> It supports the easy case (= one preferred mempool) without much pain,
>>>> and provides a more precise manner to describe what is supported or not
>>>> by the driver. Example: "pmd_foo" prefers "mempool_foo" (best perf), but
>>>> also supports "mempool_stack" and "mempool_ring", while "mempool_bar"
>>>> won't work at all.
>>>>
>>>> Having only one preferred pool_ops also prevents smoothly renaming
>>>> a pool (supporting both names for some time) or having 2 names for
>>>> different variants of the same pool_ops (e.g. ring_mp_mc, ring_sp_sc).
>>>>
>>>> But if the users (I guess at least Cavium and NXP) are happy with
>>>> what you propose, I'm fine with it.
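
To make the comparison concrete, a probe loop over such an API might
look like the sketch below. The candidate names and the return
convention (higher non-negative value means better supported, negative
means unsupported) are assumptions drawn from this discussion, not a
settled interface:

#include <rte_common.h>
#include <rte_ethdev.h>

/* Hypothetical use of the rte_eth_dev_pool_ops_supported() idea;
 * assumes a higher non-negative return means better supported and a
 * negative return means unsupported. */
static const char *
pick_pool_ops(uint8_t port_id)
{
    static const char * const candidates[] = {
        "mempool_foo", "mempool_stack", "ring_mp_mc",
    };
    const char *best = NULL;
    int best_prio = -1;
    unsigned int i;

    for (i = 0; i < RTE_DIM(candidates); i++) {
        int prio = rte_eth_dev_pool_ops_supported(port_id, candidates[i]);

        if (prio > best_prio) {
            best_prio = prio;
            best = candidates[i];
        }
    }
    return best; /* NULL when nothing is supported */
}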
>>> The preferred-handle approach is based on a real-world use-case; the
>>> same point was raised at [1].
>>>
>>> Hi Hemant, Are you ok with proposed preferred API?
>>>
>>> [1] http://dpdk.org/dev/patchwork/patch/24944/
>>>
>> The current patch is ok, but it would be better if you can extend it to
>> provide a list of preferred pools (in preference order) instead of just
>> one pool. This will be helpful. I would avoid providing a list of
>> not-supported pools, etc.
>>
>> A driver can have more than one preferred pool; depending on resource
>> availability, one or the other can be used. I am also proposing this in
>> my proposal [1].
>>
>> [1] http://dpdk.org/dev/patchwork/patch/26377/
>>
> Ok, then sticking with Olivier's API but with a slight change in the
> params, example:
>
> /**
>  * Get the list of supported pools for a port.
>  *
>  * @param port_id [in]
>  *   Port identifier of the device.
>  * @param pools [out]
>  *   Pointer to the list of supported pools for that port.
>  *   Returns an array of pool ops names, each of size
>  *   RTE_MEMPOOL_OPS_NAMESIZE.
>  * @return
>  *   >=0: Success; PMD has updated the supported pool list.
>  *   <0: Failure.
>  */
>
> int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools)
>
> Hemant, Olivier: Does the above API make sense? Please confirm. Thanks.
Sorry for the formatting; resending the proposed API:

/**
 * Get the list of supported pools for a port.
 *
 * @param port_id [in]
 *   Port identifier of the device.
 * @param pools [out]
 *   Pointer to the list of supported pools for that port.
 *   Returns an array of pool ops names, each of size
 *   RTE_MEMPOOL_OPS_NAMESIZE.
 * @return
 *   >=0: Success; PMD has updated the supported pool list.
 *   <0: Failure.
 */

int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools);

Hemant, Olivier: Does the above API make sense? Please confirm. Thanks.
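
To show the intended calling convention, a rough consumer-side sketch
follows; the maximum list length and the assumption that the return
value is the number of entries filled are not fixed by the proposal:

#include <stdio.h>

#include <rte_ethdev.h>
#include <rte_mempool.h>

#define MAX_POOL_OPS 8 /* assumed upper bound; the proposal leaves this open */

/* Rough consumer sketch; assumes the PMD fills caller-provided buffers
 * in preference order and returns the number of entries filled. */
static void
print_supported_pools(uint8_t port_id)
{
    char names[MAX_POOL_OPS][RTE_MEMPOOL_OPS_NAMESIZE];
    char *pools[MAX_POOL_OPS];
    int i, n;

    for (i = 0; i < MAX_POOL_OPS; i++)
        pools[i] = names[i];

    n = rte_eth_dev_pools_ops_supported(port_id, pools);
    if (n < 0) {
        printf("port %d: pool ops query failed\n", port_id);
        return;
    }
    for (i = 0; i < n && i < MAX_POOL_OPS; i++)
        printf("port %d: pool ops[%d]: %s\n", port_id, i, pools[i]);
}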

>>
>>>>> --- a/lib/librte_ether/rte_ethdev.c
>>>>> +++ b/lib/librte_ether/rte_ethdev.c
>>>>> @@ -3409,3 +3409,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id,
>>>>>
>>>>>      return 0;
>>>>>  }
>>>>> +
>>>>> +int
>>>>> +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool)
>>>>> +{
>>>>> +    struct rte_eth_dev *dev;
>>>>> +    const char *tmp;
>>>>> +
>>>>> +    RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>>>> +
>>>>> +    dev = &rte_eth_devices[port_id];
>>>>> +
>>>>> +    if (*dev->dev_ops->get_preferred_pool_ops == NULL) {
>>>>> +        tmp = rte_eal_mbuf_default_mempool_ops();
>>>>> +        snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp);
>>>>> +        return 0;
>>>>> +    }
>>>>> +    return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool);
>>>>> +}
>>>> I think adding the length of the pool buffer to the function arguments
>>>> would be better: only documenting that the length is
>>>> RTE_MBUF_POOL_OPS_NAMESIZE looks a bit weak to me, because if one day it
>>>> changes to another value, the users of the function may not notice it
>>>> (no ABI/API change).
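
A sketch of that variant, mirroring the hunk above (hypothetical, not
part of the posted patch; the dev_ops callback would need the same
length parameter, which is elided here):

/* Hypothetical length-aware variant of the hunk above; not part of the
 * posted patch. The dev_ops callback would need the same length
 * parameter, which is elided here. */
int
rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool, size_t len)
{
    struct rte_eth_dev *dev;
    const char *tmp;

    RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);

    dev = &rte_eth_devices[port_id];

    if (*dev->dev_ops->get_preferred_pool_ops == NULL) {
        tmp = rte_eal_mbuf_default_mempool_ops();
        if (snprintf(pool, len, "%s", tmp) >= (int)len)
            return -ENAMETOOLONG; /* caller's buffer too small */
        return 0;
    }
    return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool);
}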
>>>>
>>>>
>>>> One more comment: it would be helpful to have one user of this API in
>>>> the example apps or testpmd.
>>> Yes. I will add in v3. Thanks.
>>>
>>>> Olivier
>>>
