> -----Original Message-----
> From: Naga Harish K, S V <s.v.naga.haris...@intel.com>
> Sent: Wednesday, January 29, 2025 10:35 AM
> To: Shijith Thotton <sthot...@marvell.com>; dev@dpdk.org
> Cc: Pavan Nikhilesh Bhagavatula <pbhagavat...@marvell.com>; Pathak, Pravin
> <pravin.pat...@intel.com>; Hemant Agrawal <hemant.agra...@nxp.com>;
> Sachin Saxena <sachin.sax...@nxp.com>; Mattias Rönnblom
> <mattias.ronnb...@ericsson.com>; Jerin Jacob <jer...@marvell.com>; Liang
> Ma <lian...@liangbit.com>; Mccarthy, Peter <peter.mccar...@intel.com>;
> Van Haaren, Harry <harry.van.haa...@intel.com>; Carrillo, Erik G
> <erik.g.carri...@intel.com>; Gujjar, Abhinandan S
> <abhinandan.guj...@intel.com>; Amit Prakash Shukla
> <amitpraka...@marvell.com>; Burakov, Anatoly <anatoly.bura...@intel.com>
> Subject: [EXTERNAL] RE: [RFC PATCH] eventdev: adapter API to configure
> multiple Rx queues
> > >
> > >This requires a change to the rte_event_eth_rx_adapter_queue_add()
> > >stable API parameters.
> > >This is an ABI breakage and may not be possible now.
> > >It requires changes to many current applications that are using the
> > >rte_event_eth_rx_adapter_queue_add() stable API.
> > >
> >
> > What I meant by mapping was to retain the stable API parameters as they are.
> > Internally, the API can use the proposed eventdev PMD operation
> > (eth_rx_adapter_queues_add) without causing an ABI break, as shown below.
> >
> > int rte_event_eth_rx_adapter_queue_add(uint8_t id, uint16_t eth_dev_id,
> >                 int32_t rx_queue_id,
> >                 const struct rte_event_eth_rx_adapter_queue_conf *conf)
> > {
> >         /* rx_queue_id == -1 means "all Rx queues of the port" */
> >         if (rx_queue_id == -1)
> >                 return dev->dev_ops->eth_rx_adapter_queues_add(
> >                         dev, &rte_eth_devices[eth_dev_id], NULL,
> >                         conf, 0);
> >
> >         return dev->dev_ops->eth_rx_adapter_queues_add(
> >                 dev, &rte_eth_devices[eth_dev_id], &rx_queue_id,
> >                 conf, 1);
> > }
> >
> > With the above change, the old op (eth_rx_adapter_queue_add) can be removed,
> > as both APIs (stable and proposed) will be using eth_rx_adapter_queues_add.


Since this thread is not converging, and that appears to be due to confusion,
I am trying to summarize my understanding and define the next steps (if
needed, we can take this to the tech board in case there is no consensus).


Problem statement:
==================
1) Implementations of rte_event_eth_rx_adapter_queue_add() in HW typically use
an administrative function to enable the queue. Typically, this translates to
sending a mailbox message to the PF driver, etc.
So, this function takes "time" to complete in HW implementations.
2) For SW implementations, this won't take time, as there are no other actors
involved.
3) There are customer use cases that issue 300+
rte_event_eth_rx_adapter_queue_add() calls at application bootup, which
introduces significant boot time for the application.
The number of queues is a function of the number of ethdev ports, the number
of ethdev Rx queues per port, and the number of event queues (for instance,
4 ethdev ports with 75 Rx queues each already means 300 calls).


Expected outcome of problem statement:
======================================
1) In cases where the application knows the queue mapping up front (typically
at boot time), it can call a burst variant of
rte_event_eth_rx_adapter_queue_add() to amortize the cost (see the usage
sketch after this list). DPDK uses a similar scheme in latency-critical
control-path APIs, e.g. rte_acl_add_rules() or rte_flow via the template
scheme.
2) The solution should not break ABI or have any impact on SW drivers.
3) Avoid duplicating code as much as possible.
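
To illustrate point 1, below is a minimal application-side sketch, assuming
the burst variant takes an array of Rx queue IDs and a parallel array of
queue configurations (the exact rte_event_eth_rx_adapter_queues_add()
signature is still what this thread is discussing):

#include <string.h>
#include <rte_eventdev.h>
#include <rte_event_eth_rx_adapter.h>

#define NB_RXQ 300 /* e.g. total number of Rx queues known at boot time */

static int
add_all_rx_queues(uint8_t adapter_id, uint16_t eth_dev_id)
{
        static int32_t rx_queue_ids[NB_RXQ];
        static struct rte_event_eth_rx_adapter_queue_conf conf[NB_RXQ];

        for (uint16_t i = 0; i < NB_RXQ; i++) {
                rx_queue_ids[i] = i;
                memset(&conf[i], 0, sizeof(conf[i]));
                conf[i].ev.queue_id = 0; /* target event queue */
                conf[i].ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
        }

        /*
         * One call instead of NB_RXQ calls; a HW PMD can batch the
         * administrative work (e.g. a single mailbox transaction).
         * NOTE: this is the API proposed in this thread, not a released one.
         */
        return rte_event_eth_rx_adapter_queues_add(adapter_id, eth_dev_id,
                                                   rx_queue_ids, conf, NB_RXQ);
}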


Proposed solution:
==================
1) Update the eventdev_eth_rx_adapter_queue_add_t() PMD (internal ABI) API to
take burst parameters (see the sketch after this list).
2) Add a new rte_event_eth_rx_adapter_queue*s*_add() function and wire it to
the updated PMD API.
3) Implement rte_event_eth_rx_adapter_queue_add() as
rte_event_eth_rx_adapter_queue*s*_add(...., 1).
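
For concreteness, the updated internal op from step 1 might look like the
following; the parameter list is my assumption based on the snippet quoted
above, not a settled definition:

/*
 * Burst-capable PMD op replacing eventdev_eth_rx_adapter_queue_add_t().
 * rx_queue_id == NULL with nb_rx_queues == 0 means "all Rx queues of
 * the port".
 */
typedef int (*eventdev_eth_rx_adapter_queues_add_t)(
                const struct rte_eventdev *dev,
                const struct rte_eth_dev *eth_dev,
                int32_t rx_queue_id[],
                const struct rte_event_eth_rx_adapter_queue_conf queue_conf[],
                uint16_t nb_rx_queues);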

If so, I am not sure what the cons of this approach are; it allows
applications to be optimized when:
a) The application knows the queue mapping up front (typically at boot time)
b) HW drivers can optimize without breaking anything for SW drivers
c) Applications can decide between burst and non-burst based on their needs
and performance requirements
