>-----Original Message-----
>From: Nipun Gupta <nipun.gu...@nxp.com>
>Sent: Monday, September 30, 2019 1:17 PM
>To: Jerin Jacob <jerinjac...@gmail.com>
>Cc: Pavan Nikhilesh Bhagavatula <pbhagavat...@marvell.com>; Jerin Jacob Kollanukkaran <jer...@marvell.com>; bruce.richard...@intel.com; Akhil Goyal <akhil.go...@nxp.com>; Marko Kovacevic <marko.kovace...@intel.com>; Ori Kam <or...@mellanox.com>; Radu Nicolau <radu.nico...@intel.com>; Tomasz Kantecki <tomasz.kante...@intel.com>; Sunil Kumar Kori <sk...@marvell.com>; dev@dpdk.org
>Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
>
>
>
>> -----Original Message-----
>> From: Jerin Jacob <jerinjac...@gmail.com>
>> Sent: Monday, September 30, 2019 12:08 PM
>> To: Nipun Gupta <nipun.gu...@nxp.com>
>> Cc: Pavan Nikhilesh Bhagavatula <pbhagavat...@marvell.com>; Jerin Jacob Kollanukkaran <jer...@marvell.com>; bruce.richard...@intel.com; Akhil Goyal <akhil.go...@nxp.com>; Marko Kovacevic <marko.kovace...@intel.com>; Ori Kam <or...@mellanox.com>; Radu Nicolau <radu.nico...@intel.com>; Tomasz Kantecki <tomasz.kante...@intel.com>; Sunil Kumar Kori <sk...@marvell.com>; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
>>
>> On Mon, Sep 30, 2019 at 11:08 AM Nipun Gupta <nipun.gu...@nxp.com> wrote:
>> >
>> >
>> >
>> > > -----Original Message-----
>> > > From: Pavan Nikhilesh Bhagavatula <pbhagavat...@marvell.com>
>> > > Sent: Friday, September 27, 2019 8:05 PM
>> > > To: Nipun Gupta <nipun.gu...@nxp.com>; Jerin Jacob Kollanukkaran <jer...@marvell.com>; bruce.richard...@intel.com; Akhil Goyal <akhil.go...@nxp.com>; Marko Kovacevic <marko.kovace...@intel.com>; Ori Kam <or...@mellanox.com>; Radu Nicolau <radu.nico...@intel.com>; Tomasz Kantecki <tomasz.kante...@intel.com>; Sunil Kumar Kori <sk...@marvell.com>
>> > > Cc: dev@dpdk.org
>> > > Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
>> > >
>> > > >>
>> > > >> From: Pavan Nikhilesh <pbhagavat...@marvell.com>
>> > > >>
>> > > >> Add event dev main loop based on enabled l2fwd options and eventdev capabilities.
>> > > >>
>> > > >> Signed-off-by: Pavan Nikhilesh <pbhagavat...@marvell.com>
>> > > >> ---
>> > > >
>> > > ><snip>
>> > > >
>> > > >> +          if (flags & L2FWD_EVENT_TX_DIRECT) {
>> > > >> +                  rte_event_eth_tx_adapter_txq_set(mbuf, 0);
>> > > >> +                  while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
>> > > >> +                                                           port_id,
>> > > >> +                                                           &ev, 1) &&
>> > > >> +                         !*done)
>> > > >> +                          ;
>> > > >> +          }
>> > > >
>> > > >In the TX direct mode we can send packets directly to the ethernet
>> > > >device using the ethdev APIs. This avoids unnecessary indirection
>> > > >and event unwrapping within the driver.
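
For illustration, the direct-ethdev variant suggested above could look
roughly like the sketch below. The helper name is made up; it assumes, as
in the patch, that mbuf->port already holds the destination port, that Tx
queue 0 is the only queue in use, and that no other lcore transmits on it:

#include <stdbool.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Direct Tx without the Tx adapter: retry until the mbuf is accepted
 * or the worker is asked to stop. */
static inline void
l2fwd_tx_direct_ethdev(struct rte_mbuf *mbuf, volatile bool *done)
{
	while (!rte_eth_tx_burst(mbuf->port, 0, &mbuf, 1) && !*done)
		;
}
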
>> > >
>> > > How would we guarantee atomicity of access to Tx queues between
>> > > cores, given that we can only use one Tx queue?
>> > > Also, if SCHED_TYPE is ORDERED, how would we guarantee flow ordering?
>> > > The MT_LOCKFREE capability and flow ordering are abstracted through
>> > > `rte_event_eth_tx_adapter_enqueue()`.
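
As a rough sketch of how an application can keep that abstraction and
still detect the "internal port" case at runtime (event_d_id and
eth_port_id are assumed to come from the application's setup code):

#include <stdbool.h>
#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

/* Returns true when the event device can transmit to this ethdev on its
 * own (internal port); otherwise the Tx adapter falls back to a service
 * core and a single-link queue. */
static bool
tx_internal_port_supported(uint8_t event_d_id, uint16_t eth_port_id)
{
	uint32_t caps = 0;

	if (rte_event_eth_tx_adapter_caps_get(event_d_id, eth_port_id, &caps))
		return false;

	return !!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
}
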
>> >
>> > I understand your objective here. Probably in your case DIRECT is
>> > equivalent to giving the packet to the scheduler, which will pass the
>> > packet on to the destined device.
>> > On the NXP platform, DIRECT implies sending the packet directly to the
>> > device (eth/crypto), and the scheduler pitches in internally.
>> > Here we will need another option to send it directly to the device.
>> > We can set up a call to discuss this, or send you a patch so that it
>> > can be incorporated into your series.
>>
>> Yes, sending the patch will help us understand better.
>>
>> Currently, we have two different means of abstracting Tx adapter
>> fast-path changes:
>> a) SINGLE LINK QUEUE
>> b) rte_event_eth_tx_adapter_enqueue()
>>
>> Could you please share why neither of the above schemes works for NXP
>> HW?
>> If there is no additional functionality needed in
>> rte_event_eth_tx_adapter_enqueue(), you could simply call the ethdev
>> Tx burst function pointer directly inside it, keeping the abstraction
>> intact and avoiding one more code path in the fast path.
>>
>> If I guess right, since NXP HW supports MT_LOCKFREE and only atomic
>> scheduling, calling rte_eth_tx_burst() directly would be sufficient.
>> But abstracting it behind rte_event_eth_tx_adapter_enqueue() makes the
>> application's life easy. You can call the low-level DPAA2 Tx function
>> inside rte_event_eth_tx_adapter_enqueue() to avoid any performance
>> impact (we are doing the same).
>
>Yes, that's correct regarding our H/W capability.
>Agreed that the application will become more complex with the added
>code path, but calling the Tx functions internally may cost additional
>CPU cycles.
>Give us a couple of days to analyze the performance impact; as you also
>say, I don't think it would be much. We should be able to manage it
>within our driver.

When the application calls rte_event_eth_tx_adapter_queue_add(), the
underlying event device can, based on the eth_dev_id, set
rte_event_eth_tx_adapter_enqueue() to directly call a function which does
the platform-specific Tx.

i.e. if the eth dev is net/dpaa and the event dev is event/dpaa, we need
_not_ call `rte_eth_tx_burst()` in `rte_event_eth_tx_adapter_enqueue()`;
it can directly invoke the platform-specific Tx function, which avoids the
function pointer indirection.
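
Roughly, the idea is something like the sketch below. It is purely
illustrative: the function and helper names are made up rather than the
real driver symbols, the PMD callback signatures may differ, and the way
the enqueue hook is exposed (shown here as dev->txa_enqueue) should be
checked against the eventdev headers of the target DPDK version.

#include <string.h>
#include <stdbool.h>
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_eventdev.h>

/* Illustrative fast-path enqueue installed when the paired ethdev is the
 * sibling net/dpaa PMD: hand the events' mbufs straight to the platform
 * Tx path instead of going through rte_eth_tx_burst(). */
static uint16_t
dpaa_direct_txa_enqueue(void *port, struct rte_event ev[], uint16_t nb_events)
{
	/* ... platform-specific Tx of ev[0..nb_events-1] goes here ... */
	RTE_SET_USED(port);
	RTE_SET_USED(ev);
	return nb_events;
}

/* Hypothetical check: is this ethdev backed by the sibling net/dpaa PMD? */
static bool
is_sibling_dpaa_port(const struct rte_eth_dev *eth_dev)
{
	return strcmp(eth_dev->device->driver->name, "net_dpaa") == 0;
}

/* Called from the eventdev PMD when a Tx adapter queue is added: if the
 * ethdev belongs to the same platform, short-circuit the adapter enqueue
 * so rte_event_eth_tx_adapter_enqueue() resolves to the direct Tx path. */
static void
dpaa_eventdev_select_txa_enqueue(struct rte_eventdev *dev,
				 const struct rte_eth_dev *eth_dev)
{
	if (is_sibling_dpaa_port(eth_dev))
		dev->txa_enqueue = dpaa_direct_txa_enqueue;
}
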

>
>>
>>
>> >
>> > >
>> > > @see examples/eventdev_pipeline and app/test-eventdev/test_pipeline_*.
>> >
>> > Yes, we are aware of those. They are one way of showing how to build
>> > a complete eventdev pipeline, but they don't work on NXP HW.
>> > We plan to send patches to fix them for NXP HW soon.
>> >
>> > Regards,
>> > Nipun
>> >
>> > >
>> > > >
>> > > >> +
>> > > >> +          if (timer_period > 0)
>> > > >> +                  __atomic_fetch_add(&eventdev_rsrc->stats[mbuf->port].tx,
>> > > >> +                                     1, __ATOMIC_RELAXED);
>> > > >> +  }
>> > > >> +}
