-----Original Message-----
> Date: Wed, 11 Jul 2018 02:02:32 +0000
> From: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> To: Jerin Jacob <jerin.ja...@caviumnetworks.com>
> CC: "dev@dpdk.org" <dev@dpdk.org>, Gavin Hu <gavin...@arm.com>, nd
>  <n...@arm.com>, Hemant Agrawal <hemant.agra...@nxp.com>
> Subject: RE: [dpdk-dev] [RFC] queue: introduce queue APIs and driver
>  framework
> 
> -----Original Message-----
> From: Jerin Jacob <jerin.ja...@caviumnetworks.com>
> Sent: Wednesday, June 27, 2018 11:20 AM
> To: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> Cc: dev@dpdk.org; Gavin Hu <gavin...@arm.com>; nd <n...@arm.com>
> Subject: Re: [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework
> 
> -----Original Message-----
> > Date: Wed, 27 Jun 2018 11:06:13 -0500
> > From: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> > To: dev@dpdk.org
> > CC: honnappa.nagaraha...@arm.com, gavin...@arm.com, n...@arm.com
> > Subject: [dpdk-dev] [RFC] queue: introduce queue APIs and driver
> > framework
> > X-Mailer: git-send-email 2.7.4
> >
> >
> > DPDK offers a pipeline model of packet processing. One of the key
> > components of this model is core-to-core packet exchange. The
> > rte_ring and rte_event_ring functions are the two methods currently
> > provided for core-to-core communication. However, these two do not
> > separate the APIs from the implementation, which prevents hardware
> > queue implementations from being used in the pipeline model.
> > This change adds queue APIs and a driver framework so that HW queues
> > can be used for core-to-core communication in the pipeline model.
> > When different implementations (e.g., HW queues and rte_ring) are used
> 
> Just to understand: do you have any HW in mind that can do generic
> multi-producer/multi-consumer queue operations for core-to-core
> communication as a HW offload?
> 
>> It is my understanding that NXP SoCs provide this capability (Hemant,
>> please correct me if I am wrong).
>> The offload does not need to be a queue; it can be some other mechanism
>> (for example, enqueue/dequeue via the scheduler) as long as it performs
>> better than the rte_ring implementation.
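
For reference, the baseline being compared against here is the plain rte_ring
path between two cores. A minimal sketch using the public rte_ring burst API
(ring creation, burst size and the drop-on-full policy are illustrative
choices, not taken from the RFC):

#include <rte_ring.h>
#include <rte_mbuf.h>

#define BURST 32

/* Producer core: hand a burst of mbufs to the next pipeline stage.
 * The ring itself would be created once at init time, e.g. with
 * rte_ring_create("a_to_b", 1024, rte_socket_id(), 0). */
static void
stage_a_tx(struct rte_ring *r, struct rte_mbuf **pkts, uint16_t n)
{
        unsigned int sent;

        /* Multi-producer safe by default; returns how many objects fit. */
        sent = rte_ring_enqueue_burst(r, (void **)pkts, n, NULL);

        /* Drop whatever did not fit (illustrative policy only). */
        while (sent < n)
                rte_pktmbuf_free(pkts[sent++]);
}

/* Consumer core: pull a burst of mbufs from the previous stage. */
static uint16_t
stage_b_rx(struct rte_ring *r, struct rte_mbuf **pkts)
{
        return rte_ring_dequeue_burst(r, (void **)pkts, BURST, NULL);
}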

eventdev already abstracts CPU-to-CPU communication for HW offloads.
If NXP's HW falls under scheduler offload, then it is already abstracted
by eventdev.
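
To make that concrete: core-to-core exchange over eventdev is just an enqueue
on one event port and a dequeue on another, and whichever scheduler backs the
device (SW eventdev, or HW such as NXP's) stays hidden behind the same two
calls. A minimal sketch (device/port/queue IDs and the ATOMIC scheduling type
are assumptions for illustration):

#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* Core A: push one mbuf into event queue 0 through its event port. */
static void
core_a_send(uint8_t dev_id, uint8_t port_id, struct rte_mbuf *m)
{
        struct rte_event ev = {
                .queue_id   = 0,
                .op         = RTE_EVENT_OP_NEW,
                .sched_type = RTE_SCHED_TYPE_ATOMIC,
                .event_type = RTE_EVENT_TYPE_CPU,
                .mbuf       = m,
        };

        /* Retry until accepted; the backing scheduler is invisible
         * to the application at this point. */
        while (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) != 1)
                ;
}

/* Core B: drain events from its event port linked to queue 0. */
static uint16_t
core_b_recv(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
            uint16_t n)
{
        return rte_event_dequeue_burst(dev_id, port_id, ev, n, 0);
}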


> 
> > for the same object on different platforms, it is important to make
> > sure that the application remains portable. Hence, features of the
> > different implementations must be elevated to the API level so that
> > application writers can make the right choice.
> > Currently, only basic APIs are created; more required APIs will be
> > added as this progresses.
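
One way to read "elevating features to the API level" is a capability query
the application can use at init time to find out what the chosen
implementation guarantees. The names below are purely hypothetical and are
not taken from the RFC's rte_queue.h; they only sketch the shape such an API
could take:

#include <stdint.h>

/* Hypothetical capability bits -- illustrative only, not from the RFC. */
#define QUEUE_CAPA_F_MP_ENQ  (1u << 0) /* safe multi-producer enqueue */
#define QUEUE_CAPA_F_MC_DEQ  (1u << 1) /* safe multi-consumer dequeue */
#define QUEUE_CAPA_F_ORDERED (1u << 2) /* dequeue preserves enqueue order */
#define QUEUE_CAPA_F_HW      (1u << 3) /* backed by a hardware queue */

struct queue_handle;    /* opaque handle owned by the driver framework */

/* Hypothetical query: the application checks the bits it relies on and
 * adapts, or fails fast on an unsuitable platform. */
uint32_t queue_get_capa(const struct queue_handle *q);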
> >
> > Honnappa Nagarahalli (1):
> >   queue: introduce queue APIs and driver framework
> >
> >  lib/librte_queue/rte_queue.c        | 122 ++++++++++++++++++++++
> >  lib/librte_queue/rte_queue.h        | 200 ++++++++++++++++++++++++++++++++++++
> >  lib/librte_queue/rte_queue_driver.h | 157 ++++++++++++++++++++++++++++
> >  3 files changed, 479 insertions(+)
> >  create mode 100644 lib/librte_queue/rte_queue.c
> >  create mode 100644 lib/librte_queue/rte_queue.h
> >  create mode 100644 lib/librte_queue/rte_queue_driver.h
> >
> > --
> > 2.7.4
> >
