On Tue, Aug 16, 2022 at 10:04 PM Honnappa Nagarahalli
<honnappa.nagaraha...@arm.com> wrote:
>
> <snip>
>
> > > From: Jerin Jacob [mailto:jerinjac...@gmail.com]
> > > Sent: Tuesday, 16 August 2022 15.13
> > >
> > > On Wed, Aug 3, 2022 at 8:49 PM Stephen Hemminger
> > > <step...@networkplumber.org> wrote:
> > > >
> > > > On Wed, 3 Aug 2022 18:58:37 +0530
> > > > <jer...@marvell.com> wrote:
> > > >
> > > > > Roadmap
> > > > > -------
> > > > > 1) Address the comments for this RFC.
> > > > > 2) Common code for mldev
> > > > > 3) SW mldev driver based on TVM (https://tvm.apache.org/)
> > > >
> > > > Having a SW implementation is important because then it can be
> > > covered
> > > > by tests.
> > >
> > > Yes. That is the reason for adding a TVM-based SW driver as item (3).
> > >
> > > Are there any other high-level or API-level comments before proceeding
> > > with v1 and the implementation?
> >
> > Have you seriously considered if the DPDK Project is the best home for this
> > project? I can easily imagine the DPDK development process being a hindrance
> > in many aspects for an evolving AI/ML library. Off the top of my head, it 
> > would
> > probably be better off as a separate project, like SPDK.
> There is a lot of talk about using ML in networking workloads, although I am 
> not very sure what the use case looks like. For example: is the inference 
> engine going to be inline (i.e. the packet goes through the inference engine 
> before coming to the CPU and provides some data (what sort of data?)), or 
> look-aside (does it require the packets to be sent to the inference engine, 
> or is it some other data?), and what would an end-to-end use case be? A 
> sample application using these APIs would be helpful.

A simple application showing the inference usage is included in the cover letter.

Regarding the use cases, there are many, such as firewall, intrusion
detection, etc. Most of the use cases are driven by product
requirements, and SW IP vendors tend to keep them to themselves as a
product-differentiating factor.
That is the prime reason for limiting the DPDK scope to inference,
where IO is involved. Model creation, training, etc. vary heavily with
the use case, but the inference model does not.

>
> IMO, if we need to share the packets with the inference engine, then it fits 
> into DPDK.

Yes. For networking or ORAN use cases, the inference data comes over
the wire and the result can go back over the wire.

>
> As I understand, there are many mature open source projects for ML/inference 
> outside of DPDK. Does it make sense for DPDK to adopt those projects rather 
> than inventing our own?

# AI/ML compiler libraries are focused more on model creation,
training, etc. (that is where the AI/ML libraries can offer real value
addition), with only a minimal part for inference (it is just added
for testing the model).
# Considering that inference is the scope for DPDK, DPDK is the ideal
place for the following reasons:

a) The inference scope is very limited.
b) It avoids memcpy of inference data (the data is used directly from the
network or from other classes of device, like cryptodev or regexdev).
c) It reuses high-speed IO interfaces, like PCI-backed drivers, etc.
d) It integrates with other DPDK subsystems, like eventdev, for job completion.
e) It also supports more inline offloads by merging two device classes,
like rte_security does.
f) It can run the inference model from different AI/ML compiler frameworks,
or abstract the inference usage.
A similar concept is already applied in other DPDK device classes:
1) In regexdev, the compiler generates the rule database, which is out
of scope for DPDK. The DPDK API just loads the rule database.
2) In gpudev, the GPU kernel etc. are out of scope for DPDK. DPDK cares
about the IO interface.

>
> >
> > If all this stuff can be completely omitted at build time, I have no 
> > objections.
> >
> > A small note about naming (not intending to start a flame war, so please 
> > feel
> > free to ignore!): I haven't worked seriously with ML/AI since university 
> > three
> > decades ago, so I'm quite rusty in the domain. However, I don't see any
> > Machine Learning functions proposed by this API. The library provides an 
> > API to
> > an Inference Engine - but nobody says the inference model stems from
> > Machine Learning; it might as well be a hand crafted model. Do you plan to
> > propose APIs for training the models? If not, the name of the library could
> > confuse some potential users.
> I think, at least on the edge devices, we need an inference device as ML 
> requires more cycles/power.
>
> >
> > > Or Anyone else interested to review or contribute to this new DPDK
> > > device class?
>
