On Mon, Oct 11, 2021 at 2:42 PM Thomas Monjalon <tho...@monjalon.net> wrote:
>
> 11/10/2021 10:43, Jerin Jacob:
> > On Mon, Oct 11, 2021 at 1:48 PM Thomas Monjalon <tho...@monjalon.net> wrote:
> > > 10/10/2021 12:16, Jerin Jacob:
> > > > On Fri, Oct 8, 2021 at 11:13 PM <eagost...@nvidia.com> wrote:
> > > > >
> > > > > From: eagostini <eagost...@nvidia.com>
> > > > >
> > > > > In heterogeneous computing system, processing is not only in the CPU.
> > > > > Some tasks can be delegated to devices working in parallel.
> > > > >
> > > > > The goal of this new library is to enhance the collaboration between
> > > > > DPDK, that's primarily a CPU framework, and GPU devices.
> > > > >
> > > > > When mixing network activity with task processing on a non-CPU device,
> > > > > the CPU and the device may need to communicate in order to manage
> > > > > memory, synchronize operations, exchange information, etc.
> > > > >
> > > > > This library provides a number of new features:
> > > > > - Interoperability with GPU-specific libraries through generic handlers
> > > > > - The ability to allocate and free memory on the GPU
> > > > > - The ability to allocate and free memory on the CPU that is visible
> > > > >   from the GPU
> > > > > - Communication functions to enhance the dialog between the CPU and
> > > > >   the GPU
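
[Editorial illustration] To make the feature list above concrete, here is a rough sketch of how an application might drive these features. The function names (rte_gpu_count_avail, rte_gpu_mem_alloc, rte_gpu_mem_free) follow the patch series under discussion and may differ in the final API, so treat this as pseudocode rather than a working program:

```
/* Pseudocode sketch; names follow the patch series and may change. */
int16_t dev_id = 0;                     /* first GPU enumerated by gpudev */

if (rte_gpu_count_avail() == 0)
        return -ENODEV;                 /* no GPU device probed */

/* Memory allocated on the GPU, owned and released from the CPU side */
void *gpu_buf = rte_gpu_mem_alloc(dev_id, buf_size);

/* ... hand gpu_buf to the GPU-specific library (e.g. CUDA) for processing,
 * using gpudev only as the generic device/memory handle ... */

rte_gpu_mem_free(dev_id, gpu_buf);
```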
> > > >
> > > > In the RFC thread, there was one outstanding non-technical issue on
> > > > this,
> > > >
> > > > i.e.
> > > > the above features are driver-specific details. Does the DPDK
> > > > _application_ need to be aware of them?
> > >
> > > I don't see these features as driver-specific.
> >
> > That is the disconnect. I see this as more driver-specific details
> > which are not required to implement an "application" facing API.
>
> Indeed this is the disconnect.
> I already answered but it seems you don't accept the answer.

Same with you. That is why I requested that we get opinions from others.
Some have already provided their opinions in the RFC thread.

>
> First, this is not driver-specific. It is a low-level API.

What is the difference between a low-level API and a driver-level API?


>
> > For example, if we need to implement application-facing subsystems like
> > bbdev:
> > if we make all of this a driver interface, you can still implement the
> > bbdev API as a driver without
> > exposing HW-specific details (how devices communicate with the CPU, how
> > memory is allocated, etc.)
> > to the "application".
>
> There are 2 things to understand here.
>
> First we want to allow the application using the GPU for needs which are
> not exposed by any other DPDK API.
>
> Second, if we want to implement another DPDK API like bbdev,
> then the GPU implementation would be exposed as a vdev in bbdev,
> using the HW GPU device being a PCI in gpudev.
> They are two different levels, got it?

Exactly. So what is the point of exposing a low-level driver API to the
"application"? Why is it not part of the internal driver API? My point is:
why does the application need to worry about how the CPU and the device
communicate, CPU <-> device memory visibility, etc.?

>
> > > > i.e. a DPDK device class has a fixed personality, and its API abstracts
> > > > application-facing, end-user functionality like ethdev, cryptodev and
> > > > eventdev, irrespective of the underlying bus/device properties.
> > >
> > > The goal of the lib is to allow anyone to invent any feature
> > > which is not already available in DPDK.
> > >
> > > > Even similar semantics are required for DPU (SmartNIC)
> > > > communication. I am planning to send an RFC in the coming days to
> > > > address the issue without the application knowing the bus/HW/driver
> > > > details.
> > >
> > > gpudev is not exposing bus/hw/driver details.
> > > I don't understand what you mean.
> >
> > See above.