On Tue, Jun 8, 2021 at 12:05 PM Thomas Monjalon <tho...@monjalon.net> wrote:
>
> 08/06/2021 06:10, Jerin Jacob:
> > On Mon, Jun 7, 2021 at 10:17 PM Thomas Monjalon <tho...@monjalon.net> wrote:
> > >
> > > 07/06/2021 15:54, Jerin Jacob:
> > > > On Mon, Jun 7, 2021 at 4:13 PM Thomas Monjalon <tho...@monjalon.net> 
> > > > wrote:
> > > > > 07/06/2021 09:20, Wang, Haiyue:
> > > > > > From: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> > > > > > > If we keep CXL in mind, I would imagine that in the future the
> > > > > > > devices on PCIe could have their own local memory. Maybe some of
> > > > > > > the APIs could use generic names. For example, instead of calling
> > > > > > > it "rte_gpu_malloc", we could call it "rte_dev_malloc". This way,
> > > > > > > any future device which hosts its own memory that needs to be
> > > > > > > managed by the application can use these APIs.
> > > > > > >
> > > > > >
> > > > > > "rte_dev_malloc" sounds like a good name.
> > > > >
> > > > > Yes I like the idea.
> > > > > 2 concerns:
> > > > >
> > > > > 1/ Device memory allocation requires a device handle.
> > > > > So far we avoided exposing rte_device to the application.
> > > > > How should we get a device handle from a DPDK application?
> > > >
> > > > Each device behaves differently at this level. In the view of a
> > > > generic application, the architecture should look like:
> > > >
> > > > < Use DPDK subsystems such as rte_ethdev, rte_bbdev etc. for a SPECIFIC function >
> > > >                     ^
> > > >                     |
> > > >               < DPDK driver >
> > > >                     ^
> > > >                     |
> > > >   < rte_device with these new callbacks >
> > >
> > > I think the formatting went wrong above.
> > >
> > > I would add more to the block diagram:
> > >
> > > class device API    -    computing device API
> > >         |                        |
> > > class device driver - computing device driver
> > >         |                        |
> > >       EAL device with memory callback
> > >
> > > The idea above is that the class device driver can use services
> > > of the new computing device library.
> >
> > Yes. The question is, do we need any public DPDK _application_ APIs for 
> > that?
>
> To have something generic!
>
> > If it is a public API, then the scope is much bigger, as the
> > application can use it directly, and that makes it non-portable.
>
> That is nonsense. If we make an API, it will be more portable.

The portable application will be using the class device API.
For example, when does an application need to call rte_gpu_malloc()
vs rte_malloc()? Is it better to keep the driver-specific functions
used in the "class device driver" unexposed?



> The only part which is non-portable is the program on the device
> which may be different per computing device.
> The synchronization with the DPDK application should be portable
> if we define some good API.
>
> > If the scope is only the class driver consumption, then the
> > existing "bus" _kind of_ abstraction/API makes sense to me.
> >
> > Where it abstracts,
> > -FW download of device
> > -Memory management of device
> > -Opaque way to enq/deque jobs to the device.
> >
> > And above should be consumed by "class driver" not "application".
> >
> > If the application is doing that, we are in rte_rawdev territory.
>
> I'm sorry, I don't understand why you make such an assertion.
> It seems you don't want generic API (which is the purpose of DPDK).

I would like to have a generic _application_ API if the application
_needs_ to use it.

The v1 is nowhere close to any compute device description.

It has a memory allocation API. That is a device attribute, not
strictly tied ONLY to computing devices.

So, at the least, I am asking for a concrete proposal on the
"compute device" schematic rather than starting with a memory API
and rubber-stamping whatever any new device adds in the future.

When we added all the class devices to DPDK, everyone had a complete
view of their function (at RFC, each subsystem had enough API to
express the "basic" usage) and purpose from the _application_ PoV.
I see that is missing here.

