On Fri, Oct 8, 2021 at 11:13 PM <eagost...@nvidia.com> wrote:
>
> From: eagostini <eagost...@nvidia.com>
>
> In a heterogeneous computing system, processing is not done only on the CPU:
> some tasks can be delegated to devices working in parallel.
>
> The goal of this new library is to enhance the collaboration between
> DPDK, which is primarily a CPU framework, and GPU devices.
>
> When mixing network activity with task processing on a non-CPU device,
> the CPU may need to communicate with the device in order to manage
> memory, synchronize operations, exchange information, etc.
>
> This library provides a number of new features:
> - Interoperability with GPU-specific libraries through generic handlers
> - The ability to allocate and free memory on the GPU
> - The ability to allocate and free memory on the CPU that is visible from the GPU
> - Communication functions to enhance the dialog between the CPU and the GPU
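
As a sketch of how an application might use such a library (all rte_gpu_* names below are illustrative assumptions based on the feature list in the cover letter, not a confirmed API), the flow could look like:

```
/* Illustrative sketch only: the rte_gpu_* names are assumptions
 * drawn from the cover letter's feature list, not a confirmed API. */
int16_t gpu_id = 0;

/* Allocate memory on the GPU device */
void *gpu_buf = rte_gpu_mem_alloc(gpu_id, buf_size);

/* Allocate CPU memory that is visible from the GPU,
 * e.g. for a flag the CPU writes and a GPU task polls */
void *cpu_flag = rte_gpu_mem_alloc_visible(gpu_id, sizeof(uint32_t));

/* ... launch GPU task, synchronize via the shared flag ... */

rte_gpu_mem_free(gpu_id, gpu_buf);
```
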

In the RFC thread, there was one outstanding non-technical issue with this:

i.e.
The above features are driver-specific details. Does the DPDK
_application_ need to be aware of them? In other words, each DPDK
device class has a fixed personality, with APIs that abstract
end-user functionality (like ethdev, cryptodev, eventdev)
irrespective of the underlying bus/device properties.

Similar semantics are also required for DPU (Smart NIC)
communication. I am planning to send an RFC in the coming days
that addresses the issue without the application knowing the
bus/HW/driver details.

Irrespective of the RFC I am planning to send, since a new
library needs techboard approval, you may request that the
techboard decide on approval for this library. Also, as far as
I remember, at a minimum a SW driver in addition to the HW
driver is required to accept a new driver class.

Just my 2c to save your cycles.
