On Sun, 25 Aug 2024 04:15:02 +0000 Mina Almasry wrote:
> +void net_devmem_free_dmabuf(struct net_iov *niov)
> +{
> + struct net_devmem_dmabuf_binding *binding = net_iov_binding(niov);
> + unsigned long dma_addr = net_devmem_get_dma_addr(niov);
> +
> + if (gen_pool_has_addr(binding->chun
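
The quoted hunk is cut off above. Purely as a sketch, assuming the binding's
genpool is the chunk_pool field visible in the quote and that allocations are
PAGE_SIZE-sized, the free helper would plausibly complete along these lines
(not necessarily the patch's exact code):

void net_devmem_free_dmabuf(struct net_iov *niov)
{
	struct net_devmem_dmabuf_binding *binding = net_iov_binding(niov);
	unsigned long dma_addr = net_devmem_get_dma_addr(niov);

	/* Return the region to the binding's genpool, but only if the
	 * genpool actually owns this address range.
	 */
	if (gen_pool_has_addr(binding->chunk_pool, dma_addr, PAGE_SIZE))
		gen_pool_free(binding->chunk_pool, dma_addr, PAGE_SIZE);
}
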
Implement netdev devmem allocator. The allocator takes a given struct
net_devmem_dmabuf_binding as input and allocates net_iovs from that
binding.

The allocation simply delegates to the binding's genpool for the
allocation logic and wraps the returned memory region in a net_iov
struct.

Signed-off-by: Mina Almasry <almasrymina@google.com>
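
For illustration only, the allocation path this describes (delegate to the
binding's genpool, then map the returned region back to the net_iov that
wraps it) could look roughly like the sketch below. The function name
net_devmem_alloc_dmabuf, the dmabuf_genpool_chunk_owner bookkeeping struct,
and its base_dma_addr/niovs fields are assumptions made for the sketch, not
details taken from the patch:

struct net_iov *net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
{
	struct dmabuf_genpool_chunk_owner *owner; /* assumed per-chunk bookkeeping */
	unsigned long dma_addr;
	struct net_iov *niov;
	size_t index;

	/* Delegate the allocation itself to the binding's genpool; the owner
	 * cookie tells us which dma-buf chunk the address came from.
	 */
	dma_addr = gen_pool_alloc_owner(binding->chunk_pool, PAGE_SIZE,
					(void **)&owner);
	if (!dma_addr)
		return NULL;

	/* Wrap the returned region: index into the chunk's preallocated
	 * net_iov array (assumed base_dma_addr and niovs fields).
	 */
	index = (dma_addr - owner->base_dma_addr) / PAGE_SIZE;
	niov = &owner->niovs[index];

	return niov;
}

Pairing gen_pool_alloc_owner() with a per-chunk owner cookie avoids needing a
separate lookup structure to translate a genpool address back to its net_iov.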