Adding Dmitry as well.

On 11.11.22 at 12:45, Lukasz Wiecaszek wrote:
The reason behind that patch is associated with the videobuf2
subsystem (or more generally with the v4l2 framework) and
user-created dma buffers (udmabuf). In some circumstances, when
dealing with V4L2_MEMORY_DMABUF buffers, the videobuf2 subsystem
wants to use the dma_buf_vmap() method on the attached dma buffer.
As udmabuf does not implement the .vmap operation, such a
dma_buf_vmap() call naturally fails.

videobuf2_common: [cap-000000003473b2f1] __vb2_queue_alloc: allocated 3 buffers, 1 plane(s) each
videobuf2_common: [cap-000000003473b2f1] __prepare_dmabuf: buffer for plane 0 changed
videobuf2_common: [cap-000000003473b2f1] __prepare_dmabuf: failed to map dmabuf for plane 0
videobuf2_common: [cap-000000003473b2f1] __buf_prepare: buffer preparation failed: -14
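
For context, here is a rough importer-side sketch (simplified and not
taken verbatim from videobuf2; the helper name is made up) of the call
that produces the failure above. Without a .vmap in udmabuf_ops,
dma_buf_vmap() errors out and the importer is left without a kernel
mapping:

#include <linux/dma-buf.h>

static void *example_vmap_imported_buf(struct dma_buf *dbuf)
{
        struct dma_buf_map map;
        int ret;

        /* Fails when the exporter (here: udmabuf) provides no .vmap. */
        ret = dma_buf_vmap(dbuf, &map);
        if (ret)
                return NULL;    /* vb2 then logs "failed to map dmabuf" */

        return map.vaddr;       /* kernel virtual address of the buffer */
}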

The patch itself seems to be straightforward.
It adds an implementation of the .vmap method to 'struct dma_buf_ops udmabuf_ops'.
The .vmap method itself uses vm_map_ram() to map the pages linearly
into the kernel virtual address space (only if such a mapping
hasn't been created yet).

Off hand that sounds sane to me.

You should probably mention somewhere in a code comment that the cached
vaddr is protected by the reservation lock being taken. That's not
necessarily obvious to everybody.
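
As an illustration only (not part of the posted patch), the suggested
comment plus a lockdep check via dma_resv_assert_held() could look
roughly like this:

#include <linux/dma-resv.h>

static int vmap_udmabuf(struct dma_buf *buf, struct dma_buf_map *map)
{
        struct udmabuf *ubuf = buf->priv;

        /* The cached ubuf->vaddr is protected by the reservation lock. */
        dma_resv_assert_held(buf->resv);

        if (!ubuf->vaddr) {
                ubuf->vaddr = vm_map_ram(ubuf->pages, ubuf->pagecount, -1);
                if (!ubuf->vaddr)
                        return -EINVAL;
        }

        dma_buf_map_set_vaddr(map, ubuf->vaddr);

        return 0;
}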

Apart from that looks good to me.

Regards,
Christian.


Signed-off-by: Lukasz Wiecaszek <lukasz.wiecas...@gmail.com>
---
  drivers/dma-buf/udmabuf.c | 18 ++++++++++++++++++
  1 file changed, 18 insertions(+)

diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 2bcdb935a3ac..8649fcbd05c4 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -12,6 +12,7 @@
  #include <linux/slab.h>
  #include <linux/udmabuf.h>
  #include <linux/hugetlb.h>
+#include <linux/vmalloc.h>

  static int list_limit = 1024;
  module_param(list_limit, int, 0644);
@@ -26,6 +27,7 @@ struct udmabuf {
        struct page **pages;
        struct sg_table *sg;
        struct miscdevice *device;
+       void *vaddr;
  };

  static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -57,6 +59,21 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
        return 0;
  }

+static int vmap_udmabuf(struct dma_buf *buf, struct dma_buf_map *map)
+{
+       struct udmabuf *ubuf = buf->priv;
+
+       if (!ubuf->vaddr) {
+               ubuf->vaddr = vm_map_ram(ubuf->pages, ubuf->pagecount, -1);
+               if (!ubuf->vaddr)
+                       return -EINVAL;
+       }
+
+       dma_buf_map_set_vaddr(map, ubuf->vaddr);
+
+       return 0;
+}
+
  static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
                                     enum dma_data_direction direction)
  {
@@ -159,6 +176,7 @@ static const struct dma_buf_ops udmabuf_ops = {
        .unmap_dma_buf     = unmap_udmabuf,
        .release           = release_udmabuf,
        .mmap              = mmap_udmabuf,
+       .vmap              = vmap_udmabuf,
        .begin_cpu_access  = begin_cpu_udmabuf,
        .end_cpu_access    = end_cpu_udmabuf,
  };
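
For reference, a hedged userspace sketch (not part of the patch; the
helper name and error handling are illustrative) of the use case
described at the top: a memfd-backed udmabuf is created via
UDMABUF_CREATE and the resulting dma-buf fd is then queued to V4L2 as
a V4L2_MEMORY_DMABUF buffer.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

/* Returns a dma-buf fd backed by an anonymous memfd, or -1 on error. */
static int create_udmabuf(size_t size)
{
        struct udmabuf_create create = { 0 };
        int memfd, devfd, buffd;

        /* size must be a multiple of the page size. */
        memfd = memfd_create("v4l2-frames", MFD_ALLOW_SEALING);
        if (memfd < 0)
                return -1;
        ftruncate(memfd, size);
        /* udmabuf requires the memfd to be sealed against shrinking. */
        fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

        devfd = open("/dev/udmabuf", O_RDWR);
        create.memfd  = memfd;
        create.offset = 0;
        create.size   = size;
        buffd = ioctl(devfd, UDMABUF_CREATE, &create);

        close(devfd);
        close(memfd);   /* udmabuf keeps its own references to the pages */
        return buffd;   /* pass via v4l2_buffer.m.fd with V4L2_MEMORY_DMABUF */
}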
