> +void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> +                          void (*free)(struct heap_helper_buffer *))

Please use a lower-case name here, following the naming scheme for
the rest of the file.
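Something like this should do it (the lower-case name is just a
suggestion):

static void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
				    void (*free)(struct heap_helper_buffer *));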

> +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
> +{
> +     void *vaddr;
> +
> +     vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
> +     if (!vaddr)
> +             return ERR_PTR(-ENOMEM);
> +
> +     return vaddr;
> +}

Unless I'm misreading the patches, this is used for the same pages
that might also be DMA-mapped.  In that case you need to use
flush_kernel_vmap_range and invalidate_kernel_vmap_range in the right
places to ensure coherency between the vmap and the device view.
Please also document the buffer ownership, as this really can get
complicated.
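Roughly like this in the begin/end CPU access paths (just a sketch;
the vaddr/size fields and the surrounding dma_sync_sg_* calls are
assumptions about how the helpers end up structured):

static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
					     enum dma_data_direction dir)
{
	struct heap_helper_buffer *buffer = dmabuf->priv;

	/* ... dma_sync_sg_for_cpu() on each attachment ... */

	/* toss stale lines in the vmap alias before the CPU reads
	 * data the device just wrote to the pages */
	if (buffer->vaddr)
		invalidate_kernel_vmap_range(buffer->vaddr, buffer->size);
	return 0;
}

static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
					   enum dma_data_direction dir)
{
	struct heap_helper_buffer *buffer = dmabuf->priv;

	/* write back dirty lines in the vmap alias so the device
	 * sees what the CPU wrote through the kernel mapping */
	if (buffer->vaddr)
		flush_kernel_vmap_range(buffer->vaddr, buffer->size);

	/* ... dma_sync_sg_for_device() on each attachment ... */
	return 0;
}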

> +static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf)
> +{
> +     struct vm_area_struct *vma = vmf->vma;
> +     struct heap_helper_buffer *buffer = vma->vm_private_data;
> +
> +     vmf->page = buffer->pages[vmf->pgoff];
> +     get_page(vmf->page);
> +
> +     return 0;
> +}

Is there any exclusion between mmap / vmap and the device accessing
the data?  Without that you are going to run into a lot of coherency
problems.
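At a minimum userspace needs to bracket its CPU access through the
mmap with the existing sync ioctl (from <linux/dma-buf.h>) so the
heap gets a chance to do cache maintenance, e.g.:

	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
	};

	ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
	/* ... CPU reads/writes through the mmap ... */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
	ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);

But that still doesn't serialize against a device that keeps the
buffer mapped and busy, so the ownership rules need to be written
down somewhere.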
