I just included this patch as-is, but here are a few comments:

On Thu, Jan 28, 2021 at 03:58:37PM +0100, Christoph Hellwig wrote:
> +static void uvc_urb_dma_sync(struct uvc_urb *uvc_urb, bool for_device)
> +{
> +     struct device *dma_dev = stream_to_dmadev(uvc_urb->stream);
> +
> +     if (for_device)
> +             dma_sync_sgtable_for_device(dma_dev, uvc_urb->sgt,
> +                                         DMA_FROM_DEVICE);
> +     else
> +             dma_sync_sgtable_for_cpu(dma_dev, uvc_urb->sgt,
> +                                      DMA_FROM_DEVICE);
> +}

Given that we vmap the addresses, this also needs
flush_kernel_vmap_range / invalidate_kernel_vmap_range calls for
VIVT architectures: the vmap'ed alias has its own cache lines, which
the dma_sync_* calls on the underlying pages don't maintain.
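
Something along these lines, as a rough sketch; uvc_urb->buffer and
uvc_urb->stream->urb_size are my guesses for the vmap'ed address and
the per-URB buffer size, and the flush helpers come from
<linux/highmem.h>:

static void uvc_urb_dma_sync(struct uvc_urb *uvc_urb, bool for_device)
{
	struct device *dma_dev = stream_to_dmadev(uvc_urb->stream);

	if (for_device) {
		/* Write back dirty lines in the vmap alias before DMA. */
		flush_kernel_vmap_range(uvc_urb->buffer,
					uvc_urb->stream->urb_size);
		dma_sync_sgtable_for_device(dma_dev, uvc_urb->sgt,
					    DMA_FROM_DEVICE);
	} else {
		dma_sync_sgtable_for_cpu(dma_dev, uvc_urb->sgt,
					 DMA_FROM_DEVICE);
		/* Discard stale vmap-alias lines before the CPU reads. */
		invalidate_kernel_vmap_range(uvc_urb->buffer,
					     uvc_urb->stream->urb_size);
	}
}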
