On Mon, Jul 22, 2019 at 09:09:25PM -0700, John Stultz wrote:
> On Thu, Jul 18, 2019 at 3:06 AM Christoph Hellwig <h...@infradead.org> wrote:
> >
> > > +void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> > > +                          void (*free)(struct heap_helper_buffer *))
> >
> > Please use lower-case naming, following the naming scheme for the
> > rest of the file.
> 
> Yes! Apologies, this was a hold-over from when the initialization
> function was an inline function in the style of
> INIT_WORK/INIT_LIST_HEAD. That's no longer appropriate now that it's
> a real function. I'll change it.
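> I.e., just the same signature with a lower-case name, something like
> (exact name TBD):
> 
>     void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
>                                  void (*free)(struct heap_helper_buffer *));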
> 
> > > +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
> > > +{
> > > +     void *vaddr;
> > > +
> > > +     vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
> > > +     if (!vaddr)
> > > +             return ERR_PTR(-ENOMEM);
> > > +
> > > +     return vaddr;
> > > +}
> >
> > Unless I'm misreading the patches, this is used for the same pages
> > that might also be DMA mapped.  In that case you need to use
> > flush_kernel_vmap_range and invalidate_kernel_vmap_range in the right
> > places to ensure coherency between the vmap and the device view.  Please
> > also document the buffer ownership, as this really can get complicated.
> 
> Forgive me, I wasn't familiar with those calls, but if so, this seems
> like it would apply to all dma-buf exporters, and I don't see any
> similar flush_kernel_vmap_range calls in them (both functions appear
> to be used only by xfs, md and bio).
> 
> We do have the
> dma_heap_dma_buf_begin_cpu_access()/dma_heap_dma_buf_end_cpu_access()
> hooks (see more on these below), which sync the buffers for each
> attachment (via dma_sync_sg_for_cpu/device) and are used around cpu
> access to the buffers. Are you suggesting the
> flush_kernel_vmap_range() call be added to those hooks, or are the
> dma_sync_sg_for_* calls sufficient there?

dma_sync_sg_for_* only operates on the kernel direct mapping, which
is what you get from page_address/kmap* for the page.  But with vmap
you create another alias in kernel virtual space, which
dma_sync_sg_for_* can't know about.  Now, most CPU caches are
indexed by physical address, so this doesn't matter, but a significant
minority of CPUs (parisc, some arm, some mips) index by virtual address,
in which case we need non-empty flush_kernel_vmap_range and
invalidate_kernel_vmap_range helpers to deal with that vmap alias.
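
Concretely, that would mean pairing the existing dma_sync_sg_for_*
calls in your begin/end_cpu_access hooks with the vmap flush helpers.
An untested sketch; the vaddr/vmap_cnt/size field names here are
assumptions about how the helper struct tracks its vmap:

	static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
						     enum dma_data_direction direction)
	{
		struct heap_helper_buffer *buffer = dmabuf->priv;

		/* ... existing per-attachment dma_sync_sg_for_cpu() calls ... */

		/*
		 * Invalidate any vmap alias so the CPU doesn't read stale
		 * cache lines through it; a no-op on physically indexed
		 * caches.
		 */
		if (buffer->vmap_cnt)
			invalidate_kernel_vmap_range(buffer->vaddr, buffer->size);

		return 0;
	}

	static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
						   enum dma_data_direction direction)
	{
		struct heap_helper_buffer *buffer = dmabuf->priv;

		/* Write back CPU stores made through the vmap alias. */
		if (buffer->vmap_cnt)
			flush_kernel_vmap_range(buffer->vaddr, buffer->size);

		/* ... existing per-attachment dma_sync_sg_for_device() calls ... */

		return 0;
	}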
