On Thu, 2025-03-20 at 17:30 +0000, Matthew Auld wrote:
> If the memory is going to be accessed by the device, make sure we mark
> the pages accordingly such that the kernel knows this. This aligns with
> the xe-userptr code.
> 
> Signed-off-by: Matthew Auld <matthew.a...@intel.com>
> Cc: Thomas Hellström <thomas.hellst...@linux.intel.com>
> Cc: Matthew Brost <matthew.br...@intel.com>
> ---
>  drivers/gpu/drm/drm_gpusvm.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> index 7f1cf5492bba..5b4ecd36dff1 100644
> --- a/drivers/gpu/drm/drm_gpusvm.c
> +++ b/drivers/gpu/drm/drm_gpusvm.c
> @@ -1471,6 +1471,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>  			pages[i] = page;
>  		} else {
>  			dma_addr_t addr;
> +			unsigned int k;
>  
>  			if (is_zone_device_page(page) || zdd) {
>  				err = -EOPNOTSUPP;
> @@ -1489,6 +1490,14 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>  			range->dma_addr[j] = drm_pagemap_device_addr_encode
>  				(addr, DRM_INTERCONNECT_SYSTEM, order,
>  				 dma_dir);
> +
> +			for (k = 0; k < 1u << order; k++) {
> +				if (!ctx->read_only)
> +					set_page_dirty_lock(page);
> +
> +				mark_page_accessed(page);
> +				page++;
> +			}
Actually I think the userptr code did this unnecessarily. The dirty and
accessed marking is done in the CPU page-fault handler, which means it
is already taken care of during hmm_range_fault(). Now, if the CPU PTE
happens to be present and writeable there will be no fault, but in that
case the marking was already done when the page was faulted in. If a
page-cleaning event in between dropped the dirty flag, then my
understanding is that, in addition to an invalidation notifier, the CPU
PTE is also zapped, so the page will be dirtied again on the next write
access, either by the CPU faulting on the page or by hmm_range_fault()
if there is a GPU page-fault.

So I think we're good without this patch.

/Thomas

>  		}
>  		i += 1 << order;
>  		num_dma_mapped = i;
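For reference, this is the part of the hmm_range_fault() contract the
reasoning above relies on: with HMM_PFN_REQ_WRITE set, any page that is
not already mapped writeable is faulted through the normal CPU
write-fault path, which is what marks it dirty and accessed. A minimal
sketch of the usual caller loop follows; "notifier", "pfns", "mm",
"start", "end" and "read_only" are driver-side placeholders and the
error handling is elided, so this is not the actual gpusvm code:

#include <linux/hmm.h>
#include <linux/mmu_notifier.h>

	struct hmm_range range = {
		.notifier = &notifier,	/* driver's mmu_interval_notifier */
		.start = start,
		.end = end,
		.hmm_pfns = pfns,
		/*
		 * Request a write fault unless the mapping is read-only;
		 * the write fault is what dirties the page for us.
		 */
		.default_flags = HMM_PFN_REQ_FAULT |
				 (read_only ? 0 : HMM_PFN_REQ_WRITE),
	};
	int err;

	do {
		range.notifier_seq = mmu_interval_read_begin(range.notifier);
		mmap_read_lock(mm);
		err = hmm_range_fault(&range);
		mmap_read_unlock(mm);
		/* -EBUSY means we raced with an invalidation; retry. */
	} while (err == -EBUSY);

	/*
	 * Before actually using the pages the caller still has to
	 * validate range.notifier_seq with mmu_interval_read_retry()
	 * under its own lock.
	 */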