On 6/2/21 10:30 AM, Christian König wrote:
We have discussed for quite a while whether that is really the right approach, but
digging deeper into a bug report on ARM turned out that this is actually
horribly broken right now.

The reason for this is that vmf_insert_mixed_prot() always tries to grab
a reference to the underlying page on architectures without
ARCH_HAS_PTE_SPECIAL and, as far as I can see, that also enables GUP on
those pages.

So nuke using VM_MIXEDMAP here and use VM_PFNMAP instead.

Also set VM_SHARED; not 100% sure if that is needed with VM_PFNMAP, but better
safe than sorry.

Signed-off-by: Christian König <christian.koe...@amd.com>
Bugs: https://gitlab.freedesktop.org/drm/amd/-/issues/1606#note_936174
---
  drivers/gpu/drm/ttm/ttm_bo_vm.c | 29 +++++++----------------------
  1 file changed, 7 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 9bd15cb39145..bf86ae849340 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -359,12 +359,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
                 * at arbitrary times while the data is mmap'ed.
                 * See vmf_insert_mixed_prot() for a discussion.
                 */
-               if (vma->vm_flags & VM_MIXEDMAP)
-                       ret = vmf_insert_mixed_prot(vma, address,
-                                                   __pfn_to_pfn_t(pfn, PFN_DEV),
-                                                   prot);
-               else
-                       ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
+               ret = vmf_insert_pfn_prot(vma, address, pfn, prot);

I think vmwgfx still uses MIXEDMAP. (Which is of course the same bug and should be changed.)

                /* Never error on prefaulted PTEs */
                if (unlikely((ret & VM_FAULT_ERROR))) {
@@ -411,15 +406,9 @@ vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault *vmf, pgprot_t prot)
        pfn = page_to_pfn(page);

        /* Prefault the entire VMA range right away to avoid further faults */
-       for (address = vma->vm_start; address < vma->vm_end; address += PAGE_SIZE) {
-
-               if (vma->vm_flags & VM_MIXEDMAP)
-                       ret = vmf_insert_mixed_prot(vma, address,
-                                                   __pfn_to_pfn_t(pfn, PFN_DEV),
-                                                   prot);
-               else
-                       ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
-       }
+       for (address = vma->vm_start; address < vma->vm_end;
+            address += PAGE_SIZE)
+               ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
        return ret;
  }
@@ -576,14 +565,10 @@ static void ttm_bo_mmap_vma_setup(struct ttm_buffer_object *bo, struct vm_area_s
        vma->vm_private_data = bo;

-       /*
-        * We'd like to use VM_PFNMAP on shared mappings, where
-        * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
-        * but for some reason VM_PFNMAP + x86 PAT + write-combine is very
-        * bad for performance. Until that has been sorted out, use
-        * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
+       /* Enforce VM_SHARED here since no driver backend actually supports COW
+        * on TTM buffer object mappings.

I think by default all TTM drivers support COW mappings in the sense that written data never makes it to the bo but stays in anonymous pages, although I can't find a single use case. So the comment should be changed to state that COW mappings are useless for us and that we can't support them with VM_PFNMAP.

         */
-       vma->vm_flags |= VM_MIXEDMAP;
+       vma->vm_flags |= VM_PFNMAP | VM_SHARED;

Hmm, shouldn't we refuse COW mappings instead, like my old patch on this subject did? In theory someone could be setting up what she thinks is a private mapping to a shared buffer object and write sensitive data to it, which will immediately leak. It's a simple check; we could open-code it if necessary.

        vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
  }

/Thomas