On 2/6/22 20:26, Alistair Popple wrote:
migrate_vma_setup() checks that a valid vma is passed so that the page
tables can be walked to find the pfns associated with a given address
range. However, in some cases the pfns are already known, such as when
migrating device coherent pages during pin_user_pages(), meaning a valid
vma isn't required.

Signed-off-by: Alistair Popple <apop...@nvidia.com>
Acked-by: Felix Kuehling <felix.kuehl...@amd.com>
---

Changes for v2:

  - Added Felix's Acked-by

  mm/migrate.c | 34 +++++++++++++++++-----------------
  1 file changed, 17 insertions(+), 17 deletions(-)


Hi Alistair,

Another late-breaking review question, below. :)

diff --git a/mm/migrate.c b/mm/migrate.c
index a9aed12..0d6570d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2602,24 +2602,24 @@ int migrate_vma_setup(struct migrate_vma *args)
        args->start &= PAGE_MASK;
        args->end &= PAGE_MASK;
-       if (!args->vma || is_vm_hugetlb_page(args->vma) ||
-           (args->vma->vm_flags & VM_SPECIAL) || vma_is_dax(args->vma))
-               return -EINVAL;
-       if (nr_pages <= 0)
-               return -EINVAL;

Was the nr_pages check above dropped intentionally? If so, it needs a
note in the commit description, and maybe even a separate patch, because
it changes the behavior.

If you do want to change the behavior:

* The kerneldoc comment above this function supports such a change: it
requires returning 0 when zero pages are requested. So your change would
bring the code into alignment with the comments.

* memset() with a zero-length argument is at best a pointless no-op, so
it's probably better to return 0 before reaching that point. See the
sketch below.
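
Something along these lines (just a sketch, reusing the function's
existing nr_pages calculation) near the top of migrate_vma_setup()
would cover both points:

        long nr_pages = (args->end - args->start) >> PAGE_SHIFT;

        /*
         * The kerneldoc requires returning 0 when zero pages are
         * requested, so bail out before the memset() and the collect
         * step.
         */
        if (nr_pages <= 0)
                return 0;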


thanks,
--
John Hubbard
NVIDIA

-       if (args->start < args->vma->vm_start ||
-           args->start >= args->vma->vm_end)
-               return -EINVAL;
-       if (args->end <= args->vma->vm_start || args->end > args->vma->vm_end)
-               return -EINVAL;
        if (!args->src || !args->dst)
                return -EINVAL;
-
-       memset(args->src, 0, sizeof(*args->src) * nr_pages);
-       args->cpages = 0;
-       args->npages = 0;
-
-       migrate_vma_collect(args);
+       if (args->vma) {
+               if (is_vm_hugetlb_page(args->vma) ||
+                       (args->vma->vm_flags & VM_SPECIAL) || vma_is_dax(args->vma))
+                       return -EINVAL;
+               if (args->start < args->vma->vm_start ||
+                       args->start >= args->vma->vm_end)
+                       return -EINVAL;
+               if (args->end <= args->vma->vm_start || args->end > args->vma->vm_end)
+                       return -EINVAL;
+
+               memset(args->src, 0, sizeof(*args->src) * nr_pages);
+               args->cpages = 0;
+               args->npages = 0;
+
+               migrate_vma_collect(args);
+       }
        if (args->cpages)
                migrate_vma_unmap(args);
@@ -2804,7 +2804,7 @@ void migrate_vma_pages(struct migrate_vma *migrate)
                        continue;
                }
-               if (!page) {
+               if (!page && migrate->vma) {
                        if (!(migrate->src[i] & MIGRATE_PFN_MIGRATE))
                                continue;
                        if (!notified) {
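
Also, just to confirm my reading of the new calling convention here: a
vma-less caller is expected to pre-fill the src array and set cpages
and npages itself before calling migrate_vma_setup(), roughly like this
(hypothetical sketch; src_pfns, dst_pfns and npages are made-up
caller-side names):

        /*
         * Device coherent case: the pfns are already known, so skip
         * the vma and the page table walk entirely.
         */
        struct migrate_vma args = {
                .vma    = NULL,
                .src    = src_pfns,     /* pre-filled MIGRATE_PFN_* entries */
                .dst    = dst_pfns,
                .cpages = npages,
                .npages = npages,
        };

        int ret = migrate_vma_setup(&args);

Is that right?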
