On Fri, Dec 04, 2020 at 03:48:41PM +0530, Bharata B Rao wrote:
> On Thu, Dec 03, 2020 at 04:08:12PM +1100, Alistair Popple wrote:
> > migrate_vma_pages() may still clear MIGRATE_PFN_MIGRATE on pages which
> > are not able to be migrated. Drivers may safely copy data prior to
> > calling migrate_vma_pages(), however a remote mapping must not be
> > established until after migrate_vma_pages() has returned as the
> > migration could still fail.
> >
> > UV_PAGE_IN both copies and maps the data page, therefore it should
> > only be called after checking the results of migrate_vma_pages().
> >
> > Signed-off-by: Alistair Popple <alist...@popple.id.au>
> > ---
> >  arch/powerpc/kvm/book3s_hv_uvmem.c | 7 ++++---
> >  1 file changed, 4 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
> > index 84e5a2dc8be5..08aa6a90c525 100644
> > --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
> > +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
> > @@ -762,7 +762,10 @@ static int kvmppc_svm_page_in(struct vm_area_struct *vma,
> >  		goto out_finalize;
> >  	}
> >  
> > -	if (pagein) {
> > +	*mig.dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
> > +	migrate_vma_pages(&mig);
> > +
> > +	if ((*mig.src & MIGRATE_PFN_MIGRATE) && pagein) {
> >  		pfn = *mig.src >> MIGRATE_PFN_SHIFT;
> >  		spage = migrate_pfn_to_page(*mig.src);
> >  		if (spage) {
> > @@ -773,8 +776,6 @@ static int kvmppc_svm_page_in(struct vm_area_struct *vma,
> >  		}
> >  	}
> >  
> > -	*mig.dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
> > -	migrate_vma_pages(&mig);
> >  out_finalize:
> >  	migrate_vma_finalize(&mig);
> >  	return ret;
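
To make sure I'm reading the change right, the ordering in
kvmppc_svm_page_in() after this patch becomes roughly the following
(a sketch pieced together from the hunks above; declarations, error
handling and the uv_page_in() arguments are elided):

	/* Install the destination device page for the migration. */
	*mig.dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;

	/* Attempt the migration; this can still clear MIGRATE_PFN_MIGRATE. */
	migrate_vma_pages(&mig);

	/* Only issue UV_PAGE_IN (copy + map) if the page really migrated. */
	if ((*mig.src & MIGRATE_PFN_MIGRATE) && pagein) {
		pfn = *mig.src >> MIGRATE_PFN_SHIFT;
		spage = migrate_pfn_to_page(*mig.src);
		if (spage)
			ret = uv_page_in(...);
	}

out_finalize:
	migrate_vma_finalize(&mig);

i.e. the ucall that establishes the secure mapping is now gated on the
MIGRATE_PFN_MIGRATE check after migrate_vma_pages() returns, which matches
the rule stated in the changelog.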
This patch certainly looks like it addresses the problem that has been
hurting us for a while. It looks very promising; let me run it through my
SVM tests.

BTW: the code does a similar thing while paging out. It pages out from the
UV first and then does the migration. Is there a bug there as well?

RP