> The conversion to use a structure for mmu_notifier_invalidate_range_*()
> unintentionally changed the usage in try_to_unmap_one() to init the
> 'struct mmu_notifier_range' with vma->vm_start instead of @address,
> i.e. it invalidates the wrong address range.  Revert to the correct
> address range.
> 
> Manifests as KVM use-after-free WARNINGs and subsequent "BUG: Bad page
> state in process X" errors when reclaiming from a KVM guest due to KVM
> removing the wrong pages from its own mappings.
> 
> Reported-by: leozinho29...@hotmail.com
> Reported-by: Mike Galbraith <efa...@gmx.de>
> Reported-by: Adam Borowski <kilob...@angband.pl>
> Cc: Jérôme Glisse <jgli...@redhat.com>
> Cc: Christian König <christian.koe...@amd.com>
> Cc: Jan Kara <j...@suse.cz>
> Cc: Matthew Wilcox <mawil...@microsoft.com>
> Cc: Ross Zwisler <zwis...@kernel.org>
> Cc: Dan Williams <dan.j.willi...@intel.com>
> Cc: Paolo Bonzini <pbonz...@redhat.com>
> Cc: Radim Krčmář <rkrc...@redhat.com>
> Cc: Michal Hocko <mho...@kernel.org>
> Cc: Felix Kuehling <felix.kuehl...@amd.com>
> Cc: Ralph Campbell <rcampb...@nvidia.com>
> Cc: John Hubbard <jhubb...@nvidia.com>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Cc: Linus Torvalds <torva...@linux-foundation.org>
> Fixes: ac46d4f3c432 ("mm/mmu_notifier: use structure for invalidate_range_start/end calls v2")
> Signed-off-by: Sean Christopherson <sean.j.christopher...@intel.com>
> ---
> 
> FWIW, I looked through all other calls to mmu_notifier_range_init() in
> the patch and didn't spot any other unintentional functional changes.
> 
>  mm/rmap.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 68a1a5b869a5..0454ecc29537 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1371,8 +1371,8 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>        * Note that the page can not be free in this function as call of
>        * try_to_unmap() must hold a reference on the page.
>        */
> -     mmu_notifier_range_init(&range, vma->vm_mm, vma->vm_start,
> -                             min(vma->vm_end, vma->vm_start +
> +     mmu_notifier_range_init(&range, vma->vm_mm, address,
> +                             min(vma->vm_end, address +
>                                   (PAGE_SIZE << compound_order(page))));
>       if (PageHuge(page)) {
>               /*
> --

I had been suspecting this patch for a different issue, but even after analyzing the changed "invalidate_range_start/end" calls in depth I could not spot this one.

It's indeed a good catch.
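
For anyone else reading along, here is a tiny user-space sketch (made-up addresses, an order-0 page, plain C rather than kernel code) of why anchoring the invalidation at vma->vm_start misses the page when @address sits further into the VMA:

#include <stdio.h>

#define PAGE_SIZE 4096UL

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* Made-up layout: a 16 MiB VMA with the target page 2 MiB in. */
	unsigned long vm_start = 0x7f0000000000UL;
	unsigned long vm_end   = vm_start + (4096 * PAGE_SIZE);
	unsigned long address  = vm_start + 0x200000UL;
	unsigned long len      = PAGE_SIZE;	/* PAGE_SIZE << compound_order(page) for order 0 */

	/* Buggy range: anchored at vm_start, so it never covers @address. */
	printf("buggy: [%#lx, %#lx)\n",
	       vm_start, min_ul(vm_end, vm_start + len));

	/* Fixed range: anchored at the address actually being unmapped. */
	printf("fixed: [%#lx, %#lx)\n",
	       address, min_ul(vm_end, address + len));
	return 0;
}

The buggy range only ever covers the first pages of the VMA, so a secondary MMU such as KVM is told to drop mappings for the wrong addresses while the page at @address is unmapped behind its back, which matches the use-after-free symptoms described above.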

Reviewed-by: Pankaj Gupta <pagu...@redhat.com>

> 2.19.2
> 
> 
