Re: [PATCH v4] hugetlb: simplify hugetlb handling in follow_page_mask

2022-10-30 Thread Mike Kravetz
On 10/30/22 15:45, Peter Xu wrote:
> On Fri, Oct 28, 2022 at 11:11:08AM -0700, Mike Kravetz wrote:
> > +	} else {
> > +		if (is_hugetlb_entry_migration(entry)) {
> > +			spin_unlock(ptl);
> > +			hugetlb_vma_unlock_read(vma);
>
> Just noticed it when pull
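
The hunk quoted above is the non-present branch of the proposed hugetlb_follow_page_mask(). A minimal sketch of how such a branch typically looks follows; the retry label, the pte/ptl variables, and the __migration_entry_wait_huge() call are assumed context drawn from similar hugetlb walkers, not part of the quoted excerpt:

	} else {
		if (is_hugetlb_entry_migration(entry)) {
			/*
			 * Drop the page table lock (and, in the v4 layout
			 * quoted above, the hugetlb vma lock) before sleeping
			 * on the migration entry, then retry the walk.
			 */
			spin_unlock(ptl);
			hugetlb_vma_unlock_read(vma);
			__migration_entry_wait_huge(pte, ptl);
			goto retry;
		}
		/* hwpoisoned entries fall through and no page is returned */
	}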

Re: [PATCH v4] hugetlb: simplify hugetlb handling in follow_page_mask

2022-10-30 Thread Peter Xu
On Fri, Oct 28, 2022 at 11:11:08AM -0700, Mike Kravetz wrote:
> +struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
> +				unsigned long address, unsigned int flags)
> +{
> +	struct hstate *h = hstate_vma(vma);
> +	struct mm_struct *mm = vma->vm_mm;
>
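
The preview cuts off at the local declarations. A rough sketch of the overall shape of such a lookup, assuming the usual huge_pte_offset()/huge_pte_lock() pattern (the exact v4 body is not visible in this preview, and the reference-grabbing step in particular may differ):

	unsigned long haddr = address & huge_page_mask(h);
	struct page *page = NULL;
	spinlock_t *ptl;
	pte_t *pte, entry;

	pte = huge_pte_offset(mm, haddr, huge_page_size(h));
	if (!pte)
		return NULL;

	ptl = huge_pte_lock(h, mm, pte);
	entry = huge_ptep_get(pte);
	if (pte_present(entry)) {
		/* Point at the sub-page within the huge page. */
		page = pte_page(entry) +
			((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
		/* Take a reference according to the FOLL_* flags. */
		if (!try_grab_page(page, flags))
			page = NULL;
	}
	spin_unlock(ptl);
	return page;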

Re: [PATCH v4] hugetlb: simplify hugetlb handling in follow_page_mask

2022-10-28 Thread Peter Xu
On Fri, Oct 28, 2022 at 11:11:08AM -0700, Mike Kravetz wrote:
> v4 -	Remove vma (pmd sharing) locking as this can be called with
>	FOLL_NOWAIT. Peter

Thanks, Mike. For the gup safety on pmd unshare, I'll prepare a few small patches and post hopefully early next week (will be off-work on

[PATCH v4] hugetlb: simplify hugetlb handling in follow_page_mask

2022-10-28 Thread Mike Kravetz
During discussions of this series [1], it was suggested that hugetlb handling code in follow_page_mask could be simplified. At the beginning of follow_page_mask, there currently is a call to follow_huge_addr which 'may' handle hugetlb pages. ia64 is the only architecture which provides a follow_huge_addr
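
The simplification amounts to routing hugetlb VMAs through a single hook near the top of follow_page_mask() instead of special-casing hugetlb at lower levels of the walk. A hedged sketch of what that dispatch could look like, with no_page_table() assumed from the existing gup code:

	/* in follow_page_mask(), before walking the page tables */
	if (is_vm_hugetlb_page(vma)) {
		page = hugetlb_follow_page_mask(vma, address, flags);
		if (!page)
			page = no_page_table(vma, flags);
		return page;
	}

With a single per-vma hook like this, hugetlb handling no longer needs to be repeated further down the page table walk, which is the simplification the description above refers to.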