Initial Post (Thu, 18 Aug 2005)

In preparation for hugetlb demand faulting, remove this get_user_pages() optimization. Since huge pages will no longer be prefaulted, we can't assume that the huge ptes are established, and hence calling follow_hugetlb_page() is not valid.
With the follow_hugetlb_page() call removed, the normal code path is triggered: follow_page() will use either follow_huge_addr() or follow_huge_pmd() to look up a previously faulted page to return. When this fails (i.e., with demand faults), __handle_mm_fault() gets called, which invokes the hugetlb_fault() handler to instantiate the huge page.

This patch doesn't make a lot of sense by itself, but I've broken it out to facilitate discussion on this specific element of the demand fault changes. While coding this up, I referenced previous discussion on this topic starting at http://lkml.org/lkml/2004/4/13/176 , which contains more opinions about the correctness of this approach.

Diffed against 2.6.13-git6

Signed-off-by: Adam Litke <[EMAIL PROTECTED]>
---
 memory.c |    5 -----
 1 files changed, 5 deletions(-)

diff -upN reference/mm/memory.c current/mm/memory.c
--- reference/mm/memory.c
+++ current/mm/memory.c
@@ -949,11 +949,6 @@ int get_user_pages(struct task_struct *t
 				|| !(flags & vma->vm_flags))
 			return i ? : -EFAULT;

-		if (is_vm_hugetlb_page(vma)) {
-			i = follow_hugetlb_page(mm, vma, pages, vmas,
-						&start, &len, i);
-			continue;
-		}
 		spin_lock(&mm->page_table_lock);
 		do {
 			int write_access = write;
--
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center