On Wed, 23 Feb 2005, Lee Revell wrote:
>
> Did something change recently in the VM that made copy_pte_range and
> clear_page_range a lot more expensive?  I noticed a reference in the
> "Page Table Iterators" thread to excessive overhead introduced by
> aggressive page freeing.  That sure looks like what is going on in
> trace2.  trace1 and trace3 look like big fork latencies associated with
> copy_pte_range.
I'm just about to test this patch below: please give it a try: thanks...

Ingo's patch to reduce scheduling latencies, by checking for lockbreak
in copy_page_range, was in the -VP and -mm patchsets some months ago;
but got preempted by the 4level rework, and not reinstated since.
Restore it now in copy_pte_range - which mercifully makes it easier.

Signed-off-by: Hugh Dickins <[EMAIL PROTECTED]>

--- 2.6.11-rc4-bk9/mm/memory.c	2005-02-21 11:32:19.000000000 +0000
+++ linux/mm/memory.c	2005-02-23 18:35:28.000000000 +0000
@@ -328,6 +328,7 @@ static int copy_pte_range(struct mm_stru
 	pte_t *s, *d;
 	unsigned long vm_flags = vma->vm_flags;
 
+again:
 	d = dst_pte = pte_alloc_map(dst_mm, dst_pmd, addr);
 	if (!dst_pte)
 		return -ENOMEM;
@@ -338,11 +339,22 @@ static int copy_pte_range(struct mm_stru
 		if (pte_none(*s))
 			continue;
 		copy_one_pte(dst_mm, src_mm, d, s, vm_flags, addr);
+		/*
+		 * We are holding two locks at this point - either of them
+		 * could generate latencies in another task on another CPU.
+		 */
+		if (need_resched() ||
+		    need_lockbreak(&src_mm->page_table_lock) ||
+		    need_lockbreak(&dst_mm->page_table_lock))
+			break;
 	}
 	pte_unmap_nested(src_pte);
 	pte_unmap(dst_pte);
 	spin_unlock(&src_mm->page_table_lock);
+	cond_resched_lock(&dst_mm->page_table_lock);
+	if (addr < end)
+		goto again;
 	return 0;
 }
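
For anyone following along who isn't fluent in mm/ internals: the pattern
the patch restores is not specific to page tables.  The idea is simply to
cap how long any one lock is held - do a bounded chunk of work, drop the
lock(s) so waiters and the scheduler can get in, then resume where you
left off (here, cond_resched_lock() does the drop/reschedule/reacquire
in one step, and the goto re-enters the loop at the current addr).
Below is a minimal user-space sketch of the same lock-break idea using
pthreads; all names in it (big_table, table_lock, copy_table, LOCK_BREAK)
are illustrative, not kernel API:

	#include <pthread.h>
	#include <sched.h>
	#include <stddef.h>

	#define TABLE_SIZE	(1 << 20)
	#define LOCK_BREAK	1024	/* entries copied per lock hold */

	static long big_table[TABLE_SIZE];
	static long table_copy[TABLE_SIZE];
	static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

	/*
	 * Copy big_table under table_lock, but release the lock every
	 * LOCK_BREAK entries so other threads can make progress - the
	 * analogue of breaking out of the pte loop and then calling
	 * cond_resched_lock() in copy_pte_range() above.
	 */
	static void copy_table(void)
	{
		size_t i = 0;

		while (i < TABLE_SIZE) {
			size_t stop = i + LOCK_BREAK;

			if (stop > TABLE_SIZE)
				stop = TABLE_SIZE;

			pthread_mutex_lock(&table_lock);
			for (; i < stop; i++)
				table_copy[i] = big_table[i];
			pthread_mutex_unlock(&table_lock);

			sched_yield();	/* rough stand-in for cond_resched() */
		}
	}

The worst-case hold time is now bounded by one chunk rather than by the
size of the whole table, which is exactly the latency property the traces
above were missing.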