On Tue, 17 Jul 2007, Andrew Morton wrote:
> On Tue, 17 Jul 2007 18:26:14 +0100 (BST) Hugh Dickins <[EMAIL PROTECTED]>
> wrote:
> > > I think I like the lock ;)
> >
> > I hate to waste your time, but I'm still puzzled.  Wasn't the race fixed
> > by your changeover from use of "static int nid" throughout, to setting
> > local "int nid" from "static int prev_nid", working with nid, then
> > setting prev_nid from nid at the end?  What does the lock add to that?
>
> There are still minor races without the lock - two CPUs will allocate
> from the first node, and prev_nid can occasionally go backwards.
>
> I agree that they are sufficiently minor that we could remove the lock.
Phew, yes, that's what I meant - thanks.

[PATCH] Remove nid_lock from alloc_fresh_huge_page

The fix to the race in alloc_fresh_huge_page(), which could give an
illegal node ID, did not need nid_lock at all: the fix was to replace
static int nid by static int prev_nid, and to do the work on a local
int nid.  nid_lock did make sure that racers strictly round-robin the
nodes, but that's not something we actually need to enforce.  Kill
that nid_lock.

Signed-off-by: Hugh Dickins <[EMAIL PROTECTED]>
---
 mm/hugetlb.c |    3 ---
 1 file changed, 3 deletions(-)

--- 2.6.22-git9/mm/hugetlb.c	2007-07-17 20:29:33.000000000 +0100
+++ linux/mm/hugetlb.c	2007-07-17 20:32:51.000000000 +0100
@@ -107,15 +107,12 @@ static int alloc_fresh_huge_page(void)
 {
 	static int prev_nid;
 	struct page *page;
-	static DEFINE_SPINLOCK(nid_lock);
 	int nid;
 
-	spin_lock(&nid_lock);
 	nid = next_node(prev_nid, node_online_map);
 	if (nid == MAX_NUMNODES)
 		nid = first_node(node_online_map);
 	prev_nid = nid;
-	spin_unlock(&nid_lock);
 	page = alloc_pages_node(nid,
 		htlb_alloc_mask|__GFP_COMP|__GFP_NOWARN,
 		HUGETLB_PAGE_ORDER);
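For anyone wondering why dropping the lock is safe, here is a minimal
userspace sketch of the same pattern - illustration only, not kernel
code: pick_nid(), MAX_NODES, NTHREADS and the pthread harness are
stand-ins I've made up for next_node()/first_node() over an
always-online node map.  Racing threads may hand out the same node
twice or step prev_nid backwards (the benign races Andrew describes
above, deliberately reproduced here), but each caller only ever
returns its own local nid, which is always in range:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define MAX_NODES  4		/* stand-in for MAX_NUMNODES */
#define NTHREADS   8
#define PICKS      100000

static int prev_nid;		/* shared and deliberately unlocked, like
				   the static in alloc_fresh_huge_page() */
static atomic_long bad_picks;

/* stand-in for next_node()/first_node() over an all-online node map */
static int pick_nid(void)
{
	int nid = prev_nid + 1;	/* copy the shared value into a local */

	if (nid >= MAX_NODES)	/* wrap, as first_node() would */
		nid = 0;
	prev_nid = nid;		/* racy write-back: may repeat a node or go
				   "backwards", both of which are harmless */
	return nid;		/* the local is always a valid node id */
}

static void *worker(void *unused)
{
	(void)unused;
	for (int i = 0; i < PICKS; i++) {
		int nid = pick_nid();

		if (nid < 0 || nid >= MAX_NODES)
			atomic_fetch_add(&bad_picks, 1);
	}
	return NULL;
}

int main(void)
{
	pthread_t threads[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&threads[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(threads[i], NULL);

	/* expected: 0 - racers never produce an out-of-range node id */
	printf("out-of-range picks: %ld\n", atomic_load(&bad_picks));
	return 0;
}

Build with something like gcc -std=gnu11 -pthread; the distribution
across nodes is only approximately round-robin under contention, which
is exactly the tradeoff the patch accepts.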