On Fri, 13 Jul 2007 10:45:07 +0800 Joe Jin <[EMAIL PROTECTED]> wrote:

Something like this?

--- a/mm/hugetlb.c~a
+++ a/mm/hugetlb.c
@@ -105,13 +105,20 @@ static void free_huge_page(struct page *
 static int alloc_fresh_huge_page(void)
 {
-       static int nid = 0;
+       static int prev_nid;
+       static DEFINE_SPINLOCK(nid_lock);
        struct page *page;
-       page = alloc_pages_node(nid, htlb_alloc_mask|__GFP_COMP|__GFP_NOWARN,
-                                       HUGETLB_PAGE_ORDER);
-       nid = next_node(nid, node_online_map);
+       int nid;
+
+       spin_lock(&nid_lock);
+       nid = next_node(prev_nid, node_online_map);
        if (nid == MAX_NUMNODES)
                nid = first_node(node_online_map);
+       prev_nid = nid;
+       spin_unlock(&nid_lock);
+
+       page = alloc_pages_node(nid, htlb_alloc_mask|__GFP_COMP|__GFP_NOWARN,
+                                       HUGETLB_PAGE_ORDER);
        if (page) {
                set_compound_page_dtor(page, free_huge_page);
                spin_lock(&hugetlb_lock);
_

I think this will never get pages from node 0?  Because nid = next_node(prev_nid, node_online_map), so even when prev_nid == 0, nid becomes 1.

It'll start out at node 1.  But it will visit the final node (which is less
than MAX_NUMNODES) and will then advance onto the first node (which can be
0).
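
To make the wrap-around concrete, here is a minimal userspace sketch (not
kernel code) of the same round-robin, assuming a hypothetical 4-node map
with every node online; the next_node()/first_node() stand-ins only mimic
the nodemask.h semantics:

#include <stdio.h>

#define MAX_NUMNODES	4

/* pretend all four nodes are online */
static const int node_online[MAX_NUMNODES] = { 1, 1, 1, 1 };

/* stand-in for next_node(n, node_online_map): first online node > n */
static int next_node(int n)
{
	for (n = n + 1; n < MAX_NUMNODES; n++)
		if (node_online[n])
			return n;
	return MAX_NUMNODES;
}

/* stand-in for first_node(node_online_map) */
static int first_node(void)
{
	return next_node(-1);
}

int main(void)
{
	static int prev_nid;	/* starts at 0, as in the patch */
	int i, nid;

	for (i = 0; i < 8; i++) {
		nid = next_node(prev_nid);
		if (nid == MAX_NUMNODES)
			nid = first_node();
		prev_nid = nid;
		printf("allocation %d -> node %d\n", i, nid);
	}
	return 0;
}

It prints nodes 1 2 3 0 1 2 3 0: node 0 is skipped on the very first
allocation, but is reached as soon as the cursor wraps past the last
online node.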

This code needs a bit of thought and testing for the non-NUMA case too,
please.  At the least, there might be optimisation opportunities.
I tested on a non-NUMA machine.  Your patch works fine.
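
In the non-NUMA case only node 0 is in node_online_map, so next_node()
always returns MAX_NUMNODES and the code falls back to first_node() on
every call; the nid_lock round-trip buys nothing there.  One possible
shortcut, sketched with the existing nodemask helpers (just an
illustration of the optimisation opportunity, not a tested patch):

	int nid;

	if (nodes_weight(node_online_map) == 1) {
		/* a single online node: the round-robin can only ever pick it */
		nid = first_node(node_online_map);
	} else {
		spin_lock(&nid_lock);
		nid = next_node(prev_nid, node_online_map);
		if (nid == MAX_NUMNODES)
			nid = first_node(node_online_map);
		prev_nid = nid;
		spin_unlock(&nid_lock);
	}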

Thanks,
-Guru


