Actually, thanks for pointing that out, Pekka. Here is a patch that takes the regression almost completely away (too much jetlag combined with flu seems to have impaired my thinking; I should have seen this earlier). I still need to run my slab performance tests on this, but hackbench improves.
SLUB: Improve hackbench speed

Increase the minimum number of partial slabs to keep around and put
partial slabs to the end of the partial queue so that they can add
more objects.

Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>

---
 mm/slub.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2007-12-21 14:30:11.096324462 -0800
+++ linux-2.6/mm/slub.c	2007-12-21 14:30:33.656441994 -0800
@@ -172,7 +172,7 @@ static inline void ClearSlabDebug(struct
  * Mininum number of partial slabs. These will be left on the partial
  * lists even if they are empty. kmem_cache_shrink may reclaim them.
  */
-#define MIN_PARTIAL 2
+#define MIN_PARTIAL 5
 
 /*
  * Maximum number of desirable partial slabs.
@@ -1616,7 +1616,7 @@ checks_ok:
 	 * then add it.
 	 */
 	if (unlikely(!prior))
-		add_partial(get_node(s, page_to_nid(page)), page);
+		add_partial_tail(get_node(s, page_to_nid(page)), page);
 
 out_unlock:
 	slab_unlock(page);
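For readers following the second hunk: add_partial() and add_partial_tail() differ only in whether the slab goes to the head or the tail of the node's partial list. A minimal sketch of the two helpers, assuming they look roughly as they do in the 2.6.23/2.6.24-era mm/slub.c (the field and function names below are taken from that source, not from this patch):

/*
 * Sketch of the two partial-list helpers (assumed, not part of this
 * patch).  The only difference is list_add() vs list_add_tail(): a
 * slab queued at the tail is picked for allocation last, so it can
 * keep absorbing frees instead of being refilled right away.
 */
static void add_partial(struct kmem_cache_node *n, struct page *page)
{
	spin_lock(&n->list_lock);
	n->nr_partial++;
	list_add(&page->lru, &n->partial);	/* head: reused soon */
	spin_unlock(&n->list_lock);
}

static void add_partial_tail(struct kmem_cache_node *n, struct page *page)
{
	spin_lock(&n->list_lock);
	n->nr_partial++;
	list_add_tail(&page->lru, &n->partial);	/* tail: reused last */
	spin_unlock(&n->list_lock);
}

Queueing a slab that just went from full to partial at the tail keeps it out of the allocator's way for longer, so it tends to accumulate more free objects before it is reused, which is what helps hackbench here.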