2014-12-16 5:42 GMT+03:00 Joonsoo Kim <iamjoonsoo....@lge.com>:
> On Mon, Dec 15, 2014 at 08:16:00AM -0600, Christoph Lameter wrote:
>> On Mon, 15 Dec 2014, Joonsoo Kim wrote:
>>
>> > > +static bool same_slab_page(struct kmem_cache *s, struct page *page, void *p)
>> > > +{
>> > > +	long d = p - page->address;
>> > > +
>> > > +	return d > 0 && d < (1 << MAX_ORDER) && d < (compound_order(page) << PAGE_SHIFT);
>> > > +}
>> > > +
>> >
>> > Sometimes, compound_order() induces one more cacheline access, because
>> > compound_order() accesses the second struct page in order to get the order.
>> > Is there any way to remove this?
>>
>> I already have code there to avoid the access if it's within a MAX_ORDER
>> page. We could probably go for a smaller setting there. PAGE_COSTLY_ORDER?
>
> That is the solution to avoid the compound_order() call when the slab of the
> object doesn't match the per-cpu slab.
>
> What I'm asking is whether there is a way to avoid the compound_order() call
> when the slab of the object does match the per-cpu slab.
>
Can we use page->objects for that? Like this:

	return d > 0 && d < page->objects * s->size;
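For context, a minimal sketch of how the helper could look with that change
(purely illustrative, reusing the page->address field from the quoted patch;
page->objects lives in the first struct page, so no tail-page access is needed):

	/*
	 * Sketch only: bound the offset by the number of objects in the
	 * slab page instead of by the compound page size, so compound_order()
	 * (and its extra cacheline touch) is not needed at all.
	 */
	static bool same_slab_page(struct kmem_cache *s, struct page *page, void *p)
	{
		long d = p - page->address;

		return d > 0 && d < page->objects * s->size;
	}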