On Mon, 26 Jan 2015, Vladimir Davydov wrote:

> SLUB's kmem_cache_shrink not only removes empty slabs from the cache,
> but also sorts slabs by the number of objects in-use to cope with
> fragmentation. To achieve that, it tries to allocate a temporary array.
> If it fails, it will abort the whole procedure.
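
To make the quoted description concrete: the sort is essentially a bucket
sort keyed on page->inuse, using a temporary array with one bucket per
possible inuse count. Below is a minimal userspace C sketch of that idea;
struct slab, sort_partial_by_inuse and the singly linked list are
illustrative stand-ins, not the real SLUB code, which works on struct page
and struct list_head under the node's list_lock.

#include <stdio.h>
#include <stdlib.h>

struct slab {			/* stand-in for the kernel's struct page */
	int inuse;		/* number of objects allocated from this slab */
	struct slab *next;	/* stand-in for the lru list linkage */
};

/* Rebuild the partial list so the fullest slabs come first. */
static struct slab *sort_partial_by_inuse(struct slab *partial, int objects)
{
	/* Temporary array of buckets, one per possible inuse value. */
	struct slab **buckets = calloc(objects, sizeof(*buckets));
	struct slab *sorted = NULL;
	int i;

	if (!buckets)
		return partial;	/* allocation failed: give up, keep old order */

	/* Distribute slabs into buckets indexed by their inuse count. */
	while (partial) {
		struct slab *s = partial;

		partial = s->next;
		s->next = buckets[s->inuse];
		buckets[s->inuse] = s;
	}

	/*
	 * Rebuild the list from emptiest to fullest, prepending each
	 * bucket, so the fullest slabs end up at the head.  Bucket 0
	 * holds completely empty slabs; the kernel frees those instead.
	 */
	for (i = 0; i < objects; i++) {
		while (buckets[i]) {
			struct slab *s = buckets[i];

			buckets[i] = s->next;
			s->next = sorted;
			sorted = s;
		}
	}

	free(buckets);
	return sorted;
}

int main(void)
{
	struct slab slabs[4] = {
		{ .inuse = 2 }, { .inuse = 0 }, { .inuse = 3 }, { .inuse = 1 },
	};
	struct slab *list = NULL, *s;
	int i;

	for (i = 0; i < 4; i++) {
		slabs[i].next = list;
		list = &slabs[i];
	}

	for (s = sort_partial_by_inuse(list, 4); s; s = s->next)
		printf("slab with inuse=%d\n", s->inuse);
	return 0;
}

The calloc failure branch in the sketch corresponds to the allocation
failure the quoted changelog is talking about.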
I do not think it's worth optimizing this. If we cannot allocate even a
small object then the system is in an extremely bad state anyways.

> @@ -3400,7 +3407,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
>  	 * list_lock. page->inuse here is the upper limit.
>  	 */
>  	list_for_each_entry_safe(page, t, &n->partial, lru) {
> -		list_move(&page->lru, slabs_by_inuse + page->inuse);
> +		if (page->inuse < objects)
> +			list_move(&page->lru,
> +				slabs_by_inuse + page->inuse);
>  		if (!page->inuse)
>  			n->nr_partial--;
>  	}

The condition is always true. A page that has page->inuse == objects
would not be on the partial list.
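
To spell the invariant out: a slab whose objects are all in use is taken
off the node's partial list, so every page the quoted loop visits already
satisfies page->inuse < objects. In terms of the sketch above, the
distribution loop could carry an assertion instead of a branch (add
#include <assert.h> at the top; this is only to illustrate the invariant,
not a suggestion for the kernel code):

	/* Distribution step with the invariant made explicit (sketch only). */
	while (partial) {
		struct slab *s = partial;

		partial = s->next;
		/* full slabs (inuse == objects) never sit on the partial list */
		assert(s->inuse < objects);
		s->next = buckets[s->inuse];
		buckets[s->inuse] = s;
	}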