On 02/05/2014 04:57 AM, David Rientjes wrote:
> On Tue, 4 Feb 2014, Christoph Lameter wrote:
>
>>> Although this cannot actually result in a race, because on cache
>>> destruction there should not be any concurrent frees or allocations from
>>> the cache, let's add spin_lock/unlock to free_partial() just to keep
>>> lockdep happy.

On Tue, 4 Feb 2014, Christoph Lameter wrote:
> > Although this cannot actually result in a race, because on cache
> > destruction there should not be any concurrent frees or allocations from
> > the cache, let's add spin_lock/unlock to free_partial() just to keep
> > lockdep happy.
>
> Please add a comment th[...]

On Tue, 4 Feb 2014, Vladimir Davydov wrote:
> Although this cannot actually result in a race, because on cache
> destruction there should not be any concurrent frees or allocations from
> the cache, let's add spin_lock/unlock to free_partial() just to keep
> lockdep happy.
Please add a comment th[...]

Commit c65c1877bd68 ("slub: use lockdep_assert_held") requires
remove_partial() to be called with n->list_lock held, but free_partial()
called from kmem_cache_close() on cache destruction does not follow this
rule, leading to a warning:
  WARNING: CPU: 0 PID: 2787 at mm/slub.c:1536 __kmem_cache_shutdown[...]
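
For context, a minimal sketch of the two pieces involved, reconstructed
from the description above and from mm/slub.c of that period. The exact
function bodies and the wording of the comment are approximations, not
the posted patch; discard_slab() and list_slab_objects() are slub-internal
helpers.

/* After commit c65c1877bd68, remove_partial() asserts that the caller
 * holds n->list_lock whenever lockdep is enabled.
 */
static inline void remove_partial(struct kmem_cache_node *n,
					struct page *page)
{
	lockdep_assert_held(&n->list_lock);
	list_del(&page->lru);
	n->nr_partial--;
}

/*
 * free_partial() runs only from kmem_cache_close() on cache destruction,
 * when no concurrent allocations or frees are possible. Taking
 * n->list_lock here is therefore not needed for correctness; it only
 * satisfies the lockdep assertion in remove_partial() -- the comment
 * requested above would explain exactly that.
 */
static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
{
	struct page *page, *h;

	spin_lock_irq(&n->list_lock);
	list_for_each_entry_safe(page, h, &n->partial, lru) {
		if (!page->inuse) {
			remove_partial(n, page);
			discard_slab(s, page);
		} else {
			list_slab_objects(s, page,
				"Objects remaining in %s on kmem_cache_close()");
		}
	}
	spin_unlock_irq(&n->list_lock);
}

The irq-disabling spin_lock_irq()/spin_unlock_irq() variant is assumed
here to match how n->list_lock is taken elsewhere in slub; with no
concurrency possible during destruction, a plain spin_lock()/spin_unlock()
pair would silence lockdep just as well.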