On Wed, May 21, 2014 at 09:45:54AM -0500, Christoph Lameter wrote:
> On Wed, 21 May 2014, Vladimir Davydov wrote:
>
> > Seems I've found a better way to avoid this race, which does not involve
> > messing up free hot paths. The idea is to explicitly zap each per-cpu
> > partial list by setting it pointing to an invalid ptr. Since
> > put_cpu_partial(), which is called from __slab_free(), uses atomic
> > cmpxchg for adding a new partial slab to a per cpu partial list, it is
> > enough to add a check if partials are zapped there and bail out if so.
> >
> > The patch doing the trick is attached. Could you please take a look at
> > it once time permit?
>
> Well if you set s->cpu_partial = 0 then the slab should not be added to
> the partial lists. Ok its put on there temporarily but then immediately
> moved to the node partial list in put_cpu_partial().
Don't think so. AFAIU put_cpu_partial() first checks whether the per-cpu
partial list already holds more than s->cpu_partial objects, draining it
to the node partial list if so, but then it adds the newly frozen slab to
the per-cpu list anyway.
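Roughly, the flow is as follows. This is a simplified userspace model, not
the actual mm/slub.c code: the struct layout, the PARTIALS_ZAPPED marker and
the helper names are stand-ins, and the real function operates on the
per-cpu list head with this_cpu_cmpxchg():

/*
 * Simplified model of put_cpu_partial() with the proposed zap check.
 * All types and helpers below are illustrative stand-ins.
 */
#include <stdbool.h>
#include <stddef.h>

#define PARTIALS_ZAPPED ((struct page *)-1)	/* invalid ptr marking a zapped list */

struct page {
	struct page *next;
	int pobjects;		/* free objects on the list so far */
	int pages;		/* pages on the list so far */
	int objects;
	int inuse;
};

struct kmem_cache {
	struct page *partial;	/* per-cpu partial list head (single cpu here) */
	int cpu_partial;	/* drain threshold */
};

/* Stand-in for this_cpu_cmpxchg() on the per-cpu list head. */
static struct page *cmpxchg_head(struct page **head, struct page *old,
				 struct page *new)
{
	struct page *cur = *head;

	if (cur == old)
		*head = new;
	return cur;
}

/* Stand-in for unfreeze_partials(): move everything to the node list. */
static void drain_to_node_list(struct kmem_cache *s)
{
	s->partial = NULL;
}

static void put_cpu_partial(struct kmem_cache *s, struct page *page, bool drain)
{
	struct page *oldpage;
	int pages, pobjects;

	do {
		pages = 0;
		pobjects = 0;
		oldpage = s->partial;

		/* The proposed check: bail out if the list was zapped. */
		if (oldpage == PARTIALS_ZAPPED)
			return;	/* (what the patch does with the slab then is elided) */

		if (oldpage) {
			pobjects = oldpage->pobjects;
			pages = oldpage->pages;
			/* Over the limit: drain the existing list... */
			if (drain && pobjects > s->cpu_partial) {
				drain_to_node_list(s);
				oldpage = NULL;
				pobjects = 0;
				pages = 0;
			}
		}

		/* ...but the newly frozen slab is still linked in regardless. */
		pages++;
		pobjects += page->objects - page->inuse;
		page->pages = pages;
		page->pobjects = pobjects;
		page->next = oldpage;
	} while (cmpxchg_head(&s->partial, oldpage, page) != oldpage);
}

So even when the list is over the limit and gets drained, the newly frozen
slab is still linked back onto the per-cpu list, which is why a zap check
inside the cmpxchg loop is needed to close the race.

Thanks.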