On Wed, Jun 04, 2014 at 05:53:29PM +0900, Joonsoo Kim wrote:
> Consider __slab_free(). After put_cpu_partial() is called in __slab_free(),
> we attempt to update a stat. There is a possibility that this operation
> could be a use-after-free with your solution. For now it is just the stat
> update, but ...
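For illustration, here is a minimal user-space model of the ordering Joonsoo
describes; the struct and helper names mirror slub.c but are stand-ins, not
the real kernel code. The point is only the ordering: the kmem_cache is
dereferenced again for the stat update *after* the slab has been handed over
by put_cpu_partial(), which in the scheme under discussion is the moment
another CPU may drain a dead cache and destroy it.

	#include <stdio.h>

	struct kmem_cache {
		unsigned long cpu_partial_free;	/* stand-in for the stat counter */
	};

	static void put_cpu_partial(struct kmem_cache *s)
	{
		/*
		 * With the proposed scheme, this may be the point where the last
		 * slab of a dead cache becomes visible to another CPU, which can
		 * then drain the cache and destroy *s.
		 */
	}

	static void __slab_free_tail(struct kmem_cache *s)
	{
		put_cpu_partial(s);
		s->cpu_partial_free++;	/* use-after-free if *s is already gone */
	}

	int main(void)
	{
		struct kmem_cache cache = { 0 };

		__slab_free_tail(&cache);
		printf("cpu_partial_free = %lu\n", cache.cpu_partial_free);
		return 0;
	}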
On Mon, 2 Jun 2014, Joonsoo Kim wrote:
> Hmm... this is also a bit ugly.
> How about the following change?
That looks much cleaner.
On Fri, 30 May 2014, Vladimir Davydov wrote:
> (3) is a bit more difficult, because slabs are added to per-cpu partial
> lists lock-less. Fortunately, we only have to handle the __slab_free
> case, because, as there shouldn't be any allocation requests dispatched
> to a dead memcg cache, get_partial() is never called for it.
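In other words, the allocation slow path can be ignored for a dead cache and
only the free path has to refrain from parking slabs on the lock-less
per-cpu partial list. A compile-able user-space sketch of that decision is
below; the types, the "dead" flag and the helper names are illustrative
stand-ins, not the actual patch.

	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct kmem_cache { bool dead; };
	struct slab { void *pages; };

	static void discard_slab(struct kmem_cache *s, struct slab *slab)
	{
		free(slab->pages);	/* hand the memory straight back */
		free(slab);
	}

	static void put_cpu_partial(struct kmem_cache *s, struct slab *slab)
	{
		/* the real code pushes the slab onto a lock-less per-cpu list */
		printf("slab %p parked for later allocations\n", (void *)slab);
	}

	/* Free-path decision for a slab that has just become fully free. */
	static void release_empty_slab(struct kmem_cache *s, struct slab *slab)
	{
		if (s->dead)
			discard_slab(s, slab);	/* no allocation will ever want it */
		else
			put_cpu_partial(s, slab);
	}

	int main(void)
	{
		struct kmem_cache dead_cache = { .dead = true };
		struct slab *slab = malloc(sizeof(*slab));

		slab->pages = malloc(4096);
		release_empty_slab(&dead_cache, slab);
		return 0;
	}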
To speed up further allocations, SLUB may keep some empty slabs on its
per-cpu/per-node partial lists. If the cache is dead, i.e. belongs to a
memcg that was taken offline, there is no need for that, because dead
caches are never allocated from.
What is worse, keeping empty slabs on those lists pins the dead cache:
it can only be destroyed once it holds no slabs at all, so the cached
empty slabs keep the cache and its memory around indefinitely.
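The direction this changelog points at is to make a dying cache stop
caching empty slabs altogether. Below is a compile-able user-space sketch
of that idea with an invented partial-list representation (the real code
works on the cache's node partial lists): drop the keep-around threshold
to zero and discard every slab that holds no objects.

	#include <stdio.h>
	#include <stdlib.h>

	struct slab {
		unsigned int inuse;	/* objects still allocated from this slab */
		struct slab *next;
	};

	struct kmem_cache {
		unsigned int min_partial;	/* how many partial slabs to keep */
		struct slab *partial;		/* singly linked partial list (model) */
	};

	/* Discard every empty slab on the (modelled) partial list of a dead cache. */
	static void shrink_dead_cache(struct kmem_cache *s)
	{
		struct slab **link = &s->partial;

		s->min_partial = 0;		/* never keep empty slabs again */

		while (*link) {
			struct slab *slab = *link;

			if (slab->inuse == 0) {
				*link = slab->next;	/* unlink ... */
				free(slab);		/* ... and give the memory back */
			} else {
				link = &slab->next;	/* still has live objects, keep */
			}
		}
	}

	int main(void)
	{
		struct kmem_cache s = { .min_partial = 5, .partial = NULL };
		struct slab *empty = calloc(1, sizeof(*empty));
		struct slab *busy = calloc(1, sizeof(*busy));

		busy->inuse = 3;	/* half-used slab stays on the list */
		empty->next = busy;
		s.partial = empty;

		shrink_dead_cache(&s);
		printf("remaining head inuse = %u\n", s.partial ? s.partial->inuse : 0);
		return 0;
	}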