Kamezawa Hiroyuki <kamezawa.hir...@jp.fujitsu.com> writes:

>>>>>
>>>>> We test RES_USAGE before taking hugetlb_lock.  What prevents some other
>>>>> thread from increasing RES_USAGE after that test?
>>>>>
>>>>> After walking the list we test RES_USAGE after dropping hugetlb_lock.
>>>>> What prevents another thread from incrementing RES_USAGE before that
>>>>> test, triggering the BUG?
>>>>
>>>> IIUC, the core cgroup code prevents a new task from being added to the
>>>> cgroup while we are in pre_destroy. Since we already check that the
>>>> cgroup doesn't have any tasks, RES_USAGE cannot increase in pre_destroy.
>>>>
>>>
>>>
>>> You're wrong here. We release cgroup_lock before calling pre_destroy and
>>> reacquire the lock after that, so a task can be attached to the cgroup in
>>> this interval.
>>>
>>
>> But that means rmdir can be racy, right? What happens if a task got added,
>> allocated a few pages and then moved out? We would still have a task count
>> of 0 but a few leftover pages, which we missed moving to the parent cgroup.
>>
>
> That's a problem, even if it's very unlikely.
> I'd like to look into it and fix the race in the cgroup layer,
> but I'm sorry, I'm a bit busy these days...
>

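To spell out the window we're worried about, here is my rough reading of the
rmdir path. This is heavily simplified and written from memory, so treat it as
a sketch of the ordering rather than the literal source:

static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
{
	struct cgroup *cgrp = dentry->d_fsdata;
	int ret;

	mutex_lock(&cgroup_mutex);
	if (atomic_read(&cgrp->count) || !list_empty(&cgrp->children)) {
		mutex_unlock(&cgroup_mutex);
		return -EBUSY;
	}
	mutex_unlock(&cgroup_mutex);

	/*
	 * Window: nothing stops a task from being attached here, charging
	 * a few hugetlb pages, and moving away again.  pre_destroy() then
	 * sees no tasks but a non-zero RES_USAGE.
	 */
	ret = cgroup_call_pre_destroy(cgrp);	/* ->pre_destroy() callbacks */
	if (ret)
		return ret;

	mutex_lock(&cgroup_mutex);
	/* ... recheck and tear the group down under the mutex ... */
	mutex_unlock(&cgroup_mutex);
	return 0;
}
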
How about moving that mutex_unlock(&cgroup_mutex) into the memcg pre_destroy
callback, along the lines of the sketch below? Could that be a patch for 3.5?
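
Roughly what I have in mind (completely untested, and the memcg function names
are from memory, so please double-check them): cgroup_rmdir() would keep
cgroup_mutex held across ->pre_destroy(), and memcg would drop and retake it
itself, since force_empty can sleep:

/* mm/memcontrol.c: memcg takes over the unlock that cgroup_rmdir() did */
static int mem_cgroup_pre_destroy(struct cgroup *cont)
{
	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
	int ret;

	/*
	 * Charge moving may sleep, so drop cgroup_mutex here instead
	 * of unconditionally in cgroup_rmdir().
	 */
	cgroup_unlock();
	ret = mem_cgroup_force_empty(memcg, false);
	cgroup_lock();

	return ret;
}

hugetlb_cgroup_pre_destroy() would then run with cgroup_mutex held, so no task
can be attached while it moves charges to the parent.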

-aneesh
 
