>
>On 2018/10/08 15:14, Yong-Taek Lee wrote:
>>> On 2018/10/08 10:19, Yong-Taek Lee wrote:
>>>> @@ -1056,6 +1056,7 @@ static int __set_oom_adj(struct file *file, int oom_adj, bool legacy)
>>>>         struct mm_struct *mm = NULL;
>>>>         struct task_struct *task;
>>>>         int err = 0;
>>>> +       int mm_users = 0;
>>>>
>>>>         task = get_proc_task(file_inode(file));
>>>>         if (!task)
>>>> @@ -1092,7 +1093,8 @@ static int __set_oom_adj(struct file *file, int oom_adj, bool legacy)
>>>>                 struct task_struct *p = find_lock_task_mm(task);
>>>>
>>>>                 if (p) {
>>>> -                       if (atomic_read(&p->mm->mm_users) > 1) {
>>>> +                       mm_users = atomic_read(&p->mm->mm_users);
>>>> +                       if ((mm_users > 1) && (mm_users != get_nr_threads(p))) {
>>>
>>> How can this work (even before this patch)? When clone(CLONE_VM without 
>>> CLONE_THREAD/CLONE_SIGHAND)
>>> is requested, copy_process() calls copy_signal() in order to copy 
>>> sig->oom_score_adj and
>>> sig->oom_score_adj_min before calling copy_mm() in order to increment 
>>> mm->mm_users, doesn't it?
>>> Then, we will get two different "struct signal_struct" with different 
>>> oom_score_adj/oom_score_adj_min
>>> but one "struct mm_struct" shared by two thread groups.
>>>
>>
>> Are you talking about the race between __set_oom_adj and copy_process?
>> If so, I agree with your opinion. __set_oom_adj cannot set oom_score_adj
>> properly for the copied process if it checks mm_users before copy_process
>> calls copy_mm after copy_signal. Please correct me if I misunderstood
>> anything.
>
> You understand it correctly.
>
> Reversing copy_signal() and copy_mm() is not sufficient either. We need to
> use a read/write lock (read lock for copy_process() and write lock for
> __set_oom_adj()) in order to make sure that the thread created by clone()
> becomes reachable from the for_each_process() path in __set_oom_adj().
>

Thank you for your suggestion. But I think it would be better to separate
this into two issues and handle them independently, because there is no
dependency between the race issue and my patch. As I already explained, the
for_each_process() path is meaningless if there is only one thread group with
many threads (mm_users > 1 but no other thread group sharing the same mm).
Do you have any other idea to avoid the meaningless loop? For reference, a
small user-space sketch of the shared-mm, two-thread-group case (the one the
for_each_process() walk exists for) follows below.
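
Here is a minimal user-space sketch (my own illustration, not part of the
patch; the stack size, messages, and helper names are arbitrary):
clone(CLONE_VM) without CLONE_THREAD gives two tasks that share one mm_struct
but belong to different thread groups, so they have separate signal_structs
and separate oom_score_adj values, and mm_users != get_nr_threads() for
either of them. Writing oom_score_adj on one of them should then reach the
other only via the for_each_process() walk, which is exactly the case my
check must not skip.

/* Demo: one mm shared by two thread groups (CLONE_VM without CLONE_THREAD). */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

static char child_stack[64 * 1024];	/* arbitrary stack size for the demo */

static int child_fn(void *arg)
{
	/* Same address space as the parent, but a separate thread group. */
	printf("child : pid=%d\n", (int)getpid());
	pause();
	return 0;
}

int main(void)
{
	pid_t pid = clone(child_fn, child_stack + sizeof(child_stack),
			  CLONE_VM | SIGCHLD, NULL);

	if (pid < 0) {
		perror("clone");
		return 1;
	}
	printf("parent: pid=%d\n", (int)getpid());
	printf("compare /proc/%d/oom_score_adj with /proc/%d/oom_score_adj\n",
	       (int)getpid(), (int)pid);
	sleep(1);
	kill(pid, SIGKILL);
	waitpid(pid, NULL, 0);
	return 0;
}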

>>
>>>>                                 mm = p->mm;
>>>>                                 atomic_inc(&mm->mm_count);
>>>>                         }
>>
>
