----- On Sep 9, 2019, at 7:00 AM, Oleg Nesterov [email protected] wrote:

> On 09/08, Mathieu Desnoyers wrote:
>>
>> +static void sync_runqueues_membarrier_state(struct mm_struct *mm)
>> +{
>> +    int membarrier_state = atomic_read(&mm->membarrier_state);
>> +    bool fallback = false;
>> +    cpumask_var_t tmpmask;
>> +    int cpu;
>> +
>> +    if (atomic_read(&mm->mm_users) == 1 || num_online_cpus() == 1) {
>> +            WRITE_ONCE(this_rq()->membarrier_state, membarrier_state);
> 
> This doesn't look safe; the caller can migrate to another CPU after
> it calculates the per-cpu ptr.
> 
> I think you need to disable preemption or simply use this_cpu_write().

Good point! I'll use this_cpu_write() there and within
membarrier_exec_mmap(), which seems to be affected by the same problem.
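For clarity, the fix amounts to replacing the WRITE_ONCE()-through-this_rq() store with this_cpu_write(), which performs the per-CPU address calculation and the store as a single preemption-safe operation. A rough sketch of what the corrected branch might look like (not the actual patch; the `runqueues` per-CPU variable name and surrounding code are assumptions based on the quoted hunk):

/*
 * Sketch only. this_cpu_write() guarantees the per-CPU address
 * computation and the store target the CPU the task is running on,
 * so a migration between the two steps cannot leave us writing to
 * some other CPU's runqueue membarrier_state.
 */
if (atomic_read(&mm->mm_users) == 1 || num_online_cpus() == 1) {
	this_cpu_write(runqueues.membarrier_state, membarrier_state);
	return;
}

By contrast, `this_rq()` (or raw_cpu_ptr()/smp_processor_id()-based lookups) computes a pointer first; if the task is preempted and migrated before the subsequent WRITE_ONCE(), the store lands on the old CPU's runqueue.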

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
