On 15.08.2022 14:16, Juergen Gross wrote:
> On 15.08.22 14:00, Jan Beulich wrote:
>> On 15.08.2022 13:55, Juergen Gross wrote:
>>> On 15.08.22 13:52, Jan Beulich wrote:
>>>> On 15.08.2022 13:04, Juergen Gross wrote:
>>>>> --- a/xen/common/sched/core.c
>>>>> +++ b/xen/common/sched/core.c
>>>>> @@ -3237,6 +3237,65 @@ out:
>>>>>        return ret;
>>>>>    }
>>>>>    
>>>>> +static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
>>>>> +{
>>>>> +    struct cpu_rm_data *data;
>>>>> +    const struct sched_resource *sr;
>>>>> +    unsigned int idx;
>>>>> +
>>>>> +    rcu_read_lock(&sched_res_rculock);
>>>>> +
>>>>> +    sr = get_sched_res(cpu);
>>>>> +    data = xmalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
>>>>> +    if ( !data )
>>>>> +        goto out;
>>>>> +
>>>>> +    data->old_ops = sr->scheduler;
>>>>> +    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
>>>>> +    data->ppriv_old = sr->sched_priv;
>>>>
>>>> Repeating a v1 comment:
>>>>
>>>> "At least from an abstract perspective, doesn't reading fields from
>>>>    sr require the RCU lock to be held continuously (i.e. not dropping
>>>>    it at the end of this function and re-acquiring it in the caller)?"
>>>>
>>>> Initially I thought you did respond to this in some way, but when
>>>> looking for a matching reply I couldn't find one.
>>>
>>> Oh, sorry.
>>>
>>> The RCU lock is protecting only the sr itself, not any data the pointers
>>> in the sr are referencing. So it is fine to drop the RCU lock after
>>> reading some of the fields from the sr and storing them in the
>>> cpu_rm_data memory.
>>
>> Hmm, interesting. "Protecting only the sr" then means what exactly?
>> Just its allocation, but not its contents?
> 
> Correct.
> 
>> Plus it's not just the pointers - sr->granularity also would better not
>> increase in the meantime ... Quite likely there's a reason why that also
>> cannot happen, yet even then I think a brief code comment might be
>> helpful here.
> 
> Okay, will add something like:
> 
> "Between schedule_cpu_rm_alloc() and the real cpu removal action the relevant
>   contents of struct sched_resource can't change, as the cpu in question is
>   locked against any other movement to or from cpupools, and the data copied
>   by schedule_cpu_rm_alloc() is cpupool specific."
> 
> Is that okay?

Well, I guess I need to leave this to the scheduler maintainers then. I
have to admit that it's not clear to me why all of sr->granularity,
sr->scheduler, or sr->sched_priv would be "cpupool specific". I may be
able to agree for sr->granularity, but the other two I thought were
scheduler data, not cpupool data. For sr->granularity in turn (but
perhaps also the other two fields) it's not obvious to me that pool
properties can't change in a racing manner.

Jan
