>>> On 15.02.19 at 14:51, <[email protected]> wrote:

> 
>> On Feb 15, 2019, at 1:47 PM, Andrew Cooper <[email protected]> wrote:
>> 
>> On 15/02/2019 13:37, George Dunlap wrote:
>>> 
>>>>> The one issue is that domain_pause_except_self() currently is actually a
>>>>> deadlock risk if two different vcpus start it at the same time.  I think
>>>>> the attached patch (compile-tested only) should fix this issue; after
>>>>> this patch you should be able to use domain_pause_except_self() in
>>>>> altp2m_set_domain_state instead.
>>>> There's one thing I don't really like here, which is a result of the
>>>> (necessary) re-use of the hypercall deadlock mutex: This
>>>> certainly poses the risk of getting called from a context where
>>>> the lock was already acquired. Therefore I'd like to suggest
>>>> using this lock in a recursive way (here and elsewhere).
>> 
>> I can't think of a use case where we would want to tolerate recursion on
>> the hypercall deadlock spinlock.
> 
> It sounds like Jan is specifically thinking that someone may (say) call 
> domctl_lock(), then afterwards call domain_pause_except_self().
> 
> Of course, that would deadlock immediately, so would probably get caught 
> before the patch even got to `git send-email`.

Indeed. The situation I'm worried about is if this was put on some
error path, which might never be hit in testing.

>  But it seems like a 
> reasonable thing someone might want to do.
> 
> OTOH, I’m fine with deferring making it recursive until someone discovers 
> that it needs to be.

Considering Andrew's remark about this "just" being a livelock
of a domain, I won't insist on the recursiveness, but I still think
it would be better, as it is less error prone down the road. If left
as is, it would perhaps be a good idea to at least leave a remark in
the description, so that someone running into the problem and finding
this commit won't have to guess why it is the way it is.

Jan


_______________________________________________
Xen-devel mailing list
[email protected]
https://lists.xenproject.org/mailman/listinfo/xen-devel