On 22/05/2019 14:34, Jan Beulich wrote:
>>>> On 22.05.19 at 12:19, <jgr...@suse.com> wrote:
>> On 22/05/2019 12:10, Jan Beulich wrote:
>>>>>> On 22.05.19 at 11:45, <jgr...@suse.com> wrote:
>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>> @@ -3185,22 +3185,6 @@ static enum hvm_translation_result __hvm_copy(
>>>>  
>>>>      ASSERT(is_hvm_vcpu(v));
>>>>  
>>>> -    /*
>>>> -     * XXX Disable for 4.1.0: PV-on-HVM drivers will do grant-table ops
>>>> -     * such as query_size. Grant-table code currently does copy_to/from_guest
>>>> -     * accesses under the big per-domain lock, which this test would disallow.
>>>> -     * The test is not needed until we implement sleeping-on-waitqueue when
>>>> -     * we access a paged-out frame, and that's post 4.1.0 now.
>>>> -     */
>>>> -#if 0
>>>> -    /*
>>>> -     * If the required guest memory is paged out, this function may sleep.
>>>> -     * Hence we bail immediately if called from atomic context.
>>>> -     */
>>>> -    if ( in_atomic() )
>>>> -        return HVMTRANS_unhandleable;
>>>> -#endif
>>>
>>> Dealing with this TODO item is of course much appreciated, but
>>> should it really be deleted altogether? The big-domain-lock issue
>>> is gone afair, in which case dropping the #if 0 would seem
>>> possible to me, even if it's not strictly needed without the sleep-
>>> on-waitqueue behavior mentioned.
>>
>> The question is whether it is worth keeping, as it would require
>> keeping preempt_count() as well.
> 
> Well, personally I think keeping it is a small price to pay. But seeing
> Andrew's R-b he clearly thinks differently. And just to be clear - I
> don't really want to veto this change, as at the same time it's also
> easy enough to put back if need be. But I'd like this to be given a
> 2nd consideration at least.

Completely understandable.

I just stumbled over that when I needed to introduce rcu_read_lock()
usage in some hot paths for my core scheduling series and wanted to
understand the performance implications of adding those calls.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel