Julien Grall writes:

> Hi Volodymyr,
>
> On 9/11/19 7:53 PM, Volodymyr Babchuk wrote:
>>
>> Julien Grall writes:
>>
>>> Hi Volodymyr,
>>>
>>> On 8/23/19 7:48 PM, Volodymyr Babchuk wrote:
>>>> Now we have a limit on the size of a single shared buffer, so we
>>>> can be sure that one call to free_optee_shm_buf() will not free
>>>> all MAX_TOTAL_SMH_BUF_PG pages at once. Thus, we can now check
>>>> hypercall_preempt_check() in the loop inside
>>>> optee_relinquish_resources(), and this ensures that we do not miss
>>>> a preemption point.
>>>
>>> I am not sure I understand the correlation between the two
>>> sentences. Even if previously the guest could pin up to
>>> MAX_TOTAL_SHM_BUF_PG in one call, a well-behaved guest would end up
>>> doing multiple calls, and therefore preemption would have been
>>> useful.
>> Looks like now I'm the one who doesn't understand you.
>>
>> I'm talking about shared buffers. We have limited a shared buffer to
>> some reasonable size. There are no badly- or well-behaved guests in
>> this context, because a guest can't share one big buffer across
>> multiple calls. In other words, if a guest *needs* to share a 512MB
>> buffer with OP-TEE, it is forced to do this in one call. But we are
>> forbidding big buffers right now.
>>
>> optee_relinquish_resources() is called during domain destruction. At
>> this point we can have a number of still-living shared buffers, each
>> of which is no bigger than 512 pages. Thanks to this, we can call
>> hypercall_preempt_check() only in optee_relinquish_resources(), and
>> not in free_optee_shm_buf().
>
> I understand what you mean; however, my point is that this patch does
> not depend on the previous patch. Even if this patch goes in alone,
> you will still improve things for well-behaved guests. For an
> ill-behaved guest, the problem will stay the same, so no change
> there.
>
Ah, I see now. Okay, I'll rework the commit description.
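
For the record, the loop being discussed is roughly the following. This
is a simplified sketch rather than the exact mediator code; apart from
optee_relinquish_resources(), free_optee_shm_buf() and
hypercall_preempt_check(), the struct and field names here are
illustrative.

static int optee_relinquish_resources(struct domain *d)
{
    struct optee_domain *ctx = d->arch.tee;  /* illustrative field name */
    struct optee_shm_buf *buf, *tmp;

    if ( !ctx )
        return 0;

    /*
     * Every shared buffer is bounded in size (no more than 512 pages),
     * so checking for preemption once per buffer is enough;
     * free_optee_shm_buf() itself needs no preemption check.
     */
    list_for_each_entry_safe( buf, tmp, &ctx->optee_shm_buf_list, list )
    {
        if ( hypercall_preempt_check() )
            return -ERESTART; /* relinquish will be retried later */

        free_optee_shm_buf(ctx, buf);
    }

    return 0;
}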

-- 
Volodymyr Babchuk at EPAM