On 08/03/2019 11:15, Julien Grall wrote:
> Hi Juergen,
> 
> On 3/8/19 6:28 AM, Juergen Gross wrote:
>> On 07/03/2019 19:00, Julien Grall wrote:
>>> Hi Roger,
>>>
>>> On 07/03/2019 17:15, Roger Pau Monné wrote:
>>>> On Thu, Mar 07, 2019 at 04:36:59PM +0000, Julien Grall wrote:
>>>>> Hi Roger,
>>>>>
>>>>> Thank you for the answer.
>>>>>
>>>>> On 07/03/2019 16:16, Roger Pau Monné wrote:
>>>>>> On Thu, Mar 07, 2019 at 03:17:54PM +0000, Julien Grall wrote:
>>>>>>> Hi Andrii,
>>>>>>>
>>>>>>> On 07/03/2019 14:34, Andrii Anisov wrote:
>>>>>>>> On 07.03.19 16:02, Julien Grall wrote:
>>>>>>>>>>      - IMHO, this implementation is simpler and cleaner than
>>>>>>>>>> what I have for runstate mapping on access.
>>>>>>>>>
>>>>>>>>> Did you implement it using access_guest_memory_by_ipa?
>>>>>>>> Not exactly: access_guest_memory_by_ipa() has no implementation
>>>>>>>> for x86, but the code here is modeled on it.
>>>>>>>
>>>>>>> For HVM, the equivalent function is hvm_copy_to_guest_phys. I
>>>>>>> don't know what the interface would be for PV. Roger, any idea?
>>>>>>
>>>>>> For PV I think you will have to use get_page_from_gfn, check the
>>>>>> permissions, map it, write and unmap it. The same flow would also
>>>>>> work
>>>>>> for HVM, so I'm not sure if there's much point in using
>>>>>> hvm_copy_to_guest_phys. Or you can implement a generic
>>>>>> copy_to_guest_phys helper that works for both PV and HVM.
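A minimal sketch of such a generic helper, following the
get_page_from_gfn/map/write/unmap flow described above (the name
copy_to_guest_phys and the error handling are illustrative; for
simplicity it assumes the destination does not cross a page boundary):

static int copy_to_guest_phys(struct domain *d, paddr_t gpa,
                              const void *buf, unsigned int len)
{
    unsigned long gfn = paddr_to_pfn(gpa);
    unsigned int offset = gpa & ~PAGE_MASK;
    p2m_type_t p2mt;
    struct page_info *page;
    void *va;

    /* Illustrative simplification: no page-crossing support. */
    if ( offset + len > PAGE_SIZE )
        return -EINVAL;

    /* Takes a reference on the page, dropped by put_page() below. */
    page = get_page_from_gfn(d, gfn, &p2mt, P2M_ALLOC);
    if ( !page )
        return -EINVAL;

    /* Only allow writes to regular guest RAM. */
    if ( !p2m_is_ram(p2mt) )
    {
        put_page(page);
        return -EINVAL;
    }

    va = __map_domain_page(page);
    memcpy(va + offset, buf, len);
    unmap_domain_page(va);
    put_page(page);

    return 0;
}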
>>>>>>
>>>>>> Note that for translated guests you will have to walk the HAP page
>>>>>> tables for each vCPU for each context switch, which I think will be
>>>>>> expensive in terms of performance (I might be wrong however, since I
>>>>>> have no proof of this).
>>>>>
>>>>> AFAICT, we already walk the page tables with the current
>>>>> implementation, so this should be no different, except that we will
>>>>> not need to walk the guest PT here. No?
>>>>
>>>> Yes, the current implementation is even worse because it walks both
>>>> the guest page tables and the HAP page tables in the HVM case. It
>>>> would be interesting IMO if we could avoid walking any of those page
>>>> tables.
>>>>
>>>> I see you have concerns about permanently mapping the runstate area,
>>>> so I'm not going to oppose. That said, even with only 1G of VA space
>>>> you can map plenty of runstate areas, and given that this is 32-bit
>>>> hardware I'm not sure you will ever have so many vCPUs that you run
>>>> out of VA space for runstate areas.
>>>
>>> Actually the vmap is only 768MB. The vmap is at the moment used for
>>> mapping:
>>>      - MMIO devices (through ioremap)
>>>      - event channel pages
>>>
>>> As the runstate is far smaller than a page, this sounds like a waste
>>> of memory for a benefit that has not yet been shown. Indeed, the
>>> numbers provided by Andrii show either worse or similar performance.
>>>
>>> But TBH, I am not expecting a really clear performance improvement on
>>> Arm, as there is a lot going on in the context switch.
>>>
>>>>
>>>> That being said, if the implementation turns out to be more
>>>> complicated because of this permanent mapping, walking the guest HAP
>>>> page tables is certainly no worse than what's done ATM.
>>>
>>> To be honest, I am not fully against always mapping the runstate in
>>> Xen, but I need data to show it is worth it. So far, the promised
>>> performance is not there and the implementation is not foolproof yet.
>>>
>>> If we want to keep the runstate mapped permanently, then we need to
>>> add either a lock or refcounting, so the page does not disappear
>>> during a context switch if the runstate happens to be updated
>>> concurrently (via the hypercall).
>>>
>>> This may increase the complexity of the implementation (not sure by
>>> how much, though).
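For illustration, one possible shape of the lock-based variant; the
runstate_lock, runstate_page and runstate_va fields on struct vcpu are
hypothetical, not existing Xen code:

/*
 * Hypothetical fields on struct vcpu:
 *   spinlock_t runstate_lock;            serialises (re)mapping vs. update
 *   struct page_info *runstate_page;     reference held while mapped
 *   struct vcpu_runstate_info *runstate_va;
 */

/* Tear down the permanent mapping, dropping the page reference. */
static void runstate_area_unmap(struct vcpu *v)
{
    spin_lock(&v->runstate_lock);
    if ( v->runstate_va )
    {
        unmap_domain_page_global(v->runstate_va);
        put_page(v->runstate_page);
        v->runstate_va = NULL;
    }
    spin_unlock(&v->runstate_lock);
}

/*
 * Context-switch path: the lock keeps the mapping from being torn
 * down by a concurrent re-registration via the hypercall.
 */
static void update_runstate_area(struct vcpu *v)
{
    spin_lock(&v->runstate_lock);
    if ( v->runstate_va )
        memcpy(v->runstate_va, &v->runstate, sizeof(v->runstate));
    spin_unlock(&v->runstate_lock);
}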
>>>
>>> Another solution is to prevent the runstate area from being updated,
>>> but I think that would just put a bit more burden on the guest OS.
>>
>> Not sure about other systems, but the current Linux kernel registers
>> the runstate area for other cpus only if those are not up, so there is
>> no way the runstate of such a foreign vcpu could be updated while it is
>> being registered. Of course this would need to be checked (e.g.
>> returning -EBUSY when registering the runstate of an active foreign
>> vcpu).
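A rough sketch of such a check, as it might sit at the top of the
VCPUOP_register_runstate_memory_area handler (purely illustrative, not
existing Xen code; using _VPF_down as the "vCPU is down" test is an
assumption):

/* Hypothetical guard for VCPUOP_register_runstate_memory_area. */
static int runstate_register_check(const struct vcpu *v)
{
    /*
     * A vCPU may always register its own area.  For a foreign vCPU,
     * require it to be down, so the area cannot move while another
     * pCPU may be updating it during a context switch.
     */
    if ( v != current && !test_bit(_VPF_down, &v->pause_flags) )
        return -EBUSY;

    return 0;
}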
> 
> I was under the impression that xen_vcpu_restore() (see
> arch/x86/xen/enlighten.c) may update the runstate while the vCPU is up
> (at least on pre-Xen 4.5). Did I miss anything?

Pre-Xen 4.5: yes. But I don't think you want to introduce the new
semantics in Xen 4.4 and older?


Juergen
