On 07.03.19 16:02, Julien Grall wrote:
So I assume you are referring to your preference not to have the runstate area mapped, because of the vmap space it consumes on arm64. Also, along that thread you mentioned that the guest might change the gva mapping, which is irrelevant to registration by physical address.
My reasons to have the runstate area mapped are the following:
- Introducing a new interface, we are not burdened with legacy, so we are in a position to impose requirements. In this case, that the runstate area must not cross a page boundary (a sketch of such a check follows below).
- The global mapping used here does not consume vmap space on arm64. It seems to me the x86 guys are OK with mapping as well; at least Roger suggested it from the beginning, so it should be OK for them too.
You left arm32 out of your equations here...
Yes, I left arm32 aside.
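
A minimal sketch of the page-boundary check mentioned above (the function name and its placement are assumptions, not the actual patch):

/*
 * Sketch only: refuse a runstate area registered by guest physical
 * address if it would cross a page boundary, so that a single page
 * mapping is always sufficient.
 */
static int runstate_area_check(paddr_t gpa)
{
    unsigned long off = gpa & ~PAGE_MASK;

    /* The whole vcpu_runstate_info must fit within one page. */
    if ( off + sizeof(struct vcpu_runstate_info) > PAGE_SIZE )
        return -EINVAL;

    return 0;
}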
- In case the domain registers the runstate area by physical address, it cannot change the mapping.
This is not entirely correct. The domain cannot change the mapping under our feet, but it can still change it via the hypercall. There is nothing preventing that with the current hypercall, nor with the one you propose.
Could you please describe the scenario in more detail, and the interface used for it?
Also, vcpu_info needs protection from this. Do you agree?
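
To make the concern concrete: whenever the guest issues the registration hypercall again (or the domain is destroyed), the hypervisor has to tear down the previous mapping, and that teardown must be serialised against the context-switch path writing the area. A sketch, with hypothetical field and lock names:

/*
 * Hypothetical fields for this sketch:
 *   v->runstate_guest.page - page backing the registered area
 *   v->runstate_guest.va   - pointer into its global mapping
 *   v->runstate_guest_lock - serialises against the context-switch path
 */
static void unmap_runstate_area(struct vcpu *v)
{
    spin_lock(&v->runstate_guest_lock);

    if ( v->runstate_guest.page )
    {
        unmap_domain_page_global(
            (void *)((unsigned long)v->runstate_guest.va & PAGE_MASK));
        put_page(v->runstate_guest.page);
        v->runstate_guest.page = NULL;
    }

    spin_unlock(&v->runstate_guest_lock);
}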
Well, the numbers you showed in the other thread didn't show any improvement at all... So please explain why we should call map_domain_page_global() here and use more vmap on arm32.
I'm not expecting vmap to be a practical problem for arm32-based systems.
With the current implementation, the numbers are equal to those I have for runstate mapping on access.
But I'm not sure my test setup is able to distinguish the difference.
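
For reference, the map-once-at-registration path under discussion would look roughly like the sketch below (Arm-flavoured, error handling trimmed, reusing the hypothetical runstate_guest fields from the sketch above):

static int map_runstate_area(struct vcpu *v, paddr_t gpa)
{
    p2m_type_t t;
    struct page_info *pg;
    void *va;

    /* Drop any previous registration first (see the teardown sketch above). */
    unmap_runstate_area(v);

    pg = get_page_from_gfn(v->domain, gfn_x(gaddr_to_gfn(gpa)), &t, P2M_ALLOC);
    if ( !pg || !p2m_is_ram(t) )
    {
        if ( pg )
            put_page(pg);
        return -EINVAL;
    }

    /* One global mapping per vCPU, held until the area is unregistered. */
    va = map_domain_page_global(page_to_mfn(pg));
    if ( !va )
    {
        put_page(pg);
        return -ENOMEM;
    }

    v->runstate_guest.page = pg;
    v->runstate_guest.va = (char *)va + (gpa & ~PAGE_MASK);

    return 0;
}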
- IMHO, this implementation is simpler and cleaner than what I have for
runstate mapping on access.
Did you implement it using access_guest_memory_by_ipa?
Not exactly: access_guest_memory_by_ipa() has no implementation for x86, but the code is built around it.
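
For contrast, a sketch of the "translate on every access" variant being compared here, on Arm (access_guest_memory_by_ipa() is the existing Arm helper; the runstate_guest_gpa field is an assumption):

static void update_runstate_by_gpa(struct vcpu *v)
{
    struct vcpu_runstate_info runstate = v->runstate;

    /*
     * Translate and copy on every context switch instead of keeping a
     * long-lived mapping of the area.
     */
    if ( access_guest_memory_by_ipa(v->domain, v->runstate_guest_gpa,
                                    &runstate, sizeof(runstate),
                                    true /* is_write */) )
        gprintk(XENLOG_WARNING, "Failed to update runstate area\n");
}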
But I don't think the implementation you suggest will be that much simpler once you deal with the problem above.
I missed that problem. Will look at it.
--
Sincerely,
Andrii Anisov.