On 01/04/2020 10:54, Bertrand Marquis wrote:
On 1 Apr 2020, at 09:30, Julien Grall <jul...@xen.org
<mailto:jul...@xen.org>> wrote:
On 01/04/2020 01:57, Stefano Stabellini wrote:
On Mon, 30 Mar 2020, Julien Grall wrote:
Hi Stefano,
On 30/03/2020 17:35, Stefano Stabellini wrote:
On Sat, 28 Mar 2020, Julien Grall wrote:
Hi Stefano,
On 27/03/2020 02:34, Stefano Stabellini wrote:
This is a simple implementation of GICD_ICACTIVER / GICD_ISACTIVER
reads. It doesn't take into account the latest state of interrupts on
other vCPUs. Only the current vCPU is up-to-date. A full solution is
not possible because it would require synchronization among all
vCPUs,
which would be very expensive in terms or latency.
Your sentence suggests you have numbers showing that correctly
emulating the registers would be too slow. Mind sharing them?
No, I don't have any numbers. Would you prefer a different wording or a
better explanation? I also realized there is a typo in there (or/of).
Let me start with this: I think correctness is more important than speed.
So I would have expected your commit message to contain some facts about
why synchronization is going to be slow and why this is a problem.
To give you a concrete example, the implementation of set/way
instructions is really slow (it could take a few seconds depending on
the setup). However, this was fine because not implementing them
correctly would have a greater impact on the guest (corruption), and
they are not used often.
I don't think the performance in our case will be in the same order of
magnitude. It is most likely to be in the range of milliseconds (if not
less), which I think is acceptable for emulation (particularly for the
vGIC) and the current uses.
Writing on the mailing list some of our discussions today.
Correctness is not just about compliance with a specification; it is
also about not breaking guests. Introducing latency in the range of
milliseconds, or hundreds of microseconds, would break any latency
sensitive workloads. We don't have numbers, so we don't know for certain
the effect that your suggestion would have.
You missed part of the discussion. I don't disagree that latency is
important. However, if an implementation is only 95% reliable, then it
means 5% of the time your guest may break (corruption, crash,
deadlock...). At which point the latency is the last of your concerns.
It would be interesting to have those numbers, and I'll add to my TODO
list to run the experiments you suggested, but I'll put it on the
back-burner (from a Xilinx perspective it is low priority as no
customers are affected).
How about we get a correct implementation merged first and then discuss
optimization? This would allow the community to check whether there is
actually noticeable latency in their workloads.
Hi,
Hi,
I am not sure that pushing something with a performance impact to later
fix it is the right approach here.
The patch is an improvement compared to the current code and it can be
further improved later to handle more cases (other cores).
If you read my other answer on this patch, you will see that this is
going to introduce a deadlock in the guest using multiple vCPUs. How is
it an improvement compared to today?
If we really have to sync all vCPUs here, this will cost a lot, and the
result will still be a status from the past, because nothing ensures
that the value is still valid by the time the guest gets it back.
I hope you are aware that the problem is exactly the same on the
hardware. As soon as you read ISACTIVER, the value may no longer be
correct. So I don't see the issue with having an outdated value here.
As I said to Stefano yesterday, I would be happy to compromise as long
as the implementation does not introduce an outright deadlock in the guest.
At the moment, I don't have a better approach than kick all the vCPUs.
Feel free to suggest a better one.
Cheers,
--
Julien Grall