On 09/25/2018 11:12 AM, Milan Boberic wrote:
> Hello guys,
Hi Milan,
> The mapping on my system is:
> dom0 has one vCPU and it is pinned to pCPU0
> domU also has one vCPU and it is pinned to pCPU2
Your platform has 4 CPUs, right? What do the others do? Just sit in
the idle loop?
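For reference, that kind of "hard" pinning is usually expressed in the
domU xl configuration. A minimal sketch, assuming an xl config file
(the name and file name are placeholders, not from your setup):

    # domU.cfg: a single vCPU hard-pinned to pCPU2
    name  = "domU"
    vcpus = 1
    cpus  = "2"    # hard affinity: this vCPU may only run on pCPU2

With sched=null there is no migration anyway, so the vCPU stays on the
pCPU it was assigned at creation.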
> I removed only vwfi=native and everything works fine. I can destroy
> and create a guest as many times as I want without any error (still
> using sched=null).
Thank you for testing, this is quite helpful to know that removing
vwfi=native makes a difference.
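For the record, a simple loop along these lines should be enough to
exercise the create/destroy cycle you describe (guest.cfg and the
domain name "domU" are placeholders for your actual setup):

    # Repeatedly create and destroy the guest: with vwfi=native the
    # destroy path was failing, without it every iteration is clean.
    for i in $(seq 1 20); do
        xl create guest.cfg
        xl destroy domU
    done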
> These are the Xen bootargs in the xen-overlay.dtsi file:
>
> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=768M
> bootscrub=0 maxcpus=1 dom0_max_vcpus=1 dom0_vcpus_pin=true
> timer_slop=0 core_parking=performance cpufreq=xen:performance
> sched=null";
Where does this command line come from?

There are options that do not make sense on Arm:
- maxcpus=1
- core_parking=performance
- cpufreq=xen:performance

None of those options are supported on Arm. The first one is quite
concerning, because it asks Xen to limit the number of pCPUs it uses.
That is not implemented today; if we ever added support, you would end
up with only 1 pCPU.
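FWIW, dropping those three options would leave a command line like
this, with everything else from your file unchanged:

    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=768M
    bootscrub=0 dom0_max_vcpus=1 dom0_vcpus_pin=true
    timer_slop=0 sched=null";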
> The whole xen-overlay.dtsi file is included in the attachment.
> The purpose of my work is to implement Xen on the UltraZed-EG board
> with maximum performance (and the lowest jitter, which is why I'm
> using the null scheduler and "hard" vCPU pinning); it is the subject
> of my master's thesis. By removing vwfi=native I'm not getting the
> best performance, right? vwfi=native should decrease interrupt
> latency by ~60%, as I read here:
> https://blog.xenproject.org/author/stefano-stabellini/
IIRC the test was not done with the null scheduler, so with it the
interrupt latency may be slightly better. However, there will still be
a performance impact, as the scheduler may decide to switch to the
idle vCPU.

It is possible to reduce the overhead of switching to the idle vCPU by
optimizing the context switch.

Anyway, vwfi=native should not affect destroying a guest. This is
probably a bug that should be fixed. I will reply to Dario's e-mail.
Cheers,
--
Julien Grall