Hi,
Recently I encountered a strange performance problem with APIC virtualization.
My host has Intel(R) Xeon(R) E7-4890 v2 CPUs installed, which support
APIC virtualization and x2APIC, and there are 4 sockets * 15
cores per socket = 60 cores available for VMs. There is also an SSD disk
on the host, and the host supports VT-d, so I can pass this SSD through to a VM.
I created a VM with 60 vCPUs, 400G of memory, and the SSD device assigned.
I pin these vCPUs 1:1 to physical CPUs, and in this VM only 15 vCPUs are
kept online.
The problem is: when APICv is turned on, a significant performance
decrease can be observed, and it seems related to CPU topology.
I have tested the following cases:
apicv=1:
1) ONLINE VCPU {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14}
PIN TO
PCPU {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14}
2) ONLINE VCPU {0,5,9,13,17,21,25,29,33,37,41,45,49,53,57}
PIN TO
PCPU {0,5,9,13,17,21,25,29,33,37,41,45,49,53,57}
apicv=0:
3) ONLINE VCPU {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14}
PIN TO
PCPU {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14}
4) ONLINE VCPU {0,5,9,13,17,21,25,29,33,37,41,45,49,53,57}
PIN TO
PCPU {0,5,9,13,17,21,25,29,33,37,41,45,49,53,57}
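For reference, the pinning and onlining above could be scripted roughly as
follows (the domain name "vm1" is an assumption; the commands are printed
for review rather than executed):

```shell
# Sketch of the setup above (domain name "vm1" is an assumption).
# Commands are printed for review rather than executed directly.

# On the host: pin every vCPU N 1:1 to pCPU N.
PIN_CMDS=""
v=0
while [ "$v" -lt 60 ]; do
    PIN_CMDS="${PIN_CMDS}xl vcpu-pin vm1 $v $v
"
    v=$((v + 1))
done
printf '%s' "$PIN_CMDS"

# Inside the guest: offline everything except the case-2) vCPU set.
KEEP=" 0 5 9 13 17 21 25 29 33 37 41 45 49 53 57 "
OFFLINE_CMDS=""
c=1                      # cpu0 usually cannot be offlined
while [ "$c" -lt 60 ]; do
    case "$KEEP" in
        *" $c "*) ;;     # in the keep set: leave online
        *) OFFLINE_CMDS="${OFFLINE_CMDS}echo 0 > /sys/devices/system/cpu/cpu$c/online
" ;;
    esac
    c=$((c + 1))
done
printf '%s' "$OFFLINE_CMDS"
```

Piping the printed lines to sh (on the host and in the guest, respectively)
would apply them; the case-1) layout just uses vCPUs 0-14 instead.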
The results are (lower is better):
1) 989
2) 2209
3) 978
4) 1130
The workload is a database test case running on a SUSE 11 SP3 system in the
VM, and I traced that the "read" and "write" syscalls get much slower in
case 2). I have disabled NUMA in the BIOS, so it seems APICv causes this
bad performance when using CPUs in different nodes.
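To quantify the read/write slowdown independently of the database, a crude
microbenchmark like the one below (file path and sizes are just examples)
can be run identically in each of the four configurations; `strace -c` on
the database process would give a per-syscall time breakdown as well:

```shell
# Crude read/write syscall microbenchmark; run the same commands under
# each configuration and compare the elapsed times that dd reports.
dd if=/dev/zero of=/tmp/ddtest bs=4k count=10000 conv=fsync 2>/tmp/ddstats
cat /tmp/ddstats                      # write-path time and throughput
dd if=/tmp/ddtest of=/dev/null bs=4k 2>&1 | tail -n 1   # read-path time
rm -f /tmp/ddtest
```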
Can anyone shed some light on this?
BTW, I am using Xen 4.1.5 with APICv backported, so I am not sure
whether something was broken in the backport or whether APICv just
behaves this way.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel