I compared builds with and without x86-put-l1e-foreign-flush.patch from
https://lists.xenproject.org/archives/html/xen-devel/2017-04/msg02945.html.
I observed no measurable difference between these builds with guest RAM values
of 4G, 8G and 14G for the following operations:
- time xe vm-start
- time xe vm-shutdown
- vm downtime during "xe vm-migrate" (as measured by pinging the vm during
migration and checking for how long pings failed while both domains were
paused; see the sketch after this list)
- time xe vm-migrate # for HVM guests (e.g. win7 and win10)
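As a rough illustration of the downtime measurement, the guest can be pinged in
a tight loop from a third machine, timestamping every failed probe; the span of
the failure burst around the switch-over approximates the downtime. A minimal
sketch, assuming the guest answers ICMP at 10.0.0.42 (the address and probe
interval are illustrative, not taken from the setup above):

    # illustrative guest address
    GUEST_IP=10.0.0.42
    # probe every 0.2s; log a timestamp for each lost reply
    while true; do
        if ! ping -c 1 -W 1 "$GUEST_IP" >/dev/null 2>&1; then
            date +%s.%N >> ping_failures.log
        fi
        sleep 0.2
    done

The downtime is then read off as the span of consecutive failure timestamps
that coincides with the window in which both domains are paused.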
But I observed a difference in the duration of "time xe vm-migrate" for PV
guests (e.g. centos68, debian70, ubuntu1204). For centos68, for instance, I
obtained the following values on a machine with an Intel E3-1281v3 3.7GHz CPU,
averaged over 10 runs for each data point:
| Guest RAM | no patch | with patch | difference | diff/RAM |
| 14GB      | 10.44s   | 13.46s     | 3.02s      | 0.22s/GB |
| 8GB       | 6.46s    | 8.28s      | 1.82s      | 0.23s/GB |
| 4GB       | 3.85s    | 4.74s      | 0.89s      | 0.22s/GB |
From these numbers, with the patch applied, VM migration of a PV guest looks
like it takes roughly an extra 1s for each extra 5GB of guest RAM (0.22s/GB is
about 1s per 4.5GB). The VMs are mostly idle during migration. At this point,
it is not clear to me why this difference is only visible during VM migration
(as opposed to VM start, for example), and only for PV guests (as opposed to
HVM guests).
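For reference, each data point above comes from timing "xe vm-migrate" over
repeated runs; a minimal sketch of such a loop, assuming a pool with two hosts
(host1, host2) and a guest named centos68 (the names and the live=true flag
are illustrative):

    VM=centos68    # illustrative guest name
    SRC=host1      # illustrative source host
    DST=host2      # illustrative destination host
    RUNS=10
    total=0
    for i in $(seq "$RUNS"); do
        start=$(date +%s.%N)
        xe vm-migrate vm="$VM" host="$DST" live=true
        end=$(date +%s.%N)
        t=$(echo "$end - $start" | bc -l)
        echo "run $i: ${t}s"
        total=$(echo "$total + $t" | bc -l)
        # migrate back (untimed) so every timed run starts from the same host
        xe vm-migrate vm="$VM" host="$SRC" live=true
    done
    echo "average over $RUNS runs: $(echo "scale=2; $total / $RUNS" | bc -l)s"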
Marcus
-
Anyway, even applying all of these patches would not alleviate Code 43.
To be more specific, all NVidia drivers up to 364.72 would BSOD on boot
(SYSTEM_SERVICE_EXCEPTION), and newer drivers (368.22+) would cause Code
43. This happens on both Windows 7 Pro and 8.1 VMs. Result on qemu-xen
and -
iops with single queue to 230K iops with 8
queues), and no regressions were visible in any measurement performed.
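For context, iops figures of this kind are typically gathered inside the guest
with a tool such as fio; a minimal sketch of a 4k random-read job against a PV
block device, with every parameter illustrative rather than taken from the
report above:

    # 4k random reads for 60s, 8 parallel jobs against an illustrative xvdb device
    fio --name=randread --filename=/dev/xvdb --direct=1 --rw=randread \
        --bs=4k --iodepth=32 --numjobs=8 --runtime=60 --time_based \
        --group_reporting

Note that numjobs only controls fio's parallelism; the number of queues used by
the PV disk is set up between frontend and backend, not by fio.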
Marcus
e in (B), and they
cancel each other out if all block sizes are considered together. For
random reads, 8-page rings were similar or superior to 1-page rings in
all tested conditions.
All things considered, we believe that the multi-page ring patches improve
storage performance (apart from case (A))