Following reports of degraded performance after a PV domU was migrated
from one dom0 to another, it turned out that this issue occurs with
every version of Xen and every version of the domU kernel.

The benchmark used is 'sysbench memory'. I hacked it up to report how
long the actual work takes, and that work loop takes longer to execute
after the domU is migrated. In my testing the loop
(memory_execute_event), which just writes 0 to an array of memory,
takes about 1200ns before migration and about 1500ns afterwards.
Overall sysbench reports 6500 MiB/sec before migration, but only
3350 MiB/sec afterwards.
The source of the modified test can be found here:
https://github.com/olafhering/sysbench/compare/master...pv
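
For reference, the hot loop amounts to something like the following
standalone sketch (my own minimal reconstruction, not the actual
sysbench code; buffer size and pass count are arbitrary):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

#define BUF_SIZE (1024 * 1024)  /* 1 MiB, arbitrary */
#define PASSES   1000

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
    size_t *buf = malloc(BUF_SIZE);
    size_t n = BUF_SIZE / sizeof(*buf);

    if (!buf)
        return 1;

    for (int i = 0; i < PASSES; i++) {
        uint64_t start = now_ns();

        /* the hot loop: write 0 to every word of the buffer,
           as memory_execute_event does on the write path */
        for (size_t j = 0; j < n; j++)
            buf[j] = 0;

        printf("pass %d: %llu ns\n", i,
               (unsigned long long)(now_ns() - start));
    }

    free(buf);
    return 0;
}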

This happens on several hosts. Whether the host is NUMA or not makes
no difference, nor does CPU pinning or pinning of the test pthreads.
It was initially reported with xen-4.4; I see it with staging too. The
guest kernel makes no difference either: several xenlinux- and
pvops-based variants all show the slowdown. Live migration to
localhost is affected as well.

The domU.cfg looks like this:
name='pv'
memory=1024
vcpus=4
cpus=[ "4", "5", "6", "7" ]
disk=[ 'file:/disk0.raw,xvda,w', ]
vif=[ 'bridge=br0,type=netfront', ]
vfb=[ 'type=vnc,vncunused=1,keymap=de', ]
serial="pty"
kernel="/usr/lib/grub2/x86_64-xen/grub.xen"

Xen is booted with "console=com1 com1=115200 loglvl=all guest_loglvl=all
dom0_max_vcpus=2 dom0_vcpus_pin=on".

I wonder what the cause might be, and how to check where the time is
spent.
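
As a first step one could run a standalone copy of the loop above
inside the guest, migrate, and compare the per-pass numbers directly
(hypothetical file names; 'pv' is the domain name from the config
above):

# in the domU, before migration
gcc -O2 -o memloop memloop.c
./memloop > before.txt

# on the source dom0; migration to localhost also reproduces it
xl migrate pv localhost

# in the domU again, after migration
./memloop > after.txt

If perf is available in the guest, its software counters might show
whether the extra time comes from faults rather than from the writes
themselves, e.g. 'perf stat -e task-clock,page-faults,minor-faults
./memloop'; hardware counters are typically not usable in a PV domU.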


Olaf
