On (Mon) 01 Sep 2014 [20:38:20], Zhang Haoyu wrote:
> >> Hi, all
> >>
> >> I started a VM with virtio-serial (default number of ports: 31) and
> >> found that virtio-blk performance degraded by about 25%; the problem
> >> is 100% reproducible.
> >> without virtio-serial:
> >> 4k-read-random 1186 IOPS
> >> with virtio-serial:
> >> 4k-read-random 871 IOPS
> >>
> >> But if I use the max_ports=2 option to limit the maximum number of
> >> virtio-serial ports, the I/O performance degradation is much less
> >> severe, about 5%.
> >>
> >> Also, IDE performance does not degrade when virtio-serial is present.
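For reference, the workaround above corresponds to a device option like the following. This is only a sketch: the disk image name and the rest of the invocation are placeholders, not the reporter's actual setup; only the max_ports=2 property comes from the report.

```shell
# Hypothetical QEMU invocation fragment: cap virtio-serial at 2 ports
# instead of the default 31, which reduced the degradation to ~5%.
qemu-system-x86_64 \
    -device virtio-serial-pci,max_ports=2 \
    -device virtio-blk-pci,drive=disk0 \
    -drive if=none,id=disk0,file=test.img,format=raw
```

Fewer ports means fewer virtqueues on the virtio-serial device, and hence fewer MSI-X vectors requested from the guest.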
> >
> >Pretty sure it's related to the MSI vectors in use. It's possible that
> >the virtio-serial device takes up all the available vectors in the
> >guest, leaving old-style IRQs for the virtio-blk device.
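One way to check that hypothesis on a Linux guest is /proc/interrupts: MSI/MSI-X vectors show up as "PCI-MSI" lines, while a device that fell back to a legacy interrupt shows an "IO-APIC" line. A sketch using fabricated sample data (the interrupt numbers, counts, and device names below are illustrative, not from the reporter's VM):

```shell
# Fabricated /proc/interrupts excerpt, for illustration only.
cat > interrupts.sample <<'EOF'
 24:   1023   PCI-MSI 65536-edge   virtio0-config
 25:  51234   PCI-MSI 65537-edge   virtio0-req.0
 11:    892   IO-APIC 11-fasteoi   virtio1
EOF

# Count MSI/MSI-X vectors in use; on a real guest, run these against
# /proc/interrupts instead of the sample file.
grep -c 'PCI-MSI' interrupts.sample
# List devices that fell back to a shared legacy IRQ line.
grep 'IO-APIC' interrupts.sample
```

If virtio-blk appears on an IO-APIC line only while virtio-serial is enabled, that would support the vector-exhaustion theory.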
> >
> I don't think so.
> I used iometer to test the 64k-read (or write) sequential case. If I
> disable virtio-serial dynamically via Device Manager -> virtio-serial
> -> Disable, performance improves by about 25% immediately; when I
> re-enable virtio-serial via Device Manager -> virtio-serial -> Enable,
> the degradation comes back, very noticeably.
> So I think it has nothing to do with legacy interrupt mode, right?
>
> I am going to compare perf top data on the QEMU process and perf kvm
> stat data with virtio-serial disabled vs. enabled in the guest, and
> also compare perf top data inside the guest in both states.
> Any ideas?
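The measurement plan above might look something like this on the host. This is a command sketch only: the pgrep pattern is a placeholder for the actual QEMU PID, and perf kvm stat needs a perf build with KVM tracepoint support.

```shell
# On the host: record VM exit reasons while the iometer workload runs,
# once with virtio-serial enabled and once with it disabled.
perf kvm stat record -p "$(pgrep -f qemu-system)" -- sleep 30
perf kvm stat report --event=vmexit

# Also profile the QEMU process itself to see where host CPU time goes.
perf top -p "$(pgrep -f qemu-system)"
```

Comparing the exit-reason histograms between the two runs should show whether the extra vectors or interrupt routing are costing exits.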
So it's a Windows guest; could it be something Windows-driver-specific,
then? Do you see the same on Linux guests too?
Amit
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html