Ben-Ami Yassour wrote:
>>> (with higher CPU utilization).
>>
>> How much higher?
> Here are some numbers for running iperf -l 1M:
>
> e1000 NIC (behind a PCI bridge):
>
>                                  Bandwidth (Mbit/sec)   CPU utilization
>   Native OS                              771                  18%
>   Native OS with VT-d                    760                  18%
>   KVM VT-d                               390                  95%
>   KVM VT-d with direct mmio              770                  84%
>   KVM emulated                            57                 100%
> Comment: it's not clear to me why native Linux cannot get closer to 1 Gbit/sec
> with this NIC (I verified that it is not an external network issue). But
> clearly we shouldn't hope to get more than the host does with a KVM guest
> (especially when the guest and host are the same OS, as in this case...).
> e1000e NIC (onboard):
>
>                                  Bandwidth (Mbit/sec)   CPU utilization
>   Native OS                              915                  18%
>   Native OS with VT-d                    915                  18%
>   KVM VT-d with direct mmio              914                  98%
> Clearly we need to try to improve the CPU utilization, but I think this is
> good enough for the first phase.
Agree; part of the higher utilization is of course not the fault of the
device assignment code but rather ordinary virtualization overhead.
We'll have to tune this.
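For what it's worth, a per-run utilization figure like the ones in the tables
above can be derived from /proc/stat deltas around the benchmark. This is only
a sketch of an assumed methodology (the thread doesn't say how the column was
measured); in a real run the sleep would be replaced by the iperf transfer:

```shell
# Sample aggregate CPU counters before and after an interval and compute
# whole-system utilization. Assumed methodology, not the authors' actual tool.
read -r cpu user nice system idle rest < /proc/stat
t0_busy=$((user + nice + system)); t0_idle=$idle

sleep 1    # stand-in for the benchmark, e.g. iperf -c <server> -l 1M

read -r cpu user nice system idle rest < /proc/stat
busy=$((user + nice + system - t0_busy))
idled=$((idle - t0_idle))

# Utilization = busy jiffies / total jiffies over the interval.
util=$((100 * busy / (busy + idled)))
echo "CPU utilization over the interval: ${util}%"
```

Sampling /proc/stat (rather than per-process times) matters here because much
of the virtualization overhead lands in softirq/kernel context, not in the
iperf process itself.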
--
error compiling committee.c: too many arguments to function