No, the vhost_net module is not loaded.
Loading it does not help.
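For reference, this is roughly how I check and load it on the host (a sketch; reading /proc/modules instead of lsmod, modprobe needs root):

```shell
# Check /proc/modules for vhost_net (Linux host assumed)
if grep -q '^vhost_net ' /proc/modules 2>/dev/null; then
    vhost_status="loaded"
else
    vhost_status="not loaded"   # then try: sudo modprobe vhost_net
fi
echo "vhost_net is $vhost_status"
```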

No, 'ifdown eth0; ifup eth0' in the guest does not reliably bring
networking back up. Sometimes it works, sometimes I have to reboot the
VM; the last few times it did not work at all.

My problem might be the same as bug 997978, but I only see it in
conjunction with bridged bonding, and so far only after heavy load,
not simply after time. Maybe it could also occur on my system after a
longer period.

qemu-kvm in ppa:ubuntu-virt/backports and
ppa:ubuntu-virt/kvm-network-hang both work well, but I have not done
any long-term testing; the hang could still occur after some time.

Now I am running my iperf test with the kvm-network-hang version for
several hours, but I cannot test indefinitely.
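For reference, the load test looks roughly like this; the host name and flags below are placeholders for illustration, not my literal invocation:

```shell
# Placeholder sketch of the iperf load test
server_cmd="iperf -s"                        # run inside the guest VM
client_cmd="iperf -c guest-vm -t 3600 -P 4"  # from another host: 1 h, 4 parallel streams
echo "$server_cmd"
echo "$client_cmd"
```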

What are the differences between the official version of qemu-kvm and
the one in kvm-network-hang?

I really need to know whether it is reliable in order to decide
whether or not to use it in production systems!

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1050934

Title:
  VM stops receiving packets on heavy load from virtio network interface
  bridged to a bonded interface on kvm hypervisor

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1050934/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
