On Fri, Feb 15, 2013 at 11:24:29AM +0100, Stefan Hajnoczi wrote:
> On Thu, Feb 14, 2013 at 07:21:57PM +0100, Luigi Rizzo wrote:
> > CCed Michael Tsirkin
> >
> > virtio-style network devices (where the producer and consumer chase
> > each other through a shared memory block) can enter a bad operating
> > regime when the consumer is too fast.
> >
> > I am hitting this case right now when virtio is used on top of the
> > netmap/VALE backend that I posted a few weeks ago: the backend is so
> > fast that the io thread keeps re-enabling notifications every few
> > packets. This results, on my test machine, in a throughput of
> > 250-300 Kpps (and extremely unstable, oscillating between 200 and
> > 600 Kpps).
> >
> > I'd like to get some feedback on the following trivial patch, which
> > has the thread keep spinning for a bounded amount of time when the
> > producer is slower than the consumer. This gives a relatively stable
> > throughput between 700 and 800 Kpps (we have something similar in our
> > paravirtualized e1000 driver, which is slightly faster at
> > 900-1100 Kpps).
>
> Did you experiment with the tx timer instead of the bh? It seems that
> hw/virtio-net.c has two tx mitigation strategies: the bh approach that
> you've tweaked and a true timer.
>
> It seems you don't really want tx batching, but you do want to avoid
> guest->host notifications?
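To make the suggestion concrete, here is a toy, self-contained sketch of
the bounded-spinning idea described above. This is not the actual patch;
all names (ring_pop, notify_enable, POLL_BUDGET) are hypothetical
stand-ins for the corresponding virtqueue operations in hw/virtio-net.c:

/* Toy model: the consumer keeps polling for a bounded number of empty
 * iterations before re-enabling guest->host notifications, so a fast
 * consumer does not bounce notifications every few packets. */
#include <stdio.h>
#include <stdbool.h>

#define RING_SIZE   256
#define POLL_BUDGET 128            /* empty polls tolerated before sleeping */

static int ring[RING_SIZE];
static unsigned head, tail;        /* producer advances head, consumer tail */

static bool ring_pop(int *pkt)
{
    if (tail == head)
        return false;              /* ring is empty */
    *pkt = ring[tail++ % RING_SIZE];
    return true;
}

static void notify_enable(void)
{
    puts("notifications re-enabled, consumer going idle");
}

static void consumer_poll(void)
{
    int idle = 0, pkt, processed = 0;

    while (idle < POLL_BUDGET) {
        if (ring_pop(&pkt)) {
            processed++;
            idle = 0;              /* producer keeps up: keep spinning */
        } else {
            idle++;                /* bounded spin on an empty ring */
        }
    }
    printf("processed %d packets, ring drained\n", processed);
    /* Producer is slower than we are: re-arm notifications.  A real
     * implementation would re-check the ring here to close the race
     * with a producer that filled it just before we re-armed. */
    notify_enable();
}

int main(void)
{
    for (head = 0; head < 32; head++)   /* fake a burst of 32 packets */
        ring[head % RING_SIZE] = (int)head;
    consumer_poll();
    return 0;
}

The point of the budget is that a consumer that keeps finding work never
re-arms notifications at all, while a consumer that drains the ring gives
up only after a bounded spin, so a slower producer gets a window to catch
up without paying one notification per handful of packets.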
One more thing I forgot: virtio-net does not use ioeventfd by default. ioeventfd changes the cost of a guest->host notification: the notification becomes an eventfd signal inside the kernel, and kvm.ko then re-enters the guest directly. In other words, a guest->host notification becomes a lightweight exit and we never return from ioctl(KVM_RUN) to userspace. Perhaps -device virtio-net-pci,ioeventfd=on will give results similar to your patch?

Stefan
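A concrete illustration of why that is cheaper: the eventfd mechanism that
ioeventfd builds on is just an 8-byte counter in the kernel. The sketch
below is plain userspace eventfd(2), not the KVM in-kernel path itself,
but it shows the shape of the notification: the notifying ("guest") side
does a single write, and the waiting ("host") side wakes up with the
accumulated count. With ioeventfd, kvm.ko turns the guest's virtqueue
kick into exactly this kind of signal without leaving the kernel.

/* Plain userspace demonstration of eventfd signalling (Linux only).
 * ioeventfd routes a guest MMIO/PIO write to an eventfd like this one
 * from inside kvm.ko, so no heavyweight exit to userspace is needed. */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
    int efd = eventfd(0, 0);
    if (efd < 0) {
        perror("eventfd");
        return 1;
    }

    uint64_t one = 1, count = 0;

    /* notifier side: each notification is a single 8-byte write
     * that adds to the kernel-side counter */
    if (write(efd, &one, sizeof(one)) != sizeof(one))
        perror("write");
    if (write(efd, &one, sizeof(one)) != sizeof(one))
        perror("write");

    /* waiter side: read returns the count accumulated since the
     * last read and resets the counter to zero */
    if (read(efd, &count, sizeof(count)) != sizeof(count))
        perror("read");
    printf("woke up after %llu notifications\n",
           (unsigned long long)count);

    close(efd);
    return 0;
}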