On Fri, Jul 11, 2014 at 01:05:30AM +0000, Wangkai (Kevin,C) wrote:
> When a tap device is used as the net driver for a VM and too many packets
> are delivered to the guest OS via the tap interface, the guest OS is blocked
> on I/O events for a long time while the tap driver is busy processing
> packets.
> 
> KVM vcpu thread blocked on the I/O lock; call trace:
>   __lll_lock_wait
>   _L_lock_1004
>   __pthread_mutex_lock
>   qemu_mutex_lock
>   kvm_cpu_exec
>   qemu_kvm_cpu_thread_fn
>   start_thread
> 
> QEMU I/O thread call trace:
>   ...
>   qemu_net_queue_send
>   tap_send
>   qemu_iohandler_poll
>   main_loop_wait
>   main_loop
>   
> 
> I think the time the QEMU I/O lock is held should be as small as possible,
> and the I/O work slice should be limited to a particular ratio or amount of
> time.
> 
> ---
> Signed-off-by: Wangkai <wangka...@huawei.com>
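
A minimal sketch of what the "I/O work slice" idea above might look like in a
tap read handler: bound the number of packets processed per handler
invocation so one busy fd cannot hold the event loop (and the global mutex)
indefinitely.  This is illustration only, not QEMU's actual net/tap.c code;
TAP_SEND_BUDGET and handle_packet() are hypothetical names.

/* Sketch: process at most TAP_SEND_BUDGET packets per call, then return
 * to the event loop so other work (e.g. a vcpu thread waiting for the
 * mutex) can run. */
#include <errno.h>
#include <stdint.h>
#include <unistd.h>

#define TAP_SEND_BUDGET 64      /* max packets handled per handler call  */
#define PKT_BUF_SIZE    69120   /* large enough for an offloaded frame   */

extern void handle_packet(const uint8_t *buf, size_t len); /* hypothetical */

void tap_read_handler(int tap_fd)
{
    uint8_t buf[PKT_BUF_SIZE];
    int budget = TAP_SEND_BUDGET;

    while (budget-- > 0) {
        ssize_t len = read(tap_fd, buf, sizeof(buf));
        if (len < 0) {
            /* EAGAIN/EWOULDBLOCK: nothing more queued right now. */
            break;
        }
        handle_packet(buf, (size_t)len);
    }
    /* If the budget was exhausted the fd is still readable, so a
     * level-triggered event loop will call this handler again on its
     * next iteration, after other threads have had a chance to run. */
}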

How many packets are you seeing in a single tap_send() call?

Have you profiled the tap_send() code path?  Maybe it is performing some
operation that is very slow.

By the way, if you want good performance you should use vhost_net
instead of userspace virtio-net.  Userspace virtio-net is not very
optimized.
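
For example (assuming a tap netdev named net0), vhost can be enabled on
the QEMU command line with something like:

  -netdev tap,id=net0,vhost=on -device virtio-net-pci,netdev=net0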

Stefan
