When a tap device is used as the network driver for a VM, and too many packets are delivered to the guest OS via the tap interface, the guest OS can be blocked on io events for a long time while the tap handler is busy processing packets.
kvm vcpu thread blocked on io lock, call trace:
    __lll_lock_wait
    _L_lock_1004
    __pthread_mutex_lock
    qemu_mutex_lock
    kvm_cpu_exec
    qemu_kvm_cpu_thread_fn
    start_thread

qemu io thread call trace:
    ...
    qemu_net_queue_send
    tap_send
    qemu_iohandler_poll
    main_loop_wait
    main_loop

I think the time the qemu io lock is held should be kept as small as possible, and the io work done in one slice should be limited to a particular ratio or amount of time.

---
Signed-off-by: Wangkai <wangka...@huawei.com>

diff --git a/net/tap.c b/net/tap.c
index a40f7f0..df9a0eb 100644
--- a/net/tap.c
+++ b/net/tap.c
@@ -189,6 +189,7 @@ static void tap_send(void *opaque)
 {
     TAPState *s = opaque;
     int size;
+    int pkt = 0;
 
     while (qemu_can_send_packet(&s->nc)) {
         uint8_t *buf = s->buf;
@@ -210,6 +211,11 @@ static void tap_send(void *opaque)
         } else if (size < 0) {
             break;
         }
+
+        /* limit the io time slice to 50 packets */
+        pkt++;
+        if (pkt >= 50)
+            break;
     }
 }
--
2.0.0
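
For discussion only (not part of the patch above): the commit message also mentions bounding the slice by time rather than by packet count. A minimal standalone sketch of what a time-bounded drain loop could look like is below; the 500 us budget, the drain_fd() helper, and the plain clock_gettime()/read() calls are illustrative assumptions for the sketch, not QEMU internals or APIs.

/* Hypothetical, self-contained illustration of a time-bounded drain loop.
 * A non-blocking fd is drained until it is empty or a fixed time budget is
 * spent, so one handler invocation cannot hold the caller's lock for an
 * unbounded amount of time. */
#include <stdint.h>
#include <time.h>
#include <unistd.h>

#define SLICE_NS (500 * 1000ULL)    /* assumed budget: 500 us per invocation */

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* Drain packets from fd for at most SLICE_NS; return how many were read. */
int drain_fd(int fd, uint8_t *buf, size_t buflen)
{
    uint64_t start = now_ns();
    int pkts = 0;

    for (;;) {
        ssize_t n = read(fd, buf, buflen);
        if (n <= 0) {
            break;              /* empty (EAGAIN), EOF, or error: stop */
        }
        /* ... hand the packet to the consumer here ... */
        pkts++;

        if (now_ns() - start >= SLICE_NS) {
            break;              /* budget exhausted; let the event loop re-poll */
        }
    }
    return pkts;
}

Compared with the fixed 50-packet cap in the patch, a time budget keeps the worst-case lock hold time roughly constant regardless of per-packet cost, at the price of an extra clock read per packet.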