We found an issue where a host MCE triggers an openvswitch (DPDK) restart on the source host during guest migration: the frontend in the VM is still link down after migration, so the network in the VM never comes up again.
In virtio_net_load_device():

    /* nc.link_down can't be migrated, so infer link_down according
     * to link status bit in n->status */
    link_down = (n->status & VIRTIO_NET_S_LINK_UP) == 0;
    for (i = 0; i < n->max_queues; i++) {
        qemu_get_subqueue(n->nic, i)->link_down = link_down;
    }

guest:                          migrate begin --> vCPU pause --> vmstate load --> migrate finish

openvswitch in source host:     begin to restart    restarting      started
nc in frontend in source:       link down           link down       link down
nc in frontend in destination:  link up             link up         link down
guest network:                  broken              broken          broken
nc in backend in source:        link down           link down       link up
nc in backend in destination:   link up             link up         link up

The frontend's link_down is loaded from n->status. n->status indicated link down on the source, so the frontend's link_down ends up true. The backend on the destination host is link up, but the frontend on the destination host is link down, so the network in the guest never comes up again until the guest is cold rebooted.

Is there a way to automatically fix the link status, or should we just abort the migration in the virtio-net device load?
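For illustration, here is a rough, untested sketch of what such an auto-resync could look like, called from virtio_net_load_device() after the link_down loop above. It assumes the backend peer's link_down on the destination reflects the real backend state at load time, and virtio_net_resync_link_status() is a hypothetical helper name, not an existing QEMU function:

#include "qemu/osdep.h"
#include "net/net.h"
#include "hw/virtio/virtio.h"
#include "hw/virtio/virtio-net.h"

/* Hypothetical helper, not existing QEMU code: if the migrated state
 * says link down but the backend on the destination is actually up,
 * flip the frontend back to link up and tell the guest. */
static void virtio_net_resync_link_status(VirtIONet *n)
{
    NetClientState *nc = qemu_get_queue(n->nic);
    int i;

    if (nc->peer && !nc->peer->link_down &&
        !(n->status & VIRTIO_NET_S_LINK_UP)) {
        n->status |= VIRTIO_NET_S_LINK_UP;
        for (i = 0; i < n->max_queues; i++) {
            qemu_get_subqueue(n->nic, i)->link_down = false;
        }
        /* Config interrupt so the guest re-reads the status field,
         * the same notification qemu_set_link() would end up
         * triggering via the link_status_changed callback. */
        virtio_notify_config(VIRTIO_DEVICE(n));
    }
}

The other option from the question, failing the load (returning a negative error from virtio_net_load_device()), would abort the whole migration for what is only a transient backend event, so resyncing from the destination backend seems gentler. But we are not sure the peer's link state is always trustworthy at this point for every backend type, which is why we are asking.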