On Mon, May 26, 2014 at 01:48:13PM +0200, Stefan Hajnoczi wrote:
> On Wed, May 14, 2014 at 03:46:48PM +0300, Michael S. Tsirkin wrote:
> > On Wed, May 14, 2014 at 02:30:26PM +0200, Stefan Hajnoczi wrote:
> > > On Thu, May 08, 2014 at 12:51:05PM +0000, Zhanghailiang wrote:
> > > > > If you implement this in the net layer then that problem is
> > > > > easy to resolve, since we can flush all queues when the guest
> > > > > resumes to get packets flowing again.
> > > > >
> > > > Do you mean we should also listen for VM runstate changes in the
> > > > net layer, and when we detect the runstate changing back to
> > > > running, actively flush all queues? Am I misunderstanding?
> > > > Or we could do it *before* QEMU sends packets to the guest again
> > > > (exactly when it checks whether it can send packets); that would
> > > > be simple, but it also needs to know about the runstate change.
> > > > Any idea?
> > >
> > > When the runstate changes back to running, we definitely need to
> > > flush queues to get packets flowing again. I think the simplest way
> > > of doing that is in the net layer, so individual NICs and netdevs
> > > don't have to duplicate this code.
> > >
> > > Stefan
> >
> > That will help with networking but not with other devices.
> > The issue isn't limited to networking at all.
> > How about we stop all I/O threads together with the VM?
> >
> > That would address the issue in a generic way.
>
> I'm not sure that works in all cases, for example iSCSI, where we send
> NOP keepalives.
>
> Stefan
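[Editor's note: a minimal sketch of the net-layer idea quoted above:
one VM change-state handler that flushes every net client's queue when
the guest resumes, so individual NICs and netdevs need no copies of
this logic. It assumes 2014-era QEMU internals, namely the
VMChangeStateHandler signature, qemu_flush_queued_packets(), and the
net layer's private net_clients list (so it would live in net/net.c);
it illustrates the idea, not the patch that was eventually merged.]

/*
 * Sketch: flush queued packets on all net clients when the guest
 * transitions back to the running state.
 */
#include "net/net.h"          /* NetClientState, qemu_flush_queued_packets() */
#include "sysemu/sysemu.h"    /* qemu_add_vm_change_state_handler() */
#include "qemu/queue.h"       /* QTAILQ_FOREACH() */

static void net_vm_change_state_handler(void *opaque, int running,
                                        RunState state)
{
    NetClientState *nc;

    if (!running) {
        return;               /* queues simply stay queued while stopped */
    }

    /* Guest is running again: restart packet flow on every net client. */
    QTAILQ_FOREACH(nc, &net_clients, next) {
        qemu_flush_queued_packets(nc);
    }
}

/* Hypothetical registration point, called once during net layer init. */
static void net_register_change_state_handler(void)
{
    qemu_add_vm_change_state_handler(net_vm_change_state_handler, NULL);
}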
I am guessing that runs off the realtime clock? We definitely want to
keep the realtime clock going while the VM is stopped; that's the
definition.

-- 
MST
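[Editor's note: a sketch of the timer shape MST is referring to. A
keepalive armed on QEMU_CLOCK_REALTIME keeps firing while the guest is
stopped, because that clock keeps advancing by definition; a
QEMU_CLOCK_VIRTUAL timer would pause with the VM. NOP_INTERVAL and the
function names are assumptions for illustration, not quoted from
block/iscsi.c.]

/*
 * Sketch: an iSCSI-style NOP keepalive timer driven by the realtime
 * clock, which does not stop when the VM does.
 */
#include "qemu/timer.h"

#define NOP_INTERVAL 5000     /* ms between keepalives; assumed value */

static QEMUTimer *nop_timer;

static void nop_timed_event(void *opaque)
{
    /* ...send an iSCSI NOP-Out here to keep the session alive... */

    /* Re-arm relative to the realtime clock, not the guest clock. */
    timer_mod(nop_timer,
              qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + NOP_INTERVAL);
}

static void arm_nop_timer(void *opaque)
{
    nop_timer = timer_new_ms(QEMU_CLOCK_REALTIME, nop_timed_event, opaque);
    timer_mod(nop_timer,
              qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + NOP_INTERVAL);
}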