On 02.07.2012 11:42, Peter Crosthwaite wrote:
> On Mon, Jul 2, 2012 at 7:04 PM, Kevin Wolf <kw...@redhat.com> wrote:
>> On 02.07.2012 10:57, Peter Crosthwaite wrote:
>>> No conditional on the qemu_coroutine_create. So it will always create
>>> a new coroutine for its work, which will solve my problem. All I need
>>> to do is pump events once at the end of machine model creation, and my
>>> coroutines will never yield or get queued by block/AIO. Sound like a
>>> solution?
>>
>> If you don't need the read data in your initialisation code,
>
> Definitely not :) Just as long as the read data is there by the time
> the machine goes live. What's the current policy on bdrv_read()ing
> from init functions anyway? Several devices in QEMU have init
> functions that read the entire storage into a buffer (then the guest
> just talks to the buffer rather than the backing store).
Reading from block devices during device initialisation breaks
migration, so I'd like to see it go away wherever possible. Reading in
the whole image file doesn't sound like something for which a good
excuse exists; you can do that just as well during the first access.

> Pflash (pflash_cfi01.c) is the device that is causing me interference
> here, and it works exactly like this. If we make the bdrv_read() AIO,
> though, how do you ensure that it has completed before the guest talks
> to the device? Will this just happen at the end of machine_init
> anyway? Can we put a one-liner in the machine init framework that
> pumps all AIO events, then just mass-convert all these bdrv_reads (in
> init functions) to bdrv_aio_read with a nop completion callback?

The initialisation function of the device can wait at its end for all
AIOs to return. I wouldn't want to encourage more block layer use
during the initialisation phase by supporting it in the infrastructure.

>> then yes,
>> that would work. bdrv_aio_* will always create a new coroutine. I just
>> assumed that you wanted to use the data right away, and then using the
>> AIO functions wouldn't have made much sense.
>
> You'd get a small performance increase, no? Your machine init continues
> on while your I/O happens rather than being synchronous, so there is
> motivation beyond my situation.

Yeah, as long as the next statement isn't
"while (!returned) qemu_aio_wait();", which it is in the common case.

Kevin
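
P.S.: To make the pattern concrete, here is a rough sketch of such a
conversion as I would imagine it, written against the block layer
signatures as I remember them (bdrv_aio_readv, qemu_aio_wait,
qemu_iovec_init_external). The PFlashInitReq struct and the
pflash_load_contents()/pflash_init_read_cb() names are made up for
illustration and are not taken from pflash_cfi01.c:

#include "qemu-common.h"
#include "block.h"   /* bdrv_aio_readv(), BDRV_SECTOR_SIZE (assumed layout) */

/* Tracks completion of the single init-time read. */
typedef struct PFlashInitReq {
    bool done;
    int ret;
} PFlashInitReq;

/* Near-nop completion callback: just record the result. */
static void pflash_init_read_cb(void *opaque, int ret)
{
    PFlashInitReq *req = opaque;

    req->ret = ret;
    req->done = true;
}

/* Read nb_sectors from the start of bs into buf, then wait at the end
 * for the AIO to return, so the buffer is filled before the machine
 * goes live. */
static int pflash_load_contents(BlockDriverState *bs, void *buf,
                                int nb_sectors)
{
    PFlashInitReq req = { .done = false };
    struct iovec iov = {
        .iov_base = buf,
        .iov_len  = nb_sectors * BDRV_SECTOR_SIZE,
    };
    QEMUIOVector qiov;

    qemu_iovec_init_external(&qiov, &iov, 1);
    bdrv_aio_readv(bs, 0, &qiov, nb_sectors, pflash_init_read_cb, &req);

    /* The "while (!returned) qemu_aio_wait();" from above. */
    while (!req.done) {
        qemu_aio_wait();
    }
    return req.ret;
}

Which also shows why the AIO conversion alone buys little for this kind
of device: as long as the init function has to drain right away, it is
synchronous in all but name.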