In writethrough mode, QEMU sends a separate flush request (i.e. fdatasync)
after each completed write request. This is unnecessary overhead when we
could instead pass a flag on the write request that gives us the desired
FUA semantics.
Unfortunately, this made a problem in the
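To make the contrast concrete, here is a minimal userspace sketch (not QEMU's
code; RWF_DSYNC is used here as one Linux mechanism that provides per-write,
FUA-like durability, and the helper names are made up for illustration):

    #define _GNU_SOURCE
    #include <sys/uio.h>
    #include <unistd.h>

    /* Emulated FUA: every write is followed by a separate flush request. */
    static int write_fua_emulated(int fd, const void *buf, size_t len, off_t off)
    {
        if (pwrite(fd, buf, len, off) < 0) {
            return -1;
        }
        return fdatasync(fd);   /* a second request just to make the write durable */
    }

    /* FUA as a flag on the write itself: one request, same durability. */
    static int write_fua_flag(int fd, const void *buf, size_t len, off_t off)
    {
        struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
        return pwritev2(fd, &iov, 1, off, RWF_DSYNC) < 0 ? -1 : 0;
    }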
On Fri, Feb 07, 2025 at 02:37:24PM +0000, David Woodhouse wrote:
> From: David Woodhouse
>
> Block devices don't work in PV Grub (0.9x) if there is no mode specified. It
> complains: "Error ENOENT when reading the mode"
>
> Signed-off-by: David Woodhouse
Reviewed-by: Anthony PERARD
Thanks,
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/10.0 for any
user-visible changes.
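For context on the "mode" node mentioned above: the PV block frontend reads a
handful of keys from the backend's xenstore directory, and pv-grub fails when
"mode" is missing. With the key present, the backend entry looks roughly like
this (domain IDs, device number, path and surrounding keys are illustrative,
not taken from the patch):

    # xenstore-ls /local/domain/0/backend/qdisk/1/51712
    ...
    params = "/var/lib/images/guest.qcow2"
    mode = "w"
    ...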
In preparation for having multiple adaptive polling states per
AioContext, move the 'ns' field into a separate struct.
Signed-off-by: Kevin Wolf
---
 include/block/aio.h |  6 +-
 util/aio-posix.c    | 31 +++++++++++++++++--------------
 util/async.c        |  3 ++-
 3 files changed, 23 insertions(+), 17 deletions(-)
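A minimal sketch of the shape this refactoring suggests (struct and field
names here are assumptions based on the commit message, not necessarily what
the patch uses):

    #include <stdint.h>

    /* Wrap the adaptive polling duration in its own struct so that later
     * patches can keep one instance per polled event source instead of a
     * single value for the whole AioContext. */
    typedef struct AioPolledEvent {
        int64_t ns;                 /* current busy-polling duration */
    } AioPolledEvent;

    struct AioContext {
        /* ... unrelated fields elided ... */
        AioPolledEvent poll;        /* previously a bare int64_t field */
    };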
Until now, FUA was always emulated with a separate flush after the write
for file-posix. The overhead of processing a second request can reduce
performance significantly for a guest disk that has disabled the write
cache, especially if the host disk is already write through, too, and
the flush isn't actually doing anything.
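As a rough, purely illustrative calculation (the numbers are assumptions, not
measurements): if submitting and completing one request costs on the order of
20us of host-side processing, then a write plus a separate no-op flush costs
roughly 40us per guest write, so folding the flush into a FUA write can cut
the per-request overhead nearly in half for a writethrough guest disk on a
writethrough host disk.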
For block drivers that don't advertise FUA support, we already call
bdrv_co_flush(), which considers BDRV_O_NO_FLUSH. However, drivers that
do support FUA still see the FUA flag with BDRV_O_NO_FLUSH and get the
associated performance penalty that cache.no-flush=on was supposed to
avoid.
Clear the FUA flag for write requests when BDRV_O_NO_FLUSH is set, so that
drivers with native FUA support avoid the penalty as well.
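Concretely, the change being described probably amounts to something like the
following fragment in the block layer's write path (a sketch with assumed
placement; BDRV_O_NO_FLUSH is the open flag discussed above and BDRV_REQ_FUA
is QEMU's existing FUA request flag):

    /* Sketch: before handing the write to a driver that advertises native
     * FUA support, drop the FUA flag if the user disabled flushes, so
     * cache.no-flush=on behaves the same on the emulated and the native
     * FUA path. */
    if (bs->open_flags & BDRV_O_NO_FLUSH) {
        flags &= ~BDRV_REQ_FUA;
    }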
Signed-off-by: Kevin Wolf
---
util/aio-posix.c | 77 ++--
1 file changed, 41 insertions(+), 36 deletions(-)
diff --git a/util/aio-posix.c b/util/aio-posix.c
index 95bddb9e4b..259827c7ad 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -600,6 +60
Adaptive polling has a big problem: it doesn't consider that an event
loop can wait for many different events that may have very different
typical latencies.

For example, think of a guest that tends to send a new I/O request soon
after the previous I/O request completes, but the storage on the host is
slow. A single adaptive polling value per AioContext then has to cover
both the short wait for the next guest request and the much longer wait
for the slow host I/O, and it cannot fit both.
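To make the mismatch concrete, here is a small self-contained toy model of a
single shared adaptive polling value fed by two event sources with very
different latencies (the grow/shrink rule and all numbers are assumptions for
illustration, not QEMU's actual tuning):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t poll_ns = 0;                        /* one shared adaptive value */
        const int64_t max_ns = 32000;               /* cap on busy polling */
        const int64_t latencies[] = { 5000, 2000000 }; /* fast kick vs slow I/O */

        for (int i = 0; i < 20; i++) {
            int64_t block_ns = latencies[i % 2];
            if (block_ns <= poll_ns) {
                /* Polling would have caught the event: keep the value. */
            } else if (block_ns > max_ns) {
                /* Far too slow to poll for: give up polling entirely. */
                poll_ns = 0;
            } else {
                /* Narrow miss: poll longer next time. */
                poll_ns = poll_ns ? poll_ns * 2 : 4000;
                if (poll_ns > max_ns) {
                    poll_ns = max_ns;
                }
            }
            printf("event latency %8" PRId64 " ns -> poll_ns now %" PRId64 "\n",
                   block_ns, poll_ns);
        }
        return 0;
    }

In this toy model the slow source keeps collapsing the shared value back to
zero, so the fast source never gets a polling window long enough to catch it,
which is exactly the situation per-event polling state is meant to fix.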