[PATCH v2] migration: Ensure vmstate_save() sets errp

2024-10-15 Thread Hanna Czenczek
…ng the state from a virtio-fs back-end (virtiofsd) fails. Signed-off-by: Hanna Czenczek --- v2: As suggested by Peter, after vmsd->post_save(), change the condition from `if (!ret)` to `if (!ret && ps_ret)` so we will not create an error object in case of success (that would the…
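The condition change described above follows a general errp convention: a function should create an Error object exactly when it reports failure. A minimal self-contained sketch of that pattern — the `Error` type, `error_setg()`, and `vmstate_save_sketch()` are simplified stand-ins, not QEMU's real definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for QEMU's Error object; names are illustrative only. */
typedef struct Error { char msg[64]; } Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp && !*errp) {
        *errp = malloc(sizeof(Error));
        snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", msg);
    }
}

/* The v2 pattern: after post_save, only create an error object when an
 * error actually occurred, so callers may rely on *errp being set if
 * and only if the return value indicates failure. */
static int vmstate_save_sketch(int ret, int ps_ret, Error **errp)
{
    if (!ret && ps_ret) {        /* v2 condition: never fires on success */
        ret = ps_ret;
        error_setg(errp, "post_save failed");
    }
    return ret;
}
```

With the v1-style `if (!ret)` condition, the success path (`ps_ret == 0`) would also have run the error-creation branch; `if (!ret && ps_ret)` restricts it to actual post_save failures.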

Re: [PATCH] migration: Ensure vmstate_save() sets errp

2024-10-15 Thread Hanna Czenczek
On 15.10.24 18:06, Peter Xu wrote: On Tue, Oct 15, 2024 at 04:15:15PM +0200, Hanna Czenczek wrote: migration/savevm.c contains some calls to vmstate_save() that are followed by migrate_set_error() if the integer return value indicates an error. migrate_set_error() requires that the `Error…

[PATCH] migration: Ensure vmstate_save() sets errp

2024-10-15 Thread Hanna Czenczek
…ng the state from a virtio-fs back-end (virtiofsd) fails. Signed-off-by: Hanna Czenczek --- migration/vmstate.c | 11 +++ 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/migration/vmstate.c b/migration/vmstate.c index ff5d589a6d..13532f2807 100644 --- a/migration/vmstate.c…

Re: [PATCH] raw-format: Fix error message for invalid offset/size

2024-08-30 Thread Hanna Czenczek
…rmat: Split raw_read_options()') Signed-off-by: Kevin Wolf --- block/raw-format.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) Reviewed-by: Hanna Czenczek

[PATCH v2 0/2] virtio: Always reset vhost devices

2024-07-23 Thread Hanna Czenczek
…https://gitlab.com/qemu-project/qemu/-/pipelines, I think that’s expected. v2: Added patch 1, left patch 2 unchanged. Hanna Czenczek (2): virtio: Allow .get_vhost() without vhost_started virtio: Always reset vhost devices include/hw/virtio/virtio.h | 1 + hw/display/vhost-user-gpu.c | 2…

[PATCH v2 2/2] virtio: Always reset vhost devices

2024-07-23 Thread Hanna Czenczek
…ed-by: Michael S. Tsirkin Signed-off-by: Hanna Czenczek --- hw/virtio/virtio.c | 8 ++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c index 583a224163..35dfc01074 100644 --- a/hw/virtio/virtio.c +++ b/hw/virtio/virtio.c @@ -2150,8 +215…

[PATCH v2 1/2] virtio: Allow .get_vhost() without vhost_started

2024-07-23 Thread Hanna Czenczek
…ons dereference some pointers (or return offsets from them) that are probably guaranteed to be non-NULL when vhost_started is true, but not necessarily otherwise. This patch makes all such implementations check all such pointers, returning NULL if any is NULL. Signed-off-by: Hanna Czenczek --- include…

Re: [PULL 00/63] virtio,pci,pc: features,fixes

2024-07-23 Thread Hanna Czenczek
On 23.07.24 12:45, Michael S. Tsirkin wrote: On Tue, Jul 23, 2024 at 12:18:48PM +0200, Hanna Czenczek wrote: On 22.07.24 23:32, Richard Henderson wrote: On 7/22/24 10:16, Michael S. Tsirkin wrote: A couple of fixes are outstanding, will merge later. The following changes since commit

Re: [PULL 00/63] virtio,pci,pc: features,fixes

2024-07-23 Thread Hanna Czenczek
On 22.07.24 23:32, Richard Henderson wrote: On 7/22/24 10:16, Michael S. Tsirkin wrote: A couple of fixes are outstanding, will merge later. The following changes since commit a87a7c449e532130d4fa8faa391ff7e1f04ed660: Merge tag 'pull-loongarch-20240719' of https://gitlab.com/gaosong/qemu…

Re: [PATCH] virtio: Always reset vhost devices

2024-07-11 Thread Hanna Czenczek
On 10.07.24 18:28, Stefan Hajnoczi wrote: On Wed, 10 Jul 2024 at 13:25, Hanna Czenczek wrote: Requiring `vhost_started` to be true for resetting vhost devices in `virtio_reset()` seems like the wrong condition: Most importantly, the preceding `virtio_set_status(vdev, 0)` call will (for vhost

Re: [PATCH] virtio: Always reset vhost devices

2024-07-11 Thread Hanna Czenczek
On 10.07.24 15:39, Matias Ezequiel Vara Larsen wrote: Hello Hanna, On Wed, Jul 10, 2024 at 01:23:10PM +0200, Hanna Czenczek wrote: Requiring `vhost_started` to be true for resetting vhost devices in `virtio_reset()` seems like the wrong condition: Most importantly, the preceding

Re: [PATCH v2 2/2] qcow2: don't allow discard-no-unref when discard is not enabled

2024-07-10 Thread Hanna Czenczek
On 05.06.24 15:25, Jean-Louis Dupond wrote: When discard is not set to unmap/on, we should not allow setting discard-no-unref. Is this important? Technically, it’s an incompatible change, and would require a deprecation warning first. (I can imagine people setting this option indiscriminately…

Re: [PATCH v2 1/2] qcow2: handle discard-no-unref in measure

2024-07-10 Thread Hanna Czenczek
On 05.06.24 15:25, Jean-Louis Dupond wrote: When doing a measure on an image with a backing file and discard-no-unref is enabled, the code should take this into account. That doesn’t make sense to me.  As far as I understand, 'measure' is supposed to report how much space you need for a given

[PATCH] virtio: Always reset vhost devices

2024-07-10 Thread Hanna Czenczek
…that we can indeed send a reset to this vhost device, by not just checking `k->get_vhost != NULL` (introduced by commit 95e1019a4a9), but also that the vhost back-end is connected (`hdev = k->get_vhost(); hdev != NULL && hdev->vhost_ops != NULL`). Signed-off-by: Hanna Czenczek --…
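The guard described in this entry can be sketched as follows; the type and function names are illustrative stand-ins (QEMU's real `.get_vhost()` takes a `VirtIODevice *` argument), not the actual patch:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the QEMU types involved; these are
 * assumptions, not the real definitions. */
typedef struct VhostOps { int unused; } VhostOps;
struct vhost_dev { const VhostOps *vhost_ops; };
typedef struct VirtioDeviceClass {
    struct vhost_dev *(*get_vhost)(void);
} VirtioDeviceClass;

/* A reset may only be sent when a .get_vhost() implementation exists
 * AND the back-end it returns is actually connected. */
static int can_reset_vhost(const VirtioDeviceClass *k)
{
    struct vhost_dev *hdev;

    if (!k->get_vhost) {
        return 0;
    }
    hdev = k->get_vhost();
    return hdev != NULL && hdev->vhost_ops != NULL;
}

/* Sample back-ends covering the three cases. */
static const VhostOps connected_ops = { 0 };
static struct vhost_dev disconnected_dev = { .vhost_ops = NULL };
static struct vhost_dev connected_dev = { .vhost_ops = &connected_ops };
static struct vhost_dev *get_disconnected(void) { return &disconnected_dev; }
static struct vhost_dev *get_connected(void) { return &connected_dev; }
```

Checking only `k->get_vhost != NULL` would wrongly attempt a reset on the disconnected case.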

Re: [PATCH for-9.0?] usb-storage: Fix BlockConf defaults

2024-04-16 Thread Hanna Czenczek
…breaks installing Windows from USB hw/usb/dev-storage-classic.c | 9 - 1 file changed, 9 deletions(-) Reviewed-by: Hanna Czenczek

Re: [PATCH 0/2] block: Allow concurrent BB context changes

2024-02-12 Thread Hanna Czenczek
On 10.02.24 09:46, Michael Tokarev wrote: 09.02.2024 19:51, Hanna Czenczek : On 09.02.24 15:08, Michael Tokarev wrote: 02.02.2024 17:47, Hanna Czenczek : Hi, Without the AioContext lock, a BB's context may kind of change at any time (unless it has a root node, and I/O requests are pending)…

Re: [PATCH v2 3/3] virtio-blk: Use ioeventfd_attach in start_ioeventfd

2024-02-09 Thread Hanna Czenczek
On 09.02.24 15:38, Michael Tokarev wrote: 02.02.2024 18:31, Hanna Czenczek : Commit d3f6f294aeadd5f88caf0155e4360808c95b3146 ("virtio-blk: always set ioeventfd during startup") has made virtio_blk_start_ioeventfd() always kick the virtqueue (set the ioeventfd), regardless of whether

Re: [PATCH 0/2] block: Allow concurrent BB context changes

2024-02-09 Thread Hanna Czenczek
On 09.02.24 15:08, Michael Tokarev wrote: 02.02.2024 17:47, Hanna Czenczek : Hi, Without the AioContext lock, a BB's context may kind of change at any time (unless it has a root node, and I/O requests are pending). That also means that its own context (BlockBackend.ctx) and that of its

Re: [PATCH 0/2] block: Allow concurrent BB context changes

2024-02-07 Thread Hanna Czenczek
On 06.02.24 17:53, Stefan Hajnoczi wrote: On Fri, Feb 02, 2024 at 03:47:53PM +0100, Hanna Czenczek wrote: Hi, Without the AioContext lock, a BB's context may kind of change at any time (unless it has a root node, and I/O requests are pending). That also means that its own context (BlockBackend.ctx) and that of its…

Re: [PATCH] virtio-blk: do not use C99 mixed declarations

2024-02-06 Thread Hanna Czenczek
On 06.02.24 15:04, Stefan Hajnoczi wrote: QEMU's coding style generally forbids C99 mixed declarations. Signed-off-by: Stefan Hajnoczi --- hw/block/virtio-blk.c | 25 ++--- 1 file changed, 14 insertions(+), 11 deletions(-) Reviewed-by: Hanna Czenczek

Re: [PATCH 5/5] monitor: use aio_co_reschedule_self()

2024-02-06 Thread Hanna Czenczek
…there is no race. Suggested-by: Hanna Reitz Signed-off-by: Stefan Hajnoczi --- qapi/qmp-dispatch.c | 7 ++- 1 file changed, 2 insertions(+), 5 deletions(-) Reviewed-by: Hanna Czenczek

Re: [PATCH 4/5] virtio-blk: declare VirtIOBlock::rq with a type

2024-02-06 Thread Hanna Czenczek
On 05.02.24 18:26, Stefan Hajnoczi wrote: The VirtIOBlock::rq field has had the type void * since its introduction in commit 869a5c6df19a ("Stop VM on error in virtio-blk. (Gleb Natapov)"). Perhaps this was done to avoid the forward declaration of VirtIOBlockReq. Hanna Czenczek p

Re: [PATCH 3/5] virtio-blk: add vq_rq[] bounds check in virtio_blk_dma_restart_cb()

2024-02-06 Thread Hanna Czenczek
On 05.02.24 18:26, Stefan Hajnoczi wrote: Hanna Czenczek noted that the array index in virtio_blk_dma_restart_cb() is not bounds-checked: g_autofree VirtIOBlockReq **vq_rq = g_new0(VirtIOBlockReq *, num_queues); ... while (rq) { VirtIOBlockReq *next = rq->next; uint1…
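The missing bounds check discussed here can be sketched in a self-contained form; the types and names below are simplified stand-ins for `VirtIOBlockReq` and the restart loop, not QEMU code. The point is that a per-request queue index restored from outside must be validated before it is used to index the `vq_rq[]` array:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical request carrying an untrusted queue index, as in the
 * dma-restart path described above. */
struct req {
    uint16_t vq_idx;
    struct req *next;
};

/* Distribute a linked list of requests into per-queue lists,
 * rejecting any request whose index would overflow vq_rq[]. */
static int place_requests(struct req *rq, struct req **vq_rq,
                          unsigned num_queues)
{
    while (rq) {
        struct req *next = rq->next;
        if (rq->vq_idx >= num_queues) {
            return -1;          /* reject instead of writing out of bounds */
        }
        rq->next = vq_rq[rq->vq_idx];
        vq_rq[rq->vq_idx] = rq;
        rq = next;
    }
    return 0;
}
```

Without the `>= num_queues` check, a corrupt or hostile index writes past the end of the heap allocation.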

Re: [PATCH 2/5] virtio-blk: clarify that there is at least 1 virtqueue

2024-02-06 Thread Hanna Czenczek
…} Later on we access s->vq_aio_context[0] under the assumption that there is at least one virtqueue. Hanna Czenczek noted that it would help to show that the array index is already valid. Add an assertion to document that s->vq_aio_context[0] is always safe...and catch future code c…

Re: [PATCH 1/5] virtio-blk: enforce iothread-vq-mapping validation

2024-02-06 Thread Hanna Czenczek
On 05.02.24 18:26, Stefan Hajnoczi wrote: Hanna Czenczek noticed that the safety of `vq_aio_context[vq->value] = ctx;` with user-defined vq->value inputs is not obvious. The code is structured in validate() + apply() steps so input validation is there, but it happens way earlier and th…

[PATCH v2 2/3] virtio: Re-enable notifications after drain

2024-02-02 Thread Hanna Czenczek
…the notifiers. Buglink: https://issues.redhat.com/browse/RHEL-3934 Signed-off-by: Hanna Czenczek --- include/block/aio.h | 7 ++- hw/virtio/virtio.c | 42 ++ 2 files changed, 48 insertions(+), 1 deletion(-) diff --git a/include/block/aio.h b/include…

[PATCH v2 0/3] virtio: Re-enable notifications after drain

2024-02-02 Thread Hanna Czenczek
This version (v1 too) just ensures the notifier is enabled after the drain, regardless of its state before. - Use event_notifier_set() instead of virtio_queue_notify() in patch 2 - Added patch 3 Hanna Czenczek (3): virtio-scsi: Attach event vq notifier with no_poll virtio: Re-enable not…

[PATCH v2 1/3] virtio-scsi: Attach event vq notifier with no_poll

2024-02-02 Thread Hanna Czenczek
…d771c36fd126 ("virtio-scsi: implement BlockDevOps->drained_begin()") Reviewed-by: Stefan Hajnoczi Tested-by: Fiona Ebner Reviewed-by: Fiona Ebner Signed-off-by: Hanna Czenczek --- hw/scsi/virtio-scsi.c | 7 ++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git…

[PATCH v2 3/3] virtio-blk: Use ioeventfd_attach in start_ioeventfd

2024-02-02 Thread Hanna Czenczek
…reuse that function. Signed-off-by: Hanna Czenczek --- hw/block/virtio-blk.c | 21 ++--- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index 227d83569f..22b8eef69b 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/vi…

[PATCH 1/2] block-backend: Allow concurrent context changes

2024-02-02 Thread Hanna Czenczek
…to that effect. In addition, because the context can be set and queried from different threads concurrently, it has to be accessed with atomic operations. Buglink: https://issues.redhat.com/browse/RHEL-19381 Suggested-by: Kevin Wolf Signed-off-by: Hanna Czenczek --- block/block…
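The "accessed with atomic operations" pattern from this entry can be sketched with C11 atomics. QEMU itself uses its own qatomic helpers and the real field is `BlockBackend.ctx`; the types and function names here are stand-ins for illustration only:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Stand-in for AioContext. */
typedef struct Ctx { int id; } Ctx;

/* Stand-in for BlockBackend: the context pointer may be set and read
 * from different threads concurrently, so it is declared atomic. */
struct backend {
    _Atomic(Ctx *) ctx;
};

static void backend_set_context(struct backend *bb, Ctx *ctx)
{
    atomic_store(&bb->ctx, ctx);    /* release of the new context pointer */
}

static Ctx *backend_get_context(struct backend *bb)
{
    return atomic_load(&bb->ctx);   /* safe to call from any thread */
}
```

A plain (non-atomic) pointer here would be a data race under the C memory model as soon as one thread changes the context while another queries it.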

[PATCH 2/2] scsi: Await request purging

2024-02-02 Thread Hanna Czenczek
…ne through bdrv_try_change_aio_context(), which creates a drained section. With this patch, we keep the BB in-flight counter elevated throughout, so we know the BB's context cannot change. Signed-off-by: Hanna Czenczek --- hw/scsi/scsi-bus.c | 30 +- 1 file changed…

[PATCH 0/2] block: Allow concurrent BB context changes

2024-02-02 Thread Hanna Czenczek
The fact that this prevents the BB AioContext from changing while the BH is scheduled/running then is just a nice side effect. Hanna Czenczek (2): block-backend: Allow concurrent context changes scsi: Await request purging block/block-backend.c | 22 +++--- hw/scsi/scsi-bu…

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-02-02 Thread Hanna Czenczek
On 01.02.24 16:25, Hanna Czenczek wrote: On 01.02.24 15:28, Stefan Hajnoczi wrote: [...] Did you find a scenario where the virtio-scsi AioContext is different from the scsi-hd BB's Aiocontext? Technically, that’s the reason for this thread, specifically that virtio_scsi_hotu

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-02-01 Thread Hanna Czenczek
On 01.02.24 16:25, Hanna Czenczek wrote: [...] It just seems simpler to me to not rely on the BB's context at all. Hm, I now see the problem is that the processing (and scheduling) is largely done in generic SCSI code, which doesn’t have access to virtio-scsi’s context, only to that o

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-02-01 Thread Hanna Czenczek
On 01.02.24 15:28, Stefan Hajnoczi wrote: On Thu, Feb 01, 2024 at 03:10:12PM +0100, Hanna Czenczek wrote: On 31.01.24 21:35, Stefan Hajnoczi wrote: On Fri, Jan 26, 2024 at 04:24:49PM +0100, Hanna Czenczek wrote: On 26.01.24 14:18, Kevin Wolf wrote: Am 25.01.2024 um 18:32 hat Hanna Czenczek

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-02-01 Thread Hanna Czenczek
On 31.01.24 21:35, Stefan Hajnoczi wrote: On Fri, Jan 26, 2024 at 04:24:49PM +0100, Hanna Czenczek wrote: On 26.01.24 14:18, Kevin Wolf wrote: Am 25.01.2024 um 18:32 hat Hanna Czenczek geschrieben: On 23.01.24 18:10, Kevin Wolf wrote: Am 23.01.2024 um 17:40 hat Hanna Czenczek geschrieben

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-02-01 Thread Hanna Czenczek
On 01.02.24 11:21, Kevin Wolf wrote: Am 01.02.2024 um 10:43 hat Hanna Czenczek geschrieben: On 31.01.24 11:17, Kevin Wolf wrote: Am 29.01.2024 um 17:30 hat Hanna Czenczek geschrieben: I don’t like using drain as a form of lock specifically against AioContext changes, but maybe Stefan is right

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-02-01 Thread Hanna Czenczek
On 31.01.24 11:17, Kevin Wolf wrote: Am 29.01.2024 um 17:30 hat Hanna Czenczek geschrieben: I don’t like using drain as a form of lock specifically against AioContext changes, but maybe Stefan is right, and we should use it in this specific case to get just the single problem fixed.  (Though

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-01-29 Thread Hanna Czenczek
On 23.01.24 18:10, Kevin Wolf wrote: Am 23.01.2024 um 17:40 hat Hanna Czenczek geschrieben: On 21.12.23 22:23, Kevin Wolf wrote: From: Stefan Hajnoczi Stop depending on the AioContext lock and instead access SCSIDevice->requests from only one thread at a time: - When the VM is running o

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-01-26 Thread Hanna Czenczek
On 26.01.24 14:18, Kevin Wolf wrote: Am 25.01.2024 um 18:32 hat Hanna Czenczek geschrieben: On 23.01.24 18:10, Kevin Wolf wrote: Am 23.01.2024 um 17:40 hat Hanna Czenczek geschrieben: On 21.12.23 22:23, Kevin Wolf wrote: From: Stefan Hajnoczi Stop depending on the AioContext lock and

Re: [PATCH 2/2] virtio: Keep notifications disabled during drain

2024-01-25 Thread Hanna Czenczek
On 25.01.24 19:18, Hanna Czenczek wrote: On 25.01.24 19:03, Stefan Hajnoczi wrote: On Wed, Jan 24, 2024 at 06:38:30PM +0100, Hanna Czenczek wrote: [...] @@ -3563,6 +3574,13 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)   aio_set_event_notifier_poll(ctx

Re: [PATCH 2/2] virtio: Keep notifications disabled during drain

2024-01-25 Thread Hanna Czenczek
On 25.01.24 19:03, Stefan Hajnoczi wrote: On Wed, Jan 24, 2024 at 06:38:30PM +0100, Hanna Czenczek wrote: During drain, we do not care about virtqueue notifications, which is why we remove the handlers on it. When removing those handlers, whether vq notifications are enabled or not depends on

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-01-25 Thread Hanna Czenczek
On 23.01.24 18:10, Kevin Wolf wrote: Am 23.01.2024 um 17:40 hat Hanna Czenczek geschrieben: On 21.12.23 22:23, Kevin Wolf wrote: From: Stefan Hajnoczi Stop depending on the AioContext lock and instead access SCSIDevice->requests from only one thread at a time: - When the VM is running o

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-01-25 Thread Hanna Czenczek
On 24.01.24 22:53, Stefan Hajnoczi wrote: On Wed, Jan 24, 2024 at 01:12:47PM +0100, Hanna Czenczek wrote: On 23.01.24 18:10, Kevin Wolf wrote: Am 23.01.2024 um 17:40 hat Hanna Czenczek geschrieben: On 21.12.23 22:23, Kevin Wolf wrote: From: Stefan Hajnoczi Stop depending on the AioContext

[PATCH 2/2] virtio: Keep notifications disabled during drain

2024-01-24 Thread Hanna Czenczek
…the notifiers. Buglink: https://issues.redhat.com/browse/RHEL-3934 Signed-off-by: Hanna Czenczek --- include/block/aio.h | 7 ++- hw/virtio/virtio.c | 42 ++ 2 files changed, 48 insertions(+), 1 deletion(-) diff --git a/include/block/aio.h b/include…

[PATCH 1/2] virtio-scsi: Attach event vq notifier with no_poll

2024-01-24 Thread Hanna Czenczek
…d771c36fd126 ("virtio-scsi: implement BlockDevOps->drained_begin()") Signed-off-by: Hanna Czenczek --- hw/scsi/virtio-scsi.c | 7 ++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c index 690aceec45..9f02ceea09 10…

[PATCH 0/2] virtio: Keep notifications disabled during drain

2024-01-24 Thread Hanna Czenczek
…specific case of virtio-scsi hot-plugging and -unplugging, you can use this patch: https://czenczek.de/0001-DONTMERGE-Fix-crash-on-scsi-unplug.patch [1] https://lists.nongnu.org/archive/html/qemu-block/2024-01/msg00317.html Hanna Czenczek (2): virtio-scsi: Attach event vq notifier with no_poll…

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-01-24 Thread Hanna Czenczek
On 23.01.24 18:10, Kevin Wolf wrote: Am 23.01.2024 um 17:40 hat Hanna Czenczek geschrieben: On 21.12.23 22:23, Kevin Wolf wrote: From: Stefan Hajnoczi Stop depending on the AioContext lock and instead access SCSIDevice->requests from only one thread at a time: - When the VM is running o

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-01-23 Thread Hanna Czenczek
On 23.01.24 18:10, Kevin Wolf wrote: Am 23.01.2024 um 17:40 hat Hanna Czenczek geschrieben: On 21.12.23 22:23, Kevin Wolf wrote: From: Stefan Hajnoczi Stop depending on the AioContext lock and instead access SCSIDevice->requests from only one thread at a time: - When the VM is running o

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-01-23 Thread Hanna Czenczek
On 23.01.24 17:40, Hanna Czenczek wrote: On 21.12.23 22:23, Kevin Wolf wrote: From: Stefan Hajnoczi Stop depending on the AioContext lock and instead access SCSIDevice->requests from only one thread at a time: - When the VM is running only the BlockBackend's AioContext may access…

Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread

2024-01-23 Thread Hanna Czenczek
On 21.12.23 22:23, Kevin Wolf wrote: From: Stefan Hajnoczi Stop depending on the AioContext lock and instead access SCSIDevice->requests from only one thread at a time: - When the VM is running only the BlockBackend's AioContext may access the requests list. - When the VM is stopped only the

Re: [RFC 0/3] aio-posix: call ->poll_end() when removing AioHandler

2024-01-23 Thread Hanna Czenczek
On 02.01.24 16:24, Hanna Czenczek wrote: [...] I’ve attached the preliminary patch that I didn’t get to send (or test much) last year.  Not sure if it has the same CPU-usage-spike issue Fiona was seeing, the only functional difference is that I notify the vq after attaching the notifiers

Re: [RFC 0/3] aio-posix: call ->poll_end() when removing AioHandler

2024-01-23 Thread Hanna Czenczek
On 23.01.24 12:12, Fiona Ebner wrote: [...] I noticed poll_set_started() is not called, because ctx->fdmon_ops->need_wait(ctx) was true, i.e. ctx->poll_disable_cnt was positive (I'm using fdmon_poll). I then found this is because of the notifier for the event vq, being attached with virtio_qu

Re: [RFC 0/3] aio-posix: call ->poll_end() when removing AioHandler

2024-01-23 Thread Hanna Czenczek
On 22.01.24 18:52, Hanna Czenczek wrote: On 22.01.24 18:41, Hanna Czenczek wrote: On 05.01.24 15:30, Fiona Ebner wrote: Am 05.01.24 um 14:43 schrieb Fiona Ebner: Am 03.01.24 um 14:35 schrieb Paolo Bonzini: On 1/3/24 12:40, Fiona Ebner wrote: I'm happy to report that I cannot reproduc

Re: [RFC 0/3] aio-posix: call ->poll_end() when removing AioHandler

2024-01-22 Thread Hanna Czenczek
On 22.01.24 18:41, Hanna Czenczek wrote: On 05.01.24 15:30, Fiona Ebner wrote: Am 05.01.24 um 14:43 schrieb Fiona Ebner: Am 03.01.24 um 14:35 schrieb Paolo Bonzini: On 1/3/24 12:40, Fiona Ebner wrote: I'm happy to report that I cannot reproduce the CPU-usage-spike issue with the patch,

Re: [RFC 0/3] aio-posix: call ->poll_end() when removing AioHandler

2024-01-22 Thread Hanna Czenczek
On 05.01.24 15:30, Fiona Ebner wrote: Am 05.01.24 um 14:43 schrieb Fiona Ebner: Am 03.01.24 um 14:35 schrieb Paolo Bonzini: On 1/3/24 12:40, Fiona Ebner wrote: I'm happy to report that I cannot reproduce the CPU-usage-spike issue with the patch, but I did run into an assertion failure when try

Re: [RFC 0/3] aio-posix: call ->poll_end() when removing AioHandler

2024-01-02 Thread Hanna Czenczek
On 02.01.24 16:53, Paolo Bonzini wrote: On Tue, Jan 2, 2024 at 4:24 PM Hanna Czenczek wrote: I’ve attached the preliminary patch that I didn’t get to send (or test much) last year. Not sure if it has the same CPU-usage-spike issue Fiona was seeing, the only functional difference is that I

Re: [RFC 0/3] aio-posix: call ->poll_end() when removing AioHandler

2024-01-02 Thread Hanna Czenczek
…that I notify the vq after attaching the notifiers instead of before. Hanna From 451aae74fc19a6ea5cd6381247cd9202571651e8 Mon Sep 17 00:00:00 2001 From: Hanna Czenczek Date: Wed, 6 Dec 2023 18:24:55 +0100 Subject: [PATCH] Keep notifications disabled during drain Preliminary patch with a p…

[PULL 2/3] block/file-posix: fix update_zones_wp() caller

2023-11-06 Thread Hanna Czenczek
…Message-Id: <20230825040556.4217-1-faithilike...@gmail.com> Reviewed-by: Stefan Hajnoczi [hreitz: Rebased and fixed comment spelling] Signed-off-by: Hanna Czenczek --- block/file-posix.c | 7 +-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/block/file-posix.c b/block/file-p…

[PULL 3/3] file-posix: fix over-writing of returning zone_append offset

2023-11-06 Thread Hanna Czenczek
…t through s->offset. Also, remove "offset" from BDRVRawState as there is no usage anymore. Fixes: 4751d09adcc3 ("block: introduce zone append write for zoned devices") Signed-off-by: Naohiro Aota Message-Id: <20231030073853.2601162-1-naohiro.a...@wdc.com> Reviewed-…

[PULL 0/3] Block patches

2023-11-06 Thread Hanna Czenczek
The following changes since commit 3e01f1147a16ca566694b97eafc941d62fa1e8d8: Merge tag 'pull-sp-20231105' of https://gitlab.com/rth7680/qemu into staging (2023-11-06 09:34:22 +0800) are available in the Git repository at: https://gitlab.com/hreitz/qemu.git tags/pull-block-2023-11-06 for you to fetch changes up to…

[PULL 1/3] qcow2: keep reference on zeroize with discard-no-unref enabled

2023-11-06 Thread Hanna Czenczek
…Signed-off-by: Jean-Louis Dupond Message-Id: <20231003125236.216473-2-jean-lo...@dupond.be> [hreitz: Made the documentation change more verbose, as discussed on-list] Signed-off-by: Hanna Czenczek --- qapi/block-core.json | 24 ++-- block/qcow2-cluster.…

Re: [PATCH] file-posix: fix over-writing of returning zone_append offset

2023-11-06 Thread Hanna Czenczek
On 30.10.23 08:38, Naohiro Aota wrote: raw_co_zone_append() sets "s->offset" where "BDRVRawState *s". This pointer is used later at raw_co_prw() to save the block address where the data is written. When multiple IOs are on-going at the same time, a later IO's raw_co_zone_append() call over-writes…
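The idea behind the fix — keeping the appended-write result per request instead of in shared driver state — can be sketched as follows; the names are illustrative, not the actual block/file-posix.c code:

```c
#include <assert.h>
#include <stdint.h>

/* Race described above: if every in-flight request reported its append
 * position through one shared BDRVRawState field, concurrent
 * completions would clobber each other's results. The fix gives each
 * request its own result slot, reached through a per-request pointer. */
struct zone_req {
    uint64_t *offset;   /* per-request result slot */
};

static void zone_append_complete(struct zone_req *req, uint64_t written_at)
{
    /* Writes only this request's slot; no shared field involved. */
    *req->offset = written_at;
}
```

With this shape, two requests completing in any order each see their own write position, which is exactly what the shared-field version could not guarantee.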

Re: [PATCH v2 09/10] block: Convert qmp_query_block() to coroutine_fn

2023-11-06 Thread Hanna Czenczek
On 09.06.23 22:19, Fabiano Rosas wrote: This is another caller of bdrv_get_allocated_file_size() that needs to be converted to a coroutine because that function will be made asynchronous when called (indirectly) from the QMP dispatcher. This QMP command is a candidate because it calls bdrv_do_qu

Re: [PATCH v2 10/10] block: Add a thread-pool version of fstat

2023-11-06 Thread Hanna Czenczek
…| 4 +++- 2 files changed, 40 insertions(+), 4 deletions(-) Reviewed-by: Hanna Czenczek

Re: [PATCH v2 05/10] block: Convert bdrv_query_block_graph_info to coroutine

2023-11-06 Thread Hanna Czenczek
…3 files changed, 12 insertions(+), 8 deletions(-) Reviewed-by: Hanna Czenczek

Re: [PATCH v2 04/10] block: Temporarily mark bdrv_co_get_allocated_file_size as mixed

2023-11-06 Thread Hanna Czenczek
…coroutine. Signed-off-by: Fabiano Rosas Reviewed-by: Eric Blake --- include/block/block-io.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) Reviewed-by: Hanna Czenczek

Re: [PATCH v2 03/10] block: Allow the wrapper script to see functions declared in qapi.h

2023-11-06 Thread Hanna Czenczek
…-coroutine-wrapper.py | 1 + 2 files changed, 2 insertions(+) Reviewed-by: Hanna Czenczek

Re: [PATCH v2 09/10] block: Convert qmp_query_block() to coroutine_fn

2023-11-06 Thread Hanna Czenczek
On 09.06.23 22:19, Fabiano Rosas wrote: This is another caller of bdrv_get_allocated_file_size() that needs to be converted to a coroutine because that function will be made asynchronous when called (indirectly) from the QMP dispatcher. This QMP command is a candidate because it calls bdrv_do_qu

Re: [PATCH v2 08/10] block: Don't query all block devices at hmp_nbd_server_start

2023-11-06 Thread Hanna Czenczek
On 09.06.23 22:19, Fabiano Rosas wrote: We're currently doing a full query-block just to enumerate the devices for qmp_nbd_server_add and then discarding the BlockInfoList afterwards. Alter hmp_nbd_server_start to instead iterate explicitly over the block_backends list. This allows the removal o

Re: [PATCH v2 06/10] block: Convert bdrv_block_device_info into co_wrapper

2023-11-06 Thread Hanna Czenczek
On 09.06.23 22:19, Fabiano Rosas wrote: We're converting callers of bdrv_get_allocated_file_size() to run in coroutines because that function will be made asynchronous when called (indirectly) from the QMP dispatcher. This function is a candidate because it calls bdrv_query_image_info() -> bdrv_…

Re: [PATCH v2 07/10] block: Convert qmp_query_named_block_nodes to coroutine

2023-11-06 Thread Hanna Czenczek
On 09.06.23 22:19, Fabiano Rosas wrote: From: Lin Ma We're converting callers of bdrv_get_allocated_file_size() to run in coroutines because that function will be made asynchronous when called (indirectly) from the QMP dispatcher. This QMP command is a candidate because it indirectly calls bdr

Re: [PATCH 7/7] iotests/271: check disk usage on subcluster-based discard/unmap

2023-11-03 Thread Hanna Czenczek
On 03.11.23 16:51, Hanna Czenczek wrote: On 20.10.23 23:56, Andrey Drobyshev wrote: [...] @@ -528,6 +543,14 @@ for use_backing_file in yes no; do   else   _make_test_img -o extended_l2=on 1M   fi +    # Write cluster #0 and discard its subclusters #0-#3 +    $QEMU_IO -c

Re: [PATCH 4/7] qcow2: make subclusters discardable

2023-11-03 Thread Hanna Czenczek
On 20.10.23 23:56, Andrey Drobyshev wrote: This commit makes the discard operation work on the subcluster level rather than cluster level. It introduces discard_l2_subclusters() function and makes use of it in qcow2 discard implementation, much like it's done with zero_in_l2_slice() / zero_l2_su

Re: [PATCH 7/7] iotests/271: check disk usage on subcluster-based discard/unmap

2023-11-03 Thread Hanna Czenczek
On 20.10.23 23:56, Andrey Drobyshev wrote: Add _verify_du_delta() checker which is used to check that real disk usage delta meets the expectations. For now we use it for checking that subcluster-based discard/unmap operations lead to actual disk usage decrease (i.e. PUNCH_HOLE operation is perfo

Re: [PATCH 6/7] iotests/common.rc: add disk_usage function

2023-11-03 Thread Hanna Czenczek
On 20.10.23 23:56, Andrey Drobyshev wrote: Move the definition from iotests/250 to common.rc. This is used to detect real disk usage of sparse files. In particular, we want to use it for checking subclusters-based discards. Signed-off-by: Andrey Drobyshev --- tests/qemu-iotests/250 |
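For illustration, the "real disk usage of sparse files" measurement that the shell helper performs can also be done in C via `stat()`: `st_blocks` counts allocated 512-byte units, which is what distinguishes a sparse file's actual allocation from its apparent `st_size`. This is a sketch, not the iotests helper itself:

```c
#include <assert.h>
#include <fcntl.h>     /* open() for the usage example */
#include <stdint.h>
#include <sys/stat.h>
#include <unistd.h>    /* ftruncate(), close(), unlink() */

/* Real disk usage of a (possibly sparse) file in bytes, or -1 on
 * error. st_blocks is in 512-byte units on POSIX systems. */
static int64_t disk_usage(const char *path)
{
    struct stat st;

    if (stat(path, &st) < 0) {
        return -1;
    }
    return (int64_t)st.st_blocks * 512;
}
```

A freshly `ftruncate()`d 1 MiB file typically reports a 1 MiB `st_size` but close to zero allocated blocks, so `disk_usage()` stays well below the apparent size until data is actually written.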

Re: [PATCH 5/7] qcow2: zero_l2_subclusters: fall through to discard operation when requested

2023-11-03 Thread Hanna Czenczek
On 20.10.23 23:56, Andrey Drobyshev wrote: When zeroizing subclusters within single cluster, detect usage of the BDRV_REQ_MAY_UNMAP flag and fall through to the subcluster-based discard operation, much like it's done with the cluster-based discards. That way subcluster-aligned operations "qemu-i

Re: [PATCH v5 0/7] vhost-user: Back-end state migration

2023-11-02 Thread Hanna Czenczek
On 16.10.23 15:42, Hanna Czenczek wrote: Based-on: <20231004014532.1228637-1-stefa...@redhat.com> ([PATCH v2 0/3] vhost: clean up device reset) Based-on: <20231016083201.23736-1-hre...@redhat.com> ([PATCH] vhost-user: Fix protocol feature bit conflict)

Re: [PATCH] block-jobs: add final flush

2023-11-02 Thread Hanna Czenczek
On 01.11.23 20:53, Vladimir Sementsov-Ogievskiy wrote: On 31.10.23 17:05, Hanna Czenczek wrote: On 04.10.23 15:56, Vladimir Sementsov-Ogievskiy wrote: From: Vladimir Sementsov-Ogievskiy Actually block job is not completed without the final flush. It's rather unexpected to have broken target…

Re: [PATCH 4/7] qcow2: make subclusters discardable

2023-10-31 Thread Hanna Czenczek
(Sorry, opened another reply window, forgot I already had one open...) On 20.10.23 23:56, Andrey Drobyshev wrote: This commit makes the discard operation work on the subcluster level rather than cluster level. It introduces discard_l2_subclusters() function and makes use of it in qcow2 discard

Re: [PATCH 4/7] qcow2: make subclusters discardable

2023-10-31 Thread Hanna Czenczek
On 20.10.23 23:56, Andrey Drobyshev wrote: This commit makes the discard operation work on the subcluster level rather than cluster level. It introduces discard_l2_subclusters() function and makes use of it in qcow2 discard implementation, much like it's done with zero_in_l2_slice() / zero_l2_su

Re: [PATCH 3/7] qcow2: zeroize the entire cluster when there're no non-zero subclusters

2023-10-31 Thread Hanna Czenczek
…Andrey Drobyshev --- block/qcow2-cluster.c | 18 +++--- 1 file changed, 15 insertions(+), 3 deletions(-) Reviewed-by: Hanna Czenczek

Re: [PATCH 2/7] qcow2: add get_sc_range_info() helper for working with subcluster ranges

2023-10-31 Thread Hanna Czenczek
On 20.10.23 23:56, Andrey Drobyshev wrote: This helper simply obtains the l2 table parameters of the cluster which contains the given subclusters range. Right now this info is being obtained and used by zero_l2_subclusters(). As we're about to introduce the subclusters discard operation, this h

Re: [PATCH 1/7] qcow2: make function update_refcount_discard() global

2023-10-31 Thread Hanna Czenczek
…changed, 6 insertions(+), 4 deletions(-) Reviewed-by: Hanna Czenczek

Re: [PATCH v2 1/1] qemu-img: do not erase destination file in qemu-img dd command

2023-10-31 Thread Hanna Czenczek
On 01.10.23 22:46, Denis V. Lunev wrote: Can you please not top-post. This makes the discussion complex. This approach is followed in this mailing list and in other similar lists like LKML. On 10/1/23 19:08, Mike Maslenkin wrote: I thought about "conv=notrunc", but my main concern is changed vi

Re: [PATCH] block-jobs: add final flush

2023-10-31 Thread Hanna Czenczek
On 04.10.23 15:56, Vladimir Sementsov-Ogievskiy wrote: From: Vladimir Sementsov-Ogievskiy Actually block job is not completed without the final flush. It's rather unexpected to have broken target when job was successfully completed long ago and now we fail to flush or process just crashed/killed…

Re: [PATCH 2/2] iotests: Test media change with iothreads

2023-10-31 Thread Hanna Czenczek
…1 file changed, 4 insertions(+), 2 deletions(-) Reviewed-by: Hanna Czenczek

Re: [PATCH 1/2] block: Fix locking in media change monitor commands

2023-10-31 Thread Hanna Czenczek
…d ask. In any case, this change here is necessary, so: Reviewed-by: Hanna Czenczek

Re: [PATCH v2] block/file-posix: fix update_zones_wp() caller

2023-10-31 Thread Hanna Czenczek
On 25.08.23 06:05, Sam Li wrote: When a zoned request fails, it needs to update only the wp of the target zones, so as not to disrupt the in-flight writes on the other zones. The wp is updated successfully after the request completes. Fixed the callers with the right offset and nr_zones. Signed-off

Re: [PATCH v3] qcow2: keep reference on zeroize with discard-no-unref enabled

2023-10-30 Thread Hanna Czenczek
On 03.10.23 14:52, Jean-Louis Dupond wrote: When the discard-no-unref flag is enabled, we keep the reference for normal discard requests. But when a discard is executed on a snapshot/qcow2 image with backing, the discards are saved as zero clusters in the snapshot image. When committing the snap

Re: [PATCH v3] qcow2: keep reference on zeroize with discard-no-unref enabled

2023-10-27 Thread Hanna Czenczek
On 03.10.23 14:52, Jean-Louis Dupond wrote: When the discard-no-unref flag is enabled, we keep the reference for normal discard requests. But when a discard is executed on a snapshot/qcow2 image with backing, the discards are saved as zero clusters in the snapshot image. When committing the snap

Re: [PATCH v8 1/5] qemu-iotests: Filter warnings about block migration being deprecated

2023-10-24 Thread Hanna Czenczek
Reviewed-by: Hanna Czenczek

Re: [PATCH v4 3/8] vhost-user.rst: Clarify enabling/disabling vrings

2023-10-18 Thread Hanna Czenczek
On 18.10.23 14:14, Michael S. Tsirkin wrote: On Wed, Oct 04, 2023 at 02:58:59PM +0200, Hanna Czenczek wrote: Currently, the vhost-user documentation says that rings are to be initialized in a disabled state when VHOST_USER_F_PROTOCOL_FEATURES is negotiated. However, by the time of feature

Re: [PATCH] vhost-user: Fix protocol feature bit conflict

2023-10-17 Thread Hanna Czenczek
On 17.10.23 09:53, Viresh Kumar wrote: On 17-10-23, 09:51, Hanna Czenczek wrote: Not that I’m really opposed to that, but I don’t see the problem with just doing that in the same work that makes qemu actually use this flag, exactly because it’s just a -1/+1 change. I can send a v2, but should

Re: [Virtio-fs] (no subject)

2023-10-17 Thread Hanna Czenczek
On 17.10.23 09:49, Viresh Kumar wrote: On 13-10-23, 20:02, Hanna Czenczek wrote: On 10.10.23 16:35, Alex Bennée wrote: I was going to say there is also the rust-vmm vhost-user-master crates which we've imported: https://github.com/vireshk/vhost for the Xen Vhost Frontend:

Re: [PATCH] vhost-user: Fix protocol feature bit conflict

2023-10-17 Thread Hanna Czenczek
On 17.10.23 07:36, Viresh Kumar wrote: On 16-10-23, 12:40, Alex Bennée wrote: Viresh Kumar writes: On 16-10-23, 11:45, Manos Pitsidianakis wrote: On Mon, 16 Oct 2023 11:32, Hanna Czenczek wrote: diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h index 9f9ddf878d

[PATCH v5 5/7] vhost-user: Interface for migration state transfer

2023-10-16 Thread Hanna Czenczek
Add the interface for transferring the back-end's state during migration as defined previously in vhost-user.rst. Reviewed-by: Stefan Hajnoczi Signed-off-by: Hanna Czenczek --- include/hw/virtio/vhost-backend.h | 24 + include/hw/virtio/vhost-user.h| 1 + include/hw/virtio/vh

[PATCH v5 4/7] vhost-user.rst: Migrating back-end-internal state

2023-10-16 Thread Hanna Czenczek
or success via CHECK_DEVICE_STATE, which on the destination side includes checking for integrity (i.e. errors during deserialization). Reviewed-by: Stefan Hajnoczi Signed-off-by: Hanna Czenczek --- docs/interop/vhost-user.rst | 172 1 file changed, 172

[PATCH v5 2/7] vhost-user.rst: Clarify enabling/disabling vrings

2023-10-16 Thread Hanna Czenczek
Making it explicit that the enabled/disabled state is tracked even while the vring is stopped. Every vring is initialized in a disabled state, and SET_FEATURES without VHOST_USER_F_PROTOCOL_FEATURES simply becomes one way to enable all vrings. Reviewed-by: Stefan Hajnoczi Signed-off-by: Hanna Cze

[PATCH v5 3/7] vhost-user.rst: Introduce suspended state

2023-10-16 Thread Hanna Czenczek
completely stopped, i.e. all vrings are stopped, the back-end should cease to modify any state relating to the guest. Do this by calling it "suspended". Suggested-by: Stefan Hajnoczi Reviewed-by: Stefan Hajnoczi Signed-off-by: Hanna Czenczek --- docs/interop/vhost-use
