ng the state
from a virtio-fs back-end (virtiofsd) fails.
Signed-off-by: Hanna Czenczek
---
v2: As suggested by Peter, after vmsd->post_save(), change the condition
from `if (!ret)` to `if (!ret && ps_ret)` so we will not create an
error object in case of success (that would the
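(For illustration, a minimal sketch of how that v2 condition reads in context; the surrounding post_save handling and variable names are assumed from the discussion above:)

    if (vmsd->post_save) {
        int ps_ret = vmsd->post_save(opaque);
        if (!ret && ps_ret) {
            /* Only act on post_save() failure; on success (ps_ret == 0),
             * keep ret untouched and create no Error object. */
            ret = ps_ret;
            error_setg(errp, "post-save failed: %s", vmsd->name);
        }
    }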
On 15.10.24 18:06, Peter Xu wrote:
On Tue, Oct 15, 2024 at 04:15:15PM +0200, Hanna Czenczek wrote:
migration/savevm.c contains some calls to vmstate_save() that are
followed by migrate_set_error() if the integer return value indicates an
error. migrate_set_error() requires that the `Error
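(Roughly, the call pattern in question looks like this; a simplified sketch, assuming vmstate_save() gains an Error ** parameter as the series proposes:)

    Error *local_err = NULL;
    int ret = vmstate_save(f, se, vmdesc, &local_err);
    if (ret) {
        /* migrate_set_error() requires a valid Error object, which is
         * why vmstate_save() must be guaranteed to set it on failure. */
        migrate_set_error(migrate_get_current(), local_err);
        error_report_err(local_err);
    }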
ng the state
from a virtio-fs back-end (virtiofsd) fails.
Signed-off-by: Hanna Czenczek
---
migration/vmstate.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/migration/vmstate.c b/migration/vmstate.c
index ff5d589a6d..13532f2807 100644
--- a/migration/vmstate.c
rmat: Split raw_read_options()')
Signed-off-by: Kevin Wolf
---
block/raw-format.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
Reviewed-by: Hanna Czenczek
https://gitlab.com/qemu-project/qemu/-/pipelines, I
think that’s expected.
v2: Added patch 1, left patch 2 unchanged.
Hanna Czenczek (2):
virtio: Allow .get_vhost() without vhost_started
virtio: Always reset vhost devices
include/hw/virtio/virtio.h | 1 +
hw/display/vhost-user-gpu.c | 2
ed-by: Michael S. Tsirkin
Signed-off-by: Hanna Czenczek
---
hw/virtio/virtio.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 583a224163..35dfc01074 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -2150,8 +215
ons dereference some pointers (or return
offsets from them) that are probably guaranteed to be non-NULL when
vhost_started is true, but not necessarily otherwise. This patch makes
all such implementations check all such pointers, returning NULL if any
is NULL.
Signed-off-by: Hanna Czenczek
---
include
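(An illustrative sketch of that checking pattern for a made-up device type; VirtIOFoo is hypothetical, the real patch adjusts the existing .get_vhost() implementations:)

    static struct vhost_dev *virtio_foo_get_vhost(VirtIODevice *vdev)
    {
        VirtIOFoo *foo = VIRTIO_FOO(vdev);

        /* Only guaranteed non-NULL while vhost_started is true, so
         * check explicitly before dereferencing: */
        if (!foo->vhost) {
            return NULL;
        }
        return &foo->vhost->dev;
    }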
On 23.07.24 12:45, Michael S. Tsirkin wrote:
On Tue, Jul 23, 2024 at 12:18:48PM +0200, Hanna Czenczek wrote:
On 22.07.24 23:32, Richard Henderson wrote:
On 7/22/24 10:16, Michael S. Tsirkin wrote:
A couple of fixes are outstanding, will merge later.
The following changes since commit
On 22.07.24 23:32, Richard Henderson wrote:
On 7/22/24 10:16, Michael S. Tsirkin wrote:
A couple of fixes are outstanding, will merge later.
The following changes since commit
a87a7c449e532130d4fa8faa391ff7e1f04ed660:
Merge tag 'pull-loongarch-20240719'
of https://gitlab.com/gaosong/qemu
On 10.07.24 18:28, Stefan Hajnoczi wrote:
On Wed, 10 Jul 2024 at 13:25, Hanna Czenczek wrote:
Requiring `vhost_started` to be true for resetting vhost devices in
`virtio_reset()` seems like the wrong condition: Most importantly, the
preceding `virtio_set_status(vdev, 0)` call will (for vhost
On 10.07.24 15:39, Matias Ezequiel Vara Larsen wrote:
Hello Hanna,
On Wed, Jul 10, 2024 at 01:23:10PM +0200, Hanna Czenczek wrote:
Requiring `vhost_started` to be true for resetting vhost devices in
`virtio_reset()` seems like the wrong condition: Most importantly, the
preceding
On 05.06.24 15:25, Jean-Louis Dupond wrote:
When discard is not set to unmap/on, we should not allow setting
discard-no-unref.
Is this important? Technically, it’s an incompatible change, and would
require a deprecation warning first.
(I can imagine people setting this option indiscriminate
On 05.06.24 15:25, Jean-Louis Dupond wrote:
When doing a measure on an image with a backing file and
discard-no-unref is enabled, the code should take this into account.
That doesn’t make sense to me. As far as I understand, 'measure' is
supposed to report how much space you need for a given
that we can indeed send a reset to this
vhost device, by not just checking `k->get_vhost != NULL` (introduced by
commit 95e1019a4a9), but also that the vhost back-end is connected
(`hdev = k->get_vhost(); hdev != NULL && hdev->vhost_ops != NULL`).
Signed-off-by: Hanna Czenczek
--
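(Spelled out, the check quoted above would look something like this; placement in virtio_reset() and the vhost_reset_device() call are assumptions based on this thread:)

    VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
    struct vhost_dev *hdev = k->get_vhost ? k->get_vhost(vdev) : NULL;

    if (hdev && hdev->vhost_ops) {
        /* The vhost back-end is connected, so a reset can be sent. */
        vhost_reset_device(hdev);
    }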
breaks installing Windows from USB
hw/usb/dev-storage-classic.c | 9 -
1 file changed, 9 deletions(-)
Reviewed-by: Hanna Czenczek
On 10.02.24 09:46, Michael Tokarev wrote:
09.02.2024 19:51, Hanna Czenczek:
On 09.02.24 15:08, Michael Tokarev wrote:
02.02.2024 17:47, Hanna Czenczek:
Hi,
Without the AioContext lock, a BB's context may kind of change at any
time (unless it has a root node, and I/O requests are pending)
On 09.02.24 15:38, Michael Tokarev wrote:
02.02.2024 18:31, Hanna Czenczek:
Commit d3f6f294aeadd5f88caf0155e4360808c95b3146 ("virtio-blk: always set
ioeventfd during startup") has made virtio_blk_start_ioeventfd() always
kick the virtqueue (set the ioeventfd), regardless of whether
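(For context: "kicking" the virtqueue here means setting its host notifier, i.e. the ioeventfd, so the handler runs even for requests the guest queued before the notifier was attached. A one-line sketch, placement in the startup path assumed:)

    event_notifier_set(virtio_queue_get_host_notifier(vq));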
On 09.02.24 15:08, Michael Tokarev wrote:
02.02.2024 17:47, Hanna Czenczek:
Hi,
Without the AioContext lock, a BB's context may kind of change at any
time (unless it has a root node, and I/O requests are pending). That
also means that its own context (BlockBackend.ctx) and that of its
On 06.02.24 17:53, Stefan Hajnoczi wrote:
On Fri, Feb 02, 2024 at 03:47:53PM +0100, Hanna Czenczek wrote:
Hi,
Without the AioContext lock, a BB's context may kind of change at any
time (unless it has a root node, and I/O requests are pending). That
also means that its own co
On 06.02.24 15:04, Stefan Hajnoczi wrote:
QEMU's coding style generally forbids C99 mixed declarations.
Signed-off-by: Stefan Hajnoczi
---
hw/block/virtio-blk.c | 25 ++---
1 file changed, 14 insertions(+), 11 deletions(-)
Reviewed-by: Hanna Czenczek
there is no race.
Suggested-by: Hanna Reitz
Signed-off-by: Stefan Hajnoczi
---
qapi/qmp-dispatch.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
Reviewed-by: Hanna Czenczek
On 05.02.24 18:26, Stefan Hajnoczi wrote:
The VirtIOBlock::rq field has had the type void * since its introduction
in commit 869a5c6df19a ("Stop VM on error in virtio-blk. (Gleb
Natapov)").
Perhaps this was done to avoid the forward declaration of
VirtIOBlockReq.
Hanna Czenczek p
On 05.02.24 18:26, Stefan Hajnoczi wrote:
Hanna Czenczek noted that the array index in
virtio_blk_dma_restart_cb() is not bounds-checked:
g_autofree VirtIOBlockReq **vq_rq = g_new0(VirtIOBlockReq *, num_queues);
...
while (rq) {
VirtIOBlockReq *next = rq->next;
uint1
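(The fix adds a bounds check on that index; a sketch of how the quoted loop might continue, with the assertion being the point and the exact code assumed:)

    while (rq) {
        VirtIOBlockReq *next = rq->next;
        uint16_t idx = virtio_get_queue_index(rq->vq);

        /* idx comes from restored requests, so assert that it fits the
         * freshly allocated vq_rq[] array: */
        assert(idx < num_queues);
        rq->next = vq_rq[idx];
        vq_rq[idx] = rq;
        rq = next;
    }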
}
Later on we access s->vq_aio_context[0] under the assumption that there
is at least one virtqueue. Hanna Czenczek noted that
it would help to show that the array index is already valid.
Add an assertion to document that s->vq_aio_context[0] is always
safe...and catch future code c
On 05.02.24 18:26, Stefan Hajnoczi wrote:
Hanna Czenczek noticed that the safety of
`vq_aio_context[vq->value] = ctx;` with user-defined vq->value inputs is
not obvious.
The code is structured in validate() + apply() steps so input validation
is there, but it happens way earlier and th
the notifiers.
Buglink: https://issues.redhat.com/browse/RHEL-3934
Signed-off-by: Hanna Czenczek
---
include/block/aio.h | 7 ++-
hw/virtio/virtio.c | 42 ++
2 files changed, 48 insertions(+), 1 deletion(-)
diff --git a/include/block/aio.h b/include
is version (v1 too) just ensures the notifier
is enabled after the drain, regardless of its state before.
- Use event_notifier_set() instead of virtio_queue_notify() in patch 2
- Added patch 3
Hanna Czenczek (3):
virtio-scsi: Attach event vq notifier with no_poll
virtio: Re-enable not
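(In code, "enabled after the drain, regardless of its state before" roughly means the following; the event_notifier_set() call is the one mentioned above, its placement in virtio_queue_aio_attach_host_notifier() is assumed:)

    virtio_queue_set_notification(vq, 1);
    /* Process any requests that arrived while notifications were off: */
    event_notifier_set(virtio_queue_get_host_notifier(vq));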
d771c36fd126
("virtio-scsi: implement BlockDevOps->drained_begin()")
Reviewed-by: Stefan Hajnoczi
Tested-by: Fiona Ebner
Reviewed-by: Fiona Ebner
Signed-off-by: Hanna Czenczek
---
hw/scsi/virtio-scsi.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git
reuse that function.
Signed-off-by: Hanna Czenczek
---
hw/block/virtio-blk.c | 21 ++---
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 227d83569f..22b8eef69b 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/vi
to that
effect.
In addition, because the context can be set and queried from different
threads concurrently, it has to be accessed with atomic operations.
Buglink: https://issues.redhat.com/browse/RHEL-19381
Suggested-by: Kevin Wolf
Signed-off-by: Hanna Czenczek
---
block/block
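(A minimal sketch of what "accessed with atomic operations" means here, using QEMU's qatomic helpers on the BlockBackend.ctx field mentioned in this thread; the accessor names are illustrative:)

    static AioContext *blk_ctx_read(BlockBackend *blk)
    {
        return qatomic_read(&blk->ctx);
    }

    static void blk_ctx_write(BlockBackend *blk, AioContext *ctx)
    {
        qatomic_set(&blk->ctx, ctx);
    }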
ne through bdrv_try_change_aio_context(), which
creates a drained section. With this patch, we keep the BB in-flight
counter elevated throughout, so we know the BB's context cannot change.
Signed-off-by: Hanna Czenczek
---
hw/scsi/scsi-bus.c | 30 +-
1 file changed
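(As a sketch, keeping the counter elevated across the scheduled work looks like this; blk_inc_in_flight()/blk_dec_in_flight() are existing helpers, while purge_requests_bh is a hypothetical BH name:)

    blk_inc_in_flight(blk);
    aio_bh_schedule_oneshot(blk_get_aio_context(blk),
                            purge_requests_bh, opaque);
    /* purge_requests_bh() calls blk_dec_in_flight(blk) when done; until
     * then, any bdrv_try_change_aio_context() drain has to wait, so the
     * BB's context cannot change under us. */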
The fact that this prevents the BB AioContext from changing while the BH
is scheduled/running then is just a nice side effect.
Hanna Czenczek (2):
block-backend: Allow concurrent context changes
scsi: Await request purging
block/block-backend.c | 22 +++---
hw/scsi/scsi-bu
On 01.02.24 16:25, Hanna Czenczek wrote:
On 01.02.24 15:28, Stefan Hajnoczi wrote:
[...]
Did you find a scenario where the virtio-scsi AioContext is different
from the scsi-hd BB's Aiocontext?
Technically, that’s the reason for this thread, specifically that
virtio_scsi_hotu
On 01.02.24 16:25, Hanna Czenczek wrote:
[...]
It just seems simpler to me to not rely on the BB's context at all.
Hm, I now see the problem is that the processing (and scheduling) is
largely done in generic SCSI code, which doesn’t have access to
virtio-scsi’s context, only to that o
On 01.02.24 15:28, Stefan Hajnoczi wrote:
On Thu, Feb 01, 2024 at 03:10:12PM +0100, Hanna Czenczek wrote:
On 31.01.24 21:35, Stefan Hajnoczi wrote:
On Fri, Jan 26, 2024 at 04:24:49PM +0100, Hanna Czenczek wrote:
On 26.01.24 14:18, Kevin Wolf wrote:
On 25.01.2024 at 18:32, Hanna Czenczek wrote
On 31.01.24 21:35, Stefan Hajnoczi wrote:
On Fri, Jan 26, 2024 at 04:24:49PM +0100, Hanna Czenczek wrote:
On 26.01.24 14:18, Kevin Wolf wrote:
On 25.01.2024 at 18:32, Hanna Czenczek wrote:
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote
On 01.02.24 11:21, Kevin Wolf wrote:
On 01.02.2024 at 10:43, Hanna Czenczek wrote:
On 31.01.24 11:17, Kevin Wolf wrote:
On 29.01.2024 at 17:30, Hanna Czenczek wrote:
I don’t like using drain as a form of lock specifically against AioContext
changes, but maybe Stefan is right
On 31.01.24 11:17, Kevin Wolf wrote:
On 29.01.2024 at 17:30, Hanna Czenczek wrote:
I don’t like using drain as a form of lock specifically against AioContext
changes, but maybe Stefan is right, and we should use it in this specific
case to get just the single problem fixed. (Though
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and instead access
SCSIDevice->requests from only one thread at a time:
- When the VM is running o
On 26.01.24 14:18, Kevin Wolf wrote:
On 25.01.2024 at 18:32, Hanna Czenczek wrote:
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and
On 25.01.24 19:18, Hanna Czenczek wrote:
On 25.01.24 19:03, Stefan Hajnoczi wrote:
On Wed, Jan 24, 2024 at 06:38:30PM +0100, Hanna Czenczek wrote:
[...]
@@ -3563,6 +3574,13 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
aio_set_event_notifier_poll(ctx
On 25.01.24 19:03, Stefan Hajnoczi wrote:
On Wed, Jan 24, 2024 at 06:38:30PM +0100, Hanna Czenczek wrote:
During drain, we do not care about virtqueue notifications, which is why
we remove the handlers on it. When removing those handlers, whether vq
notifications are enabled or not depends on
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and instead access
SCSIDevice->requests from only one thread at a time:
- When the VM is running o
On 24.01.24 22:53, Stefan Hajnoczi wrote:
On Wed, Jan 24, 2024 at 01:12:47PM +0100, Hanna Czenczek wrote:
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext
the notifiers.
Buglink: https://issues.redhat.com/browse/RHEL-3934
Signed-off-by: Hanna Czenczek
---
include/block/aio.h | 7 ++-
hw/virtio/virtio.c | 42 ++
2 files changed, 48 insertions(+), 1 deletion(-)
diff --git a/include/block/aio.h b/include
d771c36fd126
("virtio-scsi: implement BlockDevOps->drained_begin()")
Signed-off-by: Hanna Czenczek
---
hw/scsi/virtio-scsi.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 690aceec45..9f02ceea09 100644
specific case of
virtio-scsi hot-plugging and -unplugging, you can use this patch:
https://czenczek.de/0001-DONTMERGE-Fix-crash-on-scsi-unplug.patch
[1] https://lists.nongnu.org/archive/html/qemu-block/2024-01/msg00317.html
Hanna Czenczek (2):
virtio-scsi: Attach event vq notifier with no_poll
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and instead access
SCSIDevice->requests from only one thread at a time:
- When the VM is running o
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and instead access
SCSIDevice->requests from only one thread at a time:
- When the VM is running o
On 23.01.24 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and instead access
SCSIDevice->requests from only one thread at a time:
- When the VM is running only the BlockBackend's AioContext may acces
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and instead access
SCSIDevice->requests from only one thread at a time:
- When the VM is running only the BlockBackend's AioContext may access
the requests list.
- When the VM is stopped only the
On 02.01.24 16:24, Hanna Czenczek wrote:
[...]
I’ve attached the preliminary patch that I didn’t get to send (or test
much) last year. Not sure if it has the same CPU-usage-spike issue
Fiona was seeing, the only functional difference is that I notify the
vq after attaching the notifiers
On 23.01.24 12:12, Fiona Ebner wrote:
[...]
I noticed poll_set_started() is not called, because
ctx->fdmon_ops->need_wait(ctx) was true, i.e. ctx->poll_disable_cnt was
positive (I'm using fdmon_poll). I then found this is because of the
notifier for the event vq, being attached with
virtio_qu
On 22.01.24 18:52, Hanna Czenczek wrote:
On 22.01.24 18:41, Hanna Czenczek wrote:
On 05.01.24 15:30, Fiona Ebner wrote:
On 05.01.24 at 14:43, Fiona Ebner wrote:
On 03.01.24 at 14:35, Paolo Bonzini wrote:
On 1/3/24 12:40, Fiona Ebner wrote:
I'm happy to report that I cannot reproduc
On 22.01.24 18:41, Hanna Czenczek wrote:
On 05.01.24 15:30, Fiona Ebner wrote:
On 05.01.24 at 14:43, Fiona Ebner wrote:
On 03.01.24 at 14:35, Paolo Bonzini wrote:
On 1/3/24 12:40, Fiona Ebner wrote:
I'm happy to report that I cannot reproduce the CPU-usage-spike issue
with the patch,
On 05.01.24 15:30, Fiona Ebner wrote:
On 05.01.24 at 14:43, Fiona Ebner wrote:
On 03.01.24 at 14:35, Paolo Bonzini wrote:
On 1/3/24 12:40, Fiona Ebner wrote:
I'm happy to report that I cannot reproduce the CPU-usage-spike issue
with the patch, but I did run into an assertion failure when try
On 02.01.24 16:53, Paolo Bonzini wrote:
On Tue, Jan 2, 2024 at 4:24 PM Hanna Czenczek wrote:
I’ve attached the preliminary patch that I didn’t get to send (or test
much) last year. Not sure if it has the same CPU-usage-spike issue
Fiona was seeing, the only functional difference is that I
t I notify the vq
after attaching the notifiers instead of before.
Hanna
From 451aae74fc19a6ea5cd6381247cd9202571651e8 Mon Sep 17 00:00:00 2001
From: Hanna Czenczek
Date: Wed, 6 Dec 2023 18:24:55 +0100
Subject: [PATCH] Keep notifications disabled during drain
Preliminary patch with a p
Message-Id: <20230825040556.4217-1-faithilike...@gmail.com>
Reviewed-by: Stefan Hajnoczi
[hreitz: Rebased and fixed comment spelling]
Signed-off-by: Hanna Czenczek
---
block/file-posix.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/block/file-posix.c b/block/file-p
t through s->offset. Also, remove "offset" from BDRVRawState as
there is no usage anymore.
Fixes: 4751d09adcc3 ("block: introduce zone append write for zoned devices")
Signed-off-by: Naohiro Aota
Message-Id: <20231030073853.2601162-1-naohiro.a...@wdc.com>
Reviewed-
The following changes since commit 3e01f1147a16ca566694b97eafc941d62fa1e8d8:
Merge tag 'pull-sp-20231105' of https://gitlab.com/rth7680/qemu into staging
(2023-11-06 09:34:22 +0800)
are available in the Git repository at:
https://gitlab.com/hreitz/qemu.git tags/pull-block-2023-11-06
for yo
-off-by: Jean-Louis Dupond
Message-Id: <20231003125236.216473-2-jean-lo...@dupond.be>
[hreitz: Made the documentation change more verbose, as discussed
on-list]
Signed-off-by: Hanna Czenczek
---
qapi/block-core.json | 24 ++--
block/qcow2-cluster.
On 30.10.23 08:38, Naohiro Aota wrote:
raw_co_zone_append() sets "s->offset", where s is a "BDRVRawState *". This pointer
is used later in raw_co_prw() to save the block address where the data is
written.
When multiple IOs are ongoing at the same time, a later IO's
raw_co_zone_append() call over-write
On 09.06.23 22:19, Fabiano Rosas wrote:
This is another caller of bdrv_get_allocated_file_size() that needs to
be converted to a coroutine because that function will be made
asynchronous when called (indirectly) from the QMP dispatcher.
This QMP command is a candidate because it calls bdrv_do_qu
| 4 +++-
2 files changed, 40 insertions(+), 4 deletions(-)
Reviewed-by: Hanna Czenczek
-
3 files changed, 12 insertions(+), 8 deletions(-)
Reviewed-by: Hanna Czenczek
coroutine.
Signed-off-by: Fabiano Rosas
Reviewed-by: Eric Blake
---
include/block/block-io.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Hanna Czenczek
-coroutine-wrapper.py | 1 +
2 files changed, 2 insertions(+)
Reviewed-by: Hanna Czenczek
On 09.06.23 22:19, Fabiano Rosas wrote:
This is another caller of bdrv_get_allocated_file_size() that needs to
be converted to a coroutine because that function will be made
asynchronous when called (indirectly) from the QMP dispatcher.
This QMP command is a candidate because it calls bdrv_do_qu
On 09.06.23 22:19, Fabiano Rosas wrote:
We're currently doing a full query-block just to enumerate the devices
for qmp_nbd_server_add and then discarding the BlockInfoList
afterwards. Alter hmp_nbd_server_start to instead iterate explicitly
over the block_backends list.
This allows the removal o
On 09.06.23 22:19, Fabiano Rosas wrote:
We're converting callers of bdrv_get_allocated_file_size() to run in
coroutines because that function will be made asynchronous when called
(indirectly) from the QMP dispatcher.
This function is a candidate because it calls bdrv_query_image_info()
-> bdrv_
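(The conversion pattern described relies on QEMU's generated coroutine wrappers, cf. scripts/block-coroutine-wrapper.py; a sketch of the resulting declarations, exact spelling assumed:)

    /* Coroutine version, doing the actual work: */
    int64_t coroutine_fn bdrv_co_get_allocated_file_size(BlockDriverState *bs);
    /* co_wrapper generates a synchronous wrapper around the above: */
    int64_t co_wrapper bdrv_get_allocated_file_size(BlockDriverState *bs);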
On 09.06.23 22:19, Fabiano Rosas wrote:
From: Lin Ma
We're converting callers of bdrv_get_allocated_file_size() to run in
coroutines because that function will be made asynchronous when called
(indirectly) from the QMP dispatcher.
This QMP command is a candidate because it indirectly calls
bdr
On 03.11.23 16:51, Hanna Czenczek wrote:
On 20.10.23 23:56, Andrey Drobyshev wrote:
[...]
@@ -528,6 +543,14 @@ for use_backing_file in yes no; do
else
_make_test_img -o extended_l2=on 1M
fi
+ # Write cluster #0 and discard its subclusters #0-#3
+ $QEMU_IO -c
On 20.10.23 23:56, Andrey Drobyshev wrote:
This commit makes the discard operation work on the subcluster level
rather than cluster level. It introduces discard_l2_subclusters()
function and makes use of it in qcow2 discard implementation, much like
it's done with zero_in_l2_slice() / zero_l2_su
On 20.10.23 23:56, Andrey Drobyshev wrote:
Add _verify_du_delta() checker which is used to check that real disk
usage delta meets the expectations. For now we use it for checking that
subcluster-based discard/unmap operations lead to actual disk usage
decrease (i.e. PUNCH_HOLE operation is perfo
On 20.10.23 23:56, Andrey Drobyshev wrote:
Move the definition from iotests/250 to common.rc. This is used to
detect real disk usage of sparse files. In particular, we want to use
it for checking subclusters-based discards.
Signed-off-by: Andrey Drobyshev
---
tests/qemu-iotests/250 |
On 20.10.23 23:56, Andrey Drobyshev wrote:
When zeroizing subclusters within a single cluster, detect usage of the
BDRV_REQ_MAY_UNMAP flag and fall through to the subcluster-based discard
operation, much like it's done with the cluster-based discards. That
way subcluster-aligned operations "qemu-i
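(Sketched out, that fall-through might look like this; discard_l2_subclusters() is the function introduced in this series, its exact signature is an assumption:)

    if (flags & BDRV_REQ_MAY_UNMAP) {
        /* Subcluster-aligned zeroing with MAY_UNMAP can be served by the
         * subcluster-based discard path, actually punching holes: */
        return discard_l2_subclusters(bs, offset, nb_subclusters,
                                      QCOW2_DISCARD_REQUEST, false);
    }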
On 16.10.23 15:42, Hanna Czenczek wrote:
Based-on: <20231004014532.1228637-1-stefa...@redhat.com>
([PATCH v2 0/3] vhost: clean up device reset)
Based-on: <20231016083201.23736-1-hre...@redhat.com>
([PATCH] vhost-user: Fix protocol feature bit conflict)
On 01.11.23 20:53, Vladimir Sementsov-Ogievskiy wrote:
On 31.10.23 17:05, Hanna Czenczek wrote:
On 04.10.23 15:56, Vladimir Sementsov-Ogievskiy wrote:
From: Vladimir Sementsov-Ogievskiy
Actually block job is not completed without the final flush. It's
rather unexpected to have broken t
(Sorry, opened another reply window, forgot I already had one open...)
On 20.10.23 23:56, Andrey Drobyshev wrote:
This commit makes the discard operation work on the subcluster level
rather than cluster level. It introduces discard_l2_subclusters()
function and makes use of it in qcow2 discard
On 20.10.23 23:56, Andrey Drobyshev wrote:
This commit makes the discard operation work on the subcluster level
rather than cluster level. It introduces discard_l2_subclusters()
function and makes use of it in qcow2 discard implementation, much like
it's done with zero_in_l2_slice() / zero_l2_su
ndrey Drobyshev
---
block/qcow2-cluster.c | 18 +++---
1 file changed, 15 insertions(+), 3 deletions(-)
Reviewed-by: Hanna Czenczek
On 20.10.23 23:56, Andrey Drobyshev wrote:
This helper simply obtains the l2 table parameters of the cluster which
contains the given subclusters range. Right now this info is being
obtained and used by zero_l2_subclusters(). As we're about to introduce
the subclusters discard operation, this h
ed, 6 insertions(+), 4 deletions(-)
Reviewed-by: Hanna Czenczek
On 01.10.23 22:46, Denis V. Lunev wrote:
Can you please not top-post? It makes the discussion hard to follow.
Bottom-posting is the convention on this mailing list and on other
similar lists like LKML.
On 10/1/23 19:08, Mike Maslenkin wrote:
I thought about "conv=notrunc", but my main concern is changed vi
On 04.10.23 15:56, Vladimir Sementsov-Ogievskiy wrote:
From: Vladimir Sementsov-Ogievskiy
Actually block job is not completed without the final flush. It's
rather unexpected to have broken target when job was successfully
completed long ago and now we fail to flush or process just
crashed/kille
--
1 file changed, 4 insertions(+), 2 deletions(-)
Reviewed-by: Hanna Czenczek
d ask.
In any case, this change here is necessary, so:
Reviewed-by: Hanna Czenczek
On 25.08.23 06:05, Sam Li wrote:
When a zoned request fails, it needs to update only the wp of
the target zones, so as not to disrupt the in-flight writes on
the other zones. The wp is updated successfully after the
request completes.
Fix the callers to use the right offset and nr_zones.
Signed-off
On 03.10.23 14:52, Jean-Louis Dupond wrote:
When the discard-no-unref flag is enabled, we keep the reference for
normal discard requests.
But when a discard is executed on a snapshot/qcow2 image with backing,
the discards are saved as zero clusters in the snapshot image.
When committing the snap
On 03.10.23 14:52, Jean-Louis Dupond wrote:
When the discard-no-unref flag is enabled, we keep the reference for
normal discard requests.
But when a discard is executed on a snapshot/qcow2 image with backing,
the discards are saved as zero clusters in the snapshot image.
When committing the snap
: Hanna Czenczek
On 18.10.23 14:14, Michael S. Tsirkin wrote:
On Wed, Oct 04, 2023 at 02:58:59PM +0200, Hanna Czenczek wrote:
Currently, the vhost-user documentation says that rings are to be
initialized in a disabled state when VHOST_USER_F_PROTOCOL_FEATURES is
negotiated. However, by the time of feature
On 17.10.23 09:53, Viresh Kumar wrote:
On 17-10-23, 09:51, Hanna Czenczek wrote:
Not that I’m really opposed to that, but I don’t see the problem with just
doing that in the same work that makes qemu actually use this flag, exactly
because it’s just a -1/+1 change.
I can send a v2, but should
On 17.10.23 09:49, Viresh Kumar wrote:
On 13-10-23, 20:02, Hanna Czenczek wrote:
On 10.10.23 16:35, Alex Bennée wrote:
I was going to say there is also the rust-vmm vhost-user-master crates
which we've imported:
https://github.com/vireshk/vhost
for the Xen Vhost Frontend:
On 17.10.23 07:36, Viresh Kumar wrote:
On 16-10-23, 12:40, Alex Bennée wrote:
Viresh Kumar writes:
On 16-10-23, 11:45, Manos Pitsidianakis wrote:
On Mon, 16 Oct 2023 11:32, Hanna Czenczek wrote:
diff --git a/include/hw/virtio/vhost-user.h
b/include/hw/virtio/vhost-user.h
index 9f9ddf878d
Add the interface for transferring the back-end's state during migration
as defined previously in vhost-user.rst.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Hanna Czenczek
---
include/hw/virtio/vhost-backend.h | 24 +
include/hw/virtio/vhost-user.h| 1 +
include/hw/virtio/vh
or
success via CHECK_DEVICE_STATE, which on the destination side includes
checking for integrity (i.e. errors during deserialization).
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Hanna Czenczek
---
docs/interop/vhost-user.rst | 172
1 file changed, 172
making it explicit that the
enabled/disabled state is tracked even while the vring is stopped.
Every vring is initialized in a disabled state, and SET_FEATURES without
VHOST_USER_F_PROTOCOL_FEATURES simply becomes one way to enable all
vrings.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Hanna Cze
completely stopped,
i.e. all vrings are stopped, the back-end should cease to modify any
state relating to the guest. Do this by calling it "suspended".
Suggested-by: Stefan Hajnoczi
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Hanna Czenczek
---
docs/interop/vhost-use