20.01.2021 13:20, Max Reitz wrote:
Right now, this does not change anything, because backup ignores
max-chunk and max-workers. However, as soon as backup is switched over
to block-copy for the background copying process, we will need it to
keep 129 passing.
Signed-off-by: Max Reitz <mre...@redhat.com>
---
Hi Vladimir, would you be OK with this?
Yes, thanks!
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
Hmm, interesting, what's going on with the defaults:
we issue 64 requests of 1 MiB each, and 63 of them are throttled. On stop we do
a drain; the throttling filter is probably deactivated somehow (otherwise the
drain would hang), so those 63 requests should finish and the backup should
pause itself.
Then after the drain, throttling should work again? Probably the job is resumed
earlier than throttling is reactivated, so 64 new requests are issued before
throttling kicks in.. Or maybe something is still wrong. I don't want to
care too much :)
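To illustrate why the limits below matter, here is a hypothetical
back-of-the-envelope sketch (plain Python, not QEMU code): an upper bound on
how much data block-copy can have in flight at once is roughly
max-workers times max-chunk, assuming each worker copies one chunk at a time.
With the defaults discussed above (64 workers, 1 MiB chunks) a single
bdrv_drain_all() at stop time can flush enough data to complete the whole
job; with the patched limits (8 workers, 64 KiB chunks) it cannot.

```python
# Illustrative arithmetic only -- max_in_flight() is a hypothetical helper,
# not a QEMU API.
KiB = 1024
MiB = 1024 * KiB

def max_in_flight(max_workers: int, max_chunk: int) -> int:
    """Rough upper bound on bytes in flight: one chunk per worker."""
    return max_workers * max_chunk

# Defaults discussed above: 64 workers x 1 MiB chunks = 64 MiB in flight.
default = max_in_flight(64, 1 * MiB)

# With the patch: 8 workers x 64 KiB chunks = 512 KiB in flight.
patched = max_in_flight(8, 64 * KiB)

print(default // MiB, "MiB vs", patched // KiB, "KiB")  # 64 MiB vs 512 KiB
```

So the patched limits shrink the in-flight window by a factor of 128, which
is why the job survives the drain instead of finishing during it.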
---
tests/qemu-iotests/129 | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index 9a56217bf8..2ac7e7a24d 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -70,9 +70,14 @@ class TestStopWithBlockJob(iotests.QMPTestCase):
sync="full", buf_size=65536)
def test_drive_backup(self):
+        # Limit max-chunk and max-workers so that block-copy will not
+        # launch so many workers, each copying so large a chunk, that
+        # stop's bdrv_drain_all() would finish the job
self.do_test_stop("drive-backup", device="drive0",
target=self.target_img, format=iotests.imgfmt,
- sync="full")
+ sync="full",
+ x_perf={ 'max-chunk': 65536,
+ 'max-workers': 8 })
def test_block_commit(self):
# Add overlay above the source node so that we actually use a
--
Best regards,
Vladimir