On 26.10.20 18:18, Vladimir Sementsov-Ogievskiy wrote:
> A further commit will add a benchmark
> (scripts/simplebench/bench-backup.py), which will show that backup
> works better with async parallel requests (previous commit) and with
> copy_range disabled. So let's disable copy_range by default.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
> ---
>  qapi/block-core.json | 2 +-
>  blockdev.c           | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
Uh, well, er, then why did you add it as true in patch 2?
Do you mean this patch as the basis for a discussion of whether it
should be true or false by default? I don’t have anything to contribute,
though, ergo I don’t mind either way.
Do you have an idea why copy offloading isn’t better?
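
For reference: even with the default flipped to false, a management
application can still opt back in to copy offloading per job. A rough
sketch of such a QMP command, assuming the x-perf backup option
introduced earlier in this series (the member name and the drive/job
IDs below are placeholders, not taken from this patch):

{ "execute": "blockdev-backup",
  "arguments": { "job-id": "backup0",
                 "device": "drive0",
                 "target": "target0",
                 "sync": "full",
                 "x-perf": { "use-copy-range": true } } }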
Max
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 5a21c24c1d..58eb2bcb86 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1376,7 +1376,7 @@
>  # Optional parameters for backup. These parameters don't affect
>  # functionality, but may significantly affect performance.
>  #
> -# @use-copy-range: Use copy offloading. Default true.
> +# @use-copy-range: Use copy offloading. Default false.
>  #
>  # @max-workers: Maximum number of parallel requests for the sustained background
>  #               copying process. Doesn't influence copy-before-write operations.
> diff --git a/blockdev.c b/blockdev.c
> index 0ed390abe0..1ac64d8ee2 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -2788,7 +2788,7 @@ static BlockJob *do_backup_common(BackupCommon *backup,
>  {
>      BlockJob *job = NULL;
>      BdrvDirtyBitmap *bmap = NULL;
> -    BackupPerf perf = { .use_copy_range = true, .max_workers = 64 };
> +    BackupPerf perf = { .max_workers = 64 };
>      int job_flags = JOB_DEFAULT;
>
>      if (!backup->has_speed) {
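
To spell out what the new initializer means in practice, here is a
minimal, self-contained sketch (not the actual QEMU code; the
BackupOptions struct and the override logic are assumptions for
illustration): the compile-time default now leaves use_copy_range
false, and an explicit per-job setting still overrides it.

/*
 * Sketch only, not the actual QEMU code.  BackupOptions is a
 * hypothetical stand-in for the per-job QMP arguments; the override
 * logic is an assumption made for illustration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct BackupPerf {
    bool use_copy_range;
    int64_t max_workers;
} BackupPerf;

typedef struct BackupOptions {          /* hypothetical QMP-argument stand-in */
    bool has_use_copy_range;
    bool use_copy_range;
    bool has_max_workers;
    int64_t max_workers;
} BackupOptions;

static BackupPerf backup_perf_from_options(const BackupOptions *opts)
{
    /* Same defaults as the initializer in the hunk above:
     * copy offloading off, 64 parallel workers. */
    BackupPerf perf = { .max_workers = 64 };

    if (opts->has_use_copy_range) {
        perf.use_copy_range = opts->use_copy_range;  /* explicit opt-in/out */
    }
    if (opts->has_max_workers) {
        perf.max_workers = opts->max_workers;
    }
    return perf;
}

int main(void)
{
    BackupOptions defaults = { 0 };
    BackupOptions offload  = { .has_use_copy_range = true,
                               .use_copy_range = true };

    printf("default:  use_copy_range=%d\n",
           backup_perf_from_options(&defaults).use_copy_range);
    printf("explicit: use_copy_range=%d\n",
           backup_perf_from_options(&offload).use_copy_range);
    return 0;
}

Compiled standalone, the first call prints use_copy_range=0 and the
second prints 1, mirroring how this patch only changes the behaviour of
jobs that don't set the option explicitly.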