If the backup target is a slow device like Ceph RBD, the backup process seriously affects guest block write IO performance. This is caused by a drawback of the COW mechanism: when the guest overwrites an area that has not yet been backed up, the write IO can only be completed after the old data has been written to the backup target. The impact can be relieved by buffering the data read from the backup source and writing it to the backup target later, so the guest block write IO can be completed in time. Areas that are not overwritten are processed as before, without buffering, so in most cases a very large buffer is not needed.
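The idea is sketched below in a simplified, self-contained form; it is not the
actual block/backup.c code, and all names (cow_request, buffer_cow_request,
flush_cow_buffer, the cluster size and buffer cap) are hypothetical and only
illustrate how buffering the old data lets the guest write complete without
waiting for the slow target:

    /* Conceptual sketch only, not the real QEMU implementation. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define CLUSTER_SIZE (64 * 1024)
    #define MAX_BUFFERED 16            /* keep the buffer small */

    typedef struct cow_request {
        uint64_t offset;               /* cluster offset in the source */
        uint8_t  data[CLUSTER_SIZE];   /* old data read from the source */
    } cow_request;

    static cow_request cow_buffer[MAX_BUFFERED];
    static int cow_buffered;

    /* Called before the guest overwrites a not-yet-backed-up cluster:
     * stash the old data in memory and let the guest write proceed. */
    static int buffer_cow_request(uint64_t offset, const uint8_t *old_data)
    {
        if (cow_buffered == MAX_BUFFERED) {
            return -1;                 /* buffer full: fall back to sync COW */
        }
        cow_buffer[cow_buffered].offset = offset;
        memcpy(cow_buffer[cow_buffered].data, old_data, CLUSTER_SIZE);
        cow_buffered++;
        return 0;                      /* guest write can complete now */
    }

    /* Called later, outside the guest IO path, to drain the buffer
     * to the slow backup target. */
    static void flush_cow_buffer(FILE *backup_target)
    {
        for (int i = 0; i < cow_buffered; i++) {
            fseek(backup_target, (long)cow_buffer[i].offset, SEEK_SET);
            fwrite(cow_buffer[i].data, 1, CLUSTER_SIZE, backup_target);
        }
        cow_buffered = 0;
    }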
An fio test was run while the backup was in progress; the results show an obvious performance improvement from buffering.

Test result (1GB buffer):
========================
fio setting:
[random-writers]
ioengine=libaio
iodepth=8
rw=randwrite
bs=32k
direct=1
size=1G
numjobs=1

result:
                    IOPS    AVG latency
 no backup:         19389    410 us
 backup:             1402   5702 us
 backup w/ buffer:   8684    918 us
==============================================

Cc: John Snow <js...@redhat.com>
Cc: Kevin Wolf <kw...@redhat.com>
Cc: Max Reitz <mre...@redhat.com>
Cc: Wen Congyang <wencongya...@huawei.com>
Cc: Xie Changlong <xiechanglon...@gmail.com>
Cc: Markus Armbruster <arm...@redhat.com>
Cc: Eric Blake <ebl...@redhat.com>
Cc: Fam Zheng <f...@euphon.net>

Liang Li (2):
  backup: buffer COW request and delay the write operation
  qapi: add interface for setting backup cow buffer size

 block/backup.c            | 118 +++++++++++++++++++++++++++++++++++++++++-----
 block/replication.c       |   2 +-
 blockdev.c                |   5 ++
 include/block/block_int.h |   2 +
 qapi/block-core.json      |   5 ++
 5 files changed, 118 insertions(+), 14 deletions(-)

-- 
2.14.1