> I agree, limiting IO from the VM during backup can have advantages.
> On the flip side losing 50% of the IO [...]
This 50% loss has nothing to do with the new backup algorithm, because
your test does not involve any writes. So it is more likely a bug in the
AIO code. I will dig deeper next week.
Besides, live backup uses the same IO thread as KVM, so it looks like using
one thread (with AIO) performs worse than using two threads.
But this can also be an advantage if you run more than one VM, or if you back up
multiple VMs at the same time.
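Not from the original mails, but one rough way to see where the backup read work lands is to watch per-thread CPU usage of the VM process while each backup method runs. This assumes a single running VM whose QEMU process is named kvm (as on a typical Proxmox VE host) and that sysstat is installed for pidstat:

# Per-thread CPU usage of the VM's kvm process, refreshed every second.
# During a live backup the extra read work should show up inside this one
# process; during an LVM snapshot + vmtar backup it shows up in the
# separate vmtar process instead.
pidstat -t -p "$(pidof kvm)" 1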
Sure, I will investigate further. How large is the VM disk? What
backup speed do you get, in MB/s?
Guest was Debian Wheezy; the OS disk was not used for testing and was marked
as no backup.
The 2nd disk used for testing backups was 32GB, virtio, cache=none.
I filled that disk with data from /dev/urandom.
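The exact fill command was not posted; something along these lines would do it, where /dev/vdb is only a placeholder for the 32GB test disk inside the guest:

# Fill the whole test disk with non-compressible random data (slow, but it
# guarantees there are no zero regions to skew the read test).
dd if=/dev/urandom of=/dev/vdb bs=1M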
> Live backup had such a significant impact on sequential read inside the VM it
> seemed appropriate to post those results so others can also investigate this.
We also need to define what data the image contains - large zero regions?
Maybe it is better to fill everything with real data.
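As an aside (not from the thread): a quick way to check how much of the test disk is actually zeroes before running the comparison. The device path is only a placeholder for the LV backing the test disk:

# Count the non-zero bytes in the first 8GB of the disk. A result far below
# 8192*1024*1024 means large zero regions that would skew the comparison.
dd if=/dev/pve/vm-100-disk-2 bs=1M count=8192 | tr -d '\0' | wc -c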
> > No, it took dd 120 seconds to read 8GB of data when using live backup
> > and only took 55 seconds when using LVM snapshot backup.
> OK.
> But your test does not issue a single write?
Right, I mentioned that I had not tested writes yet.
Live backup had such a significant impact on sequential read inside the VM it
seemed appropriate to post those results so others can also investigate this.
> No, it took dd 120 seconds to read 8GB of data when using live backup and only
> took 55 seconds when using LVM snapshot backup.
OK.
But your test does not issue a single write?
> KVM Live Backup: 120 seconds or more
> LVM Snapshot backup: 55 seconds
> With no backup: 45 seconds
Why does that show a "decrease in IO read performance"?
I guess the dd inside the VM is much faster with live backup?
No, it took dd 120 seconds to read 8GB of data when using live backup
and only took 55 seconds when using LVM snapshot backup.
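Just to turn those times into throughput, assuming the 8GB is the 8192 x 1MiB blocks from the dd command (these are not new measurements):

echo "8192/120" | bc   # ~68 MiB/s while a live backup is running
echo "8192/55" | bc    # ~148 MiB/s while an LVM snapshot backup is running
echo "8192/45" | bc    # ~182 MiB/s with no backup running

That drop from roughly 148 to 68 MiB/s is where the "losing 50% of the IO" figure earlier in the thread comes from.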
> In a thread on the proxmox forum discussing performance of ceph
> (http://forum.proxmox.com/threads/16715-ceph-perfomance-and-latency)
> Dietmar replies: "A VM is only a single IO thread"
> Could this influence the performance when doing the new KVM live backup since
> this backup occurs inside the KVM process?
On Fri, 22 Nov 2013 11:41:43 -0500
Eric Blevins wrote:
I have identified one use-case where KVM Live Backup causes a
significant decrease in IO read performance.
Start a KVM Live Backup
Inside the VM immediately run:
dd if=/dev/disk_being_backed_up of=/dev/null bs=1M count=8192
Repeated same test but used LVM snapshot and vmtar:
lvcreate -L33000M -
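The lvcreate line above is truncated in the archive. For anyone trying to reproduce the comparison, a snapshot-based run would look roughly like the sketch below; the volume group and LV names are placeholders, and the copy step uses plain dd because the exact vmtar invocation was not shown:

# Take a 33000MB snapshot of the LV backing the test disk (names are placeholders).
lvcreate -L33000M -s -n backup-snap /dev/pve/vm-100-disk-2
# Read the frozen snapshot instead of the live disk and copy it to the backup target.
dd if=/dev/pve/backup-snap of=/mnt/backup/vm-100-disk-2.raw bs=1M
# Drop the snapshot once the backup has finished.
lvremove -f /dev/pve/backup-snap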