> http://forum.proxmox.com/threads/18011-Proxmox-VE-3-2-released!?p=92037#post92037
>
> Any idea for a fix?
Do you have any information about the error? What does not work? Backup log?
How to reproduce?
[pve-devel] KVM Live Backup performance
Eric Blevins eric at netwalk.com
Thu Dec 5 18:57:19 CET 2013
I just uploaded the qemu 1.7 package with new backup patches:
You should be able to install with:
# wget ftp://download.proxmox.com/tmp/pve-libspice-server1_0.12.4-3_amd64.deb
# wget ftp://download.proxmox.com/tmp/pve-qemu-kvm_1.7-2_amd64.deb
# dpkg -i pve-libspice-server1_0.12.4-3_amd64.deb pve-qemu-kvm_1.7-2_amd64.deb
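To confirm which version actually got installed afterwards (a standard dpkg query, nothing PVE-specific):
# dpkg -s pve-qemu-kvm | grep Version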
- Original Message -
From: "Cesar Peschiera"
To: pve-devel@pve.proxmox.com
Sent: Wednesday, January 29, 2014 05:41:33
Subject: Re: [pve-devel] KVM Live Backup performance
Thanks Alexandre for your answers (you are the Master of Masters), but the
questions are ba
Backup" is running?
Best regards
Cesar
- Original Message -
From: "Alexandre DERUMIER"
To: "Cesar Peschiera"
Cc:
Sent: Wednesday, January 29, 2014 12:38 AM
Subject: Re: [pve-devel] KVM Live Backup performance
1- But as the buffer needs free RAM, what are th
- Original Message -
From: "Cesar Peschiera"
To: pve-devel@pve.proxmox.com
Sent: Tuesday, January 28, 2014 19:14:22
Subject: Re: [pve-devel] KVM Live Backup performance
Thanks for your nice answer, Eric.
@Dietmar or anyone that can answer, please let me ask a few questions:
Note:
These questio
Best regards
Cesar
- Original Message -
From: Eric Blevins
To: pve-devel@pve.proxmox.com
Sent: Tuesday, January 28, 2014 11:12 AM
Subject: Re: [pve-devel] KVM Live Backup performance
Anyways, I will try to upgrade KVM to 1.7 first (many backup related
changes).
We can then test again and try to optimize further.
question 1 is negative, i.e. "KVM Live Backup" doesn't sync the
writes to both disks, how does "KVM Live Backup" work in this case?
Best regards
Cesar
- Original Message -
From: "Dietmar Maurer"
To: "Alessandro Briosi" ; "Cesar Peschiera"
;
Sen
Anyways, I will try to upgrade KVM to 1.7 first (many backup related
changes).
We can then test again and try to optimize further.
Cesar, from my testing KVM 1.7 fixed the backup related performance issues.
See archive:
http://pve.proxmox.com/pipermail/pve-devel/2013-December/009296.html
> He is complaining that the new code enables write cache during backup.
> If there's a VM which is running a database in an HA scenario, and for some
> reason the VM/host crashes during the backup, the database would be
> inconsistent when started on another host, because of the write cache.
AFAIK
From: "Alessandro Briosi"
To: "Dietmar Maurer" ; "Cesar Peschiera"
;
Sent: Tuesday, January 28, 2014 4:28 AM
Subject: Re: [pve-devel] KVM Live Backup performance
On 28/01/2014 06:52, Dietmar Maurer wrote:
If it is possible without losing performance in this VM, the write cache
for "KV
eschiera" ;
Sent: Tuesday, January 28, 2014 2:52 AM
Subject: RE: [pve-devel] KVM Live Backup performance
> If it is possible without losing performance in this VM, the write cache for "KVM
> Live Backup" must not be used. In this mode the "KVM Live Backup" will be
> fantastic.
Sorry, but I do not really understand that question?
We have done many improvements on the backup code, so you should first
Hi Developers
I just want to point out a detail that will be a big problem with this strategy of
changing the code of "KVM Live Backup", and please consider that most people
schedule their backups at night, when most people are sleeping:
If I have HA for my VM that has a database, and the "KVM Live
> Let me know if you need something tested.
To clarify, are you suggesting to make the existing hard coded
buffer larger/configurable?
If so, I like this idea. It seems like t
> > That is how it works already.
> Is the size of the buffer configurable?
> I would like to use 4-8G of RAM
No, it is hard coded and quite small.
But that mbuffer looks promising - maybe we can use much larger
buffers (same size as LVM snapshot size), maybe mmap'ed?
Would be great if you can r
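For a quick experiment outside of qemu, a much larger user-space buffer can be spliced into the LVM-based pipeline with mbuffer; this is only a sketch reusing the vmtar/lzop commands quoted elsewhere in this thread, and the 4G size is illustrative, not a tested recommendation:
# lvcreate -L33000M -s -n test-snapshot /dev/vmdisks/vm-108-disk-2
# /usr/lib/qemu-server/vmtar '/etc/pve/qemu-server/108.conf' 'qemu-server.conf' \
    '/dev/vmdisks/test-snapshot' 'vm-disk' | mbuffer -m 4G | lzop -o /backup1/dump/backup.tar.lzop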
That is how it works already.
Is the size of the buffer configurable?
I would like to use 4-8G of RAM
Anyways, I will try to upgrade KVM to 1.7 first (many backup related changes).
We can then test again and try to optimize further.
Sounds like a plan
> I have a suggestion that would help alleviate the read and write downsides to
> this.
>
> Create a memory buffer where the reads/writes from the VM are placed.
> When buffer is over a certain percentage, stop the backup read operations and
> flush the buffer.
> The VM can perform IO up to the li
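For what it's worth, mbuffer can approximate that stop-and-flush behavior with its watermark options (a sketch; the percentages are arbitrary): -P delays writing until the buffer is filled past the given percentage, and -p resumes reading once the fill level drops below the other:
# ... | mbuffer -m 4G -P 90 -p 10 | lzop -o /backup1/dump/backup.tar.lzop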
There is also a small possibility that we have a bug ;-) I will debug that
when I update that code for 1.7.
Looking at the code, it seems that we also backup read blocks immediately. That
way we
can avoid re-reads.
I am not sure if that is good or bad.
This would explain the degraded read p
MIER" , "Eric Blevins"
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 26 Novembre 2013 08:25:18
Objet: RE: [pve-devel] KVM Live Backup performance
> Hi, this is because with new backup,
>
> each new write in the vm during the backup, is copied to backup storage and
> >> The tests from Eric only do reads (there is no single write involved).
> Oh, I missed that.
>
> I think it should be a qemu problem, as the only difference is that
> with lvm snapshot backup, backup reads are done directly from disk,
> and with qemu backup, reads are done through qemu.
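Put differently, in the LVM case the backup reader is an ordinary host process, e.g. (device name taken from the test commands in this thread):
# dd if=/dev/vmdisks/test-snapshot of=/dev/null bs=1M
while with live backup the same blocks pass through the one qemu process that also serves the guest's IO.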
MIER" , "Eric Blevins"
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 26 Novembre 2013 08:25:18
Objet: RE: [pve-devel] KVM Live Backup performance
> Hi, this is because with new backup,
>
> each new write in the vm during the backup, is copied to backup storage and
MIER" , "Eric Blevins"
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 26 Novembre 2013 08:25:18
Objet: RE: [pve-devel] KVM Live Backup performance
> Hi, this is because with new backup,
>
> each new write in the vm during the backup, is copied to backup storage and
> Hi, this is because with new backup,
>
> each new write in the vm during the backup, is copied to backup storage and to
> the vm.
The tests from Eric only do reads (there is no single write involved).
- Original Message -
From: "Eric Blevins"
To: pve-devel@pve.proxmox.com
Sent: Monday, November 25, 2013 17:31:20
Subject: Re: [pve-devel] KVM Live Backup performance
> I am unable to reproduce that - for me LVM and Live backup are about the same
> speed.
>
> Can you see
I am unable to reproduce that - for me LVM and Live backup are about the same
speed.
Can you see the effect if you dump backup output directly to /dev/null?
# /usr/lib/qemu-server/vmtar '/etc/pve/qemu-server/108.conf' 'qemu-server.conf' \
    '/dev/vmdisks/test-snapshot' 'vm-disk' > /dev/null
# vzdump
> I have identified one use-case where KVM Live Backup causes a significant
> decrease in IO read performance.
>
> Start a KVM Live Backup
> Inside the VM immediately run:
> dd if=/dev/disk_being_backed_up of=/dev/null bs=1M count=8192
>
> Repeated same test but used LVM snapshot and vmtar:
> lvcreate -L33000M -s -n test-snapshot /dev/vmdisks/vm-108-disk-2
> I have not tested writes yet and doubt I will have time to get to that this
> week.
To show drawbacks of LVM snapshots, you can use something like:
# dd if=/dev/urandom of=tmp.raw bs=1M
inside the VM during backup.
LVM snapshot will most likely run full, and is very slow:
# time vmtar
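You can watch the snapshot fill up while that dd runs; lvs reports snapshot usage in its Data% column (volume path from the examples above), and once it reaches 100% the snapshot is invalidated and the backup fails:
# watch -n 5 'lvs /dev/vmdisks/test-snapshot'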
> Guest was debian wheezy, the OS disk was not used for testing and marked as
> no backup.
> The 2nd disk used for testing backups was 32GB, virtio cache=none. I filled that
> disk with data from /dev/urandom before performing any backup tests.
You said the LVM backup took only 58 secs?
32000MB/58sec is about 550MB/s
> you need to mount the snapshot, then backup the VM image instead.
Oh, ignore me, your test is also valid.
> Repeated same test but used LVM snapshot and vmtar:
> lvcreate -L33000M -s -n test-snapshot /dev/vmdisks/vm-108-disk-2
> /usr/lib/qemu-server/vmtar '/etc/pve/qemu-server/108.conf'
> 'qemu-server.conf' '/dev/vmdisks/test-snapshot' 'vm-disk'|lzop -o
> /backup1/dump/backup.tar.lzop
you need to mount the snapshot, then backup the VM image instead.
> On 23.11.2013, at 14:28, Michael Rasmussen wrote:
>
> On Sat, 23 Nov 2013 07:16:28 +
> Dietmar Maurer wrote:
>
>>> I agree, limiting IO from the VM during backup can have advantages.
>>> On the flip side losing 50% of the IO
>>
>> This 50% loss has nothing to do with the new backup al
> I agree, limiting IO from the VM during backup can have advantages.
> On the flip side losing 50% of the IO
This 50% loss has nothing to do with the new backup algorithm, because
your test does not involve any writes. So it is more likely a bug in the
AIO code. I will dig deeper next week.
I
Besides, live backup uses the same IO thread as KVM, so it looks like using
one thread (with aio) performs worse than using two threads.
But this can also be an advantage if you run more than one VM. Or you can backup
multiple VMs at the same time.
I agree, limiting IO from the VM during backup can have
Sure, I will investigate further. How large is the VM disk? What
backup speed do you get in MB/s?
Guest was debian wheezy, the OS disk was not used for testing and marked
as no backup.
The 2nd disk used for testing backups was 32GB, virtio cache=none
I filled that disk with data from /dev/urandom
> Live backup had such a significant impact on sequential read inside the VM it
> seemed appropriate to post those results so others can also investigate this.
We also need to define what data the image contains - large zero regions?
Maybe it is better to fill everything with real data - somethin
No, it took dd 120 seconds to read 8GB of data when using live backup and only
took 55 seconds when using LVM snapshot backup.
OK.
But your test does not issue a single write?
Right, I mentioned that I had not tested writes yet.
Live backup had such a significant impact on sequential read inside the VM it
seemed appropriate to post those results so others can also investigate this.
> No, it took dd 120 seconds to read 8GB of data when using live backup and only
> took 55 seconds when using LVM snapshot backup.
OK.
But your test does not issue a single write?
KVM Live Backup: 120 seconds or more
LVM Snapshot backup: 55 seconds
With no backup: 45 seconds
Why does that show a "decrease in IO read performance"?
I guess the dd inside the VM is much faster with live backup?
No, it took dd 120 seconds to read 8GB of data when using live backup
and only took 55 seconds when using LVM snapshot backup.
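For scale, assuming the 8192MB read from the dd test: 120s is roughly 68MB/s, 55s roughly 149MB/s, and 45s roughly 182MB/s.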
> In a thread on the proxmox forum discussing performance of ceph
> (http://forum.proxmox.com/threads/16715-ceph-perfomance-and-latency)
> Dietmar replies: "A VM is only a single IO thread"
> Could this influence the performance when doing the new KVM live backup since
> this backup occurs inside
I have identified one use-case where KVM Live Backup causes a
significant decrease in IO read performance.
Start a KVM Live Backup
Inside the VM immediately run:
dd if=/dev/disk_being_backed_up of=/dev/null bs=1M count=8192
Repeated same test but used LVM snapshot and vmtar:
lvcreate -L33000M -s -n test-snapshot /dev/vmdisks/vm-108-disk-2
/usr/lib/qemu-server/vmtar '/etc/pve/qemu-server/108.conf'
'qemu-server.conf' '/dev/vmdisks/test-snapshot' 'vm-disk'|lzop -o
/backup1/dump/backup.tar.lzop
> >> Since all of the LVM Snapshot code was removed I am unable to perform
> >> the above benchmarks, anyone have a suggestion how we could perform
> >> such tests easily?
> > Simply make an LVM snapshot manually - that is quite easy.
> Sure I can make an LVM Snapshot manually (suspend -> snapshot -
On the forum there are a number of people who are complaining about high
load averages on the host and/or in the VM being backed up when using
the new KVM Live Backup feature.
My suspicion is that, with the KVM process moving the backup data around,
the performance of the VM is negatively affected
You said that you have some VMs which behave badly with new backup? May I ask
what you run inside those VMs?
Windows 2008 R2 servers running MSSQL
Windows 2003 servers running MSSQL and a Java based application
I doubt this is a Windows problem, losing performance of SQL is more
noticeable tha
> I would like to perform some benchmarks where CPU/IO/RAM intensive tasks
> are run inside the VM while performing an LVM Snapshot backup and then a KVM
> Live Backup. Comparing the completion times of the CPU/IO/RAM tasks would
> allow us to assess what subsystems are affected, good or bad, by KVM Live Backup.
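A minimal way to structure that from inside the VM (a sketch; the workloads, sizes, and the /dev/vdb test-disk name are placeholders, each run timed once per backup method):
Sequential read:
# time dd if=/dev/vdb of=/dev/null bs=1M count=8192
Sequential write:
# time dd if=/dev/zero of=tmp.raw bs=1M count=4096 conv=fdatasync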