without doing any flush.
I remember that in the past, with cache=unsafe in qemu and a SCSI drive, I was able
to write gigabytes of data into host memory
without any flush occurring.
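For illustration, a minimal qemu invocation with cache=unsafe on a SCSI disk (file path and device IDs are placeholders, not taken from this thread); with cache=unsafe, guest flush requests are ignored, so writes can accumulate in the host page cache:

qemu-system-x86_64 \
  -drive file=/var/lib/images/test.raw,format=raw,if=none,id=drive0,cache=unsafe \
  -device virtio-scsi-pci,id=scsi0 \
  -device scsi-hd,bus=scsi0.0,drive=drive0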
- Original message -
From: "Eneko Lacunza"
To: "pve-devel"
Sent: Tuesday, 26 July 2016 13:1
Hi,
On 26/07/16 at 10:32, Alexandre DERUMIER wrote:
There is no reason to flush a restored disk until just the end, really.
Issuing flushes every x MB could hurt other storages without need.
I'm curious to see host memory usage of a big local file storage restore
(100GB), with writeback
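A minimal sketch of that idea (invented function and parameter names, not the actual pve-qemu restore code): write every restored cluster without intermediate flushes and issue a single flush only after the last one.

#include <sys/types.h>
#include <unistd.h>

/* Sketch: restore n_clusters buffers sequentially, flushing only once at the end. */
static int restore_image(int out_fd, const void *const *clusters,
                         const size_t *sizes, size_t n_clusters)
{
    off_t off = 0;
    for (size_t i = 0; i < n_clusters; i++) {
        if (pwrite(out_fd, clusters[i], sizes[i], off) != (ssize_t)sizes[i])
            return -1;          /* write error */
        off += sizes[i];        /* no fsync here: data may stay in the host page cache */
    }
    return fsync(out_fd);       /* one flush at the very end of the restore */
}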
- Original message -
From: "Eneko Lacunza"
To: "pve-devel"
Sent: Tuesday, 26 July 2016 10:13:59
Subject: Re: [pve-devel] Speed up PVE Backup
Hi,
On 26/07/16 at 10:04, Alexandre DERUMIER wrote:
I think qmrestore isn't issuing any flush request (until maybe the end),
Need to be checked! (but I think we open the restore block storage with
writeback, so I hope we send a flush)
So for the ceph storage backend we should set
rbd_cache_wr
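The option name above is cut off in the archive; it presumably refers to the RBD client cache settings. A hedged ceph.conf sketch using the standard option names (whether these exact values are what was meant here is an assumption):

[client]
rbd cache = true
# Assumption: the truncated option is likely "rbd cache writethrough until flush".
# Setting it to false lets the client cache work in writeback mode even if the
# restore process never sends a flush.
rbd cache writethrough until flush = false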
"Eneko Lacunza"
À: "dietmar" , "pve-devel"
Envoyé: Jeudi 21 Juillet 2016 13:19:10
Objet: Re: [pve-devel] Speed up PVE Backup
Hi,
On 21/07/16 at 09:34, Dietmar Maurer wrote:
>
>>> But you can try to assemble larger blocks, and write them once y
But I'm not sure (I don't remember exactly, it needs to be verified) that it works fine
with the current backup restore or offline disk cloning.
(maybe there is an fsync for each 64k block)
- Original message -
From: "dietmar"
To: "pve-devel" , "Eneko Lacunza"
Sent:
Hi,
On 21/07/16 at 09:34, Dietmar Maurer wrote:
But you can try to assemble larger blocks, and write them once you get
an out of order block...
Yes, this is the plan.
I always thought the ceph libraries do (or should do) that anyway?
(write combining)
Reading the docs:
http://docs.
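A minimal sketch of the "assemble larger blocks" idea (invented names, not the actual pve-qemu code): buffer consecutive 64KB clusters and only submit the combined write once an out-of-order cluster arrives or the buffer is full.

#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define CLUSTER_SIZE   (64 * 1024)
#define BATCH_CLUSTERS 64                      /* combine up to 4MB per write */

static unsigned char batch[CLUSTER_SIZE * BATCH_CLUSTERS];
static off_t  batch_start = -1;                /* image offset of the first buffered cluster */
static size_t batch_len   = 0;                 /* bytes currently buffered */

static int flush_batch(int fd)
{
    if (batch_len == 0)
        return 0;
    ssize_t n = pwrite(fd, batch, batch_len, batch_start);
    batch_len = 0;
    batch_start = -1;
    return (n < 0) ? -1 : 0;
}

/* Called for each restored cluster; 'offset' is its position in the image. */
static int write_cluster(int fd, const void *data, off_t offset)
{
    int sequential = (batch_len > 0 && offset == batch_start + (off_t)batch_len);
    if (!sequential || batch_len == sizeof(batch)) {
        if (flush_batch(fd) < 0)               /* out-of-order or full: submit what we have */
            return -1;
        batch_start = offset;
    }
    memcpy(batch + batch_len, data, CLUSTER_SIZE);
    batch_len += CLUSTER_SIZE;
    return 0;
}
/* The caller would call flush_batch(fd) once more after the last cluster. */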
> But I suppose they're mostly ordered?
Yes - it depends on how many writes happen during the backup...
On 20/07/16 at 17:46, Dietmar Maurer wrote:
This is called from restore_extents, where a comment precisely says "try
to write whole clusters to speedup restore", so this means we're writing
64KB-8Byte chunks, which is giving a hard time to Ceph-RBD because this
means lots of ~64KB IOPS.
So
On 20/07/2016 4:24 PM, Eneko Lacunza wrote:
Yesterday our 9-osd 3-node cluster restored a backup at 6MB/s... it
was very boring, painful and expensive to wait for it
One of the reasons we migrated away from ceph - snapshot and backup
restores were unusably slow.
--
Lindsay Mathieson
> This is called from restore_extents, where a comment precisely says "try
> to write whole clusters to speedup restore", so this means we're writing
> 64KB-8Byte chunks, which is giving a hard time to Ceph-RBD because this
> means lots of ~64KB IOPS.
>
> So, I suggest the following solution to
Hi again,
I've been looking around the backup/restore code a bit. I'm focused on
restore acceleration on Ceph RBD right now.
Sorry if I have gotten something wrong; I have never developed for Proxmox/Qemu.
I see in line 563 of file
https://git.proxmox.com/?p=pve-qemu-kvm.git;a=blob;f=debian/patc
Hi all,
On 16/02/16 at 15:52, Stefan Priebe - Profihost AG wrote:
On 16.02.2016 at 15:50, Dmitry Petuhov wrote:
16.02.2016 13:20, Dietmar Maurer wrote:
Storage Backend is ceph using 2x 10Gbit/s and i'm able to read from it
with 500-1500MB/s. See below for an example.
The backup process
De: "Stefan Priebe"
À: "pve-devel"
Envoyé: Mercredi 2 Mars 2016 08:28:52
Objet: R
vel" , "dietmar"
> Envoyé: Mardi 1 Mars 2016 11:55:21
> Objet: Re: [pve-devel] Speed up PVE Backup
>
> )
>
> On 01.03.2016 at 11:03, Alexandre DERUMIER wrote:
>> Hi, qemu devs have sent patches to configure the backup cluster size:
>>
>> http://li
tersize(s) ?
- Original message -
From: "Stefan Priebe"
To: "pve-devel" , "dietmar"
Sent: Tuesday, 1 March 2016 11:55:21
Subject: Re: [pve-devel] Speed up PVE Backup
)
On 01.03.2016 at 11:03, Alexandre DERUMIER wrote:
> Hi, qemu devs have sent patches to configure
--
De: "dietmar"
À: "aderumier" , "pve-devel"
Envoyé: Vendredi 19 Février 2016 09:17:14
Objet: Re: [pve-devel] Speed up PVE Backup
> I wonder how the native qemu backup blockjob performs vs the proxmox vma backup
> format?
We use the qemu backup blockjob, just slightly modified...
> Any reason to use 64k blocks and not something bigger? Backups should be
> sequential on all storage backends so something bigger shouldn't hurt.
Maybe you can raise that question on the qemu devel list?
> Any reason to use 64k blocks and not something bigger?
Again, take a look at the qemu backup code. I guess it is
possible, but not trivial.
dahead disable after bytes=0
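The line above is cut off; it presumably refers to the RBD readahead options. For reference, a hedged ceph.conf sketch with the standard option names (the values are illustrative, not taken from this thread):

[client]
# Keep client-side readahead active even for long sequential reads
# ("0" disables the automatic cut-off after N bytes).
rbd readahead disable after bytes = 0
rbd readahead trigger requests = 10
rbd readahead max bytes = 4194304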
- Original message -
From: "Stefan Priebe"
To: "dietmar" , "pve-devel"
Sent: Thursday, 18 February 2016 21:26:13
Subject: Re: [pve-devel] Speed up PVE Backup
Hello Dietmar,
On 16.02.2016 at 14:55, Stefan Priebe - Profihost AG wrote:
> On 16.
e/html/qemu-devel/2013-03/msg00387.html
(I haven't read all the discussion)
I wonder how the native qemu backup blockjob performs vs the proxmox vma backup
format?
- Original message -
From: "Stefan Priebe"
To: "dietmar" , "pve-devel"
Sent: Thursday, 18 Febru
Hello Dietmar,
On 16.02.2016 at 14:55, Stefan Priebe - Profihost AG wrote:
On 16.02.2016 at 12:58, Stefan Priebe - Profihost AG wrote:
On 16.02.2016 at 11:55, Dietmar Maurer wrote:
Is it enough to just change these:
The whole backup algorithm is based on 64KB blocksize, so it
is not trivial (or impossible?) to change that.
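Purely as an illustration of why the block size is hard to change (a schematic sketch with invented names, not the actual qemu or vma code): the 64KB cluster size is effectively a compile-time constant baked into the copy loop, so the whole pipeline reads and emits 64KB units.

#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

#define BACKUP_CLUSTER_SIZE (64 * 1024)   /* fixed 64KB cluster assumed everywhere */

static int backup_copy(int src_fd, int dst_fd, int64_t total_size)
{
    unsigned char buf[BACKUP_CLUSTER_SIZE];

    for (int64_t off = 0; off < total_size; off += BACKUP_CLUSTER_SIZE) {
        ssize_t n = pread(src_fd, buf, sizeof(buf), off);  /* one 64KB read per cluster */
        if (n <= 0)
            return -1;
        if (pwrite(dst_fd, buf, (size_t)n, off) != n)      /* one 64KB unit downstream */
            return -1;
    }
    return 0;
}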
On 16.02.2016 at 15:50, Dmitry Petuhov wrote:
> 16.02.2016 13:20, Dietmar Maurer wrote:
>>> Storage Backend is ceph using 2x 10Gbit/s and i'm able to read from it
>>> with 500-1500MB/s. See below for an example.
>> The backup process reads 64KB blocks, and it seems this slows down ceph.
>> This i
16.02.2016 13:20, Dietmar Maurer wrote:
Storage Backend is ceph using 2x 10Gbit/s and i'm able to read from it
with 500-1500MB/s. See below for an example.
The backup process reads 64KB blocks, and it seems this slows down ceph.
This is a known behavior, but I found no solution to speed it up.
J
On 16.02.2016 at 12:58, Stefan Priebe - Profihost AG wrote:
> On 16.02.2016 at 11:55, Dietmar Maurer wrote:
>>> Is it enough to just change these:
>>
>> The whole backup algorithm is based on 64KB blocksize, so it
>> is not trivial (or impossible?) to change that.
>>
>> Besides, I do not underst
On 16.02.2016 at 11:55, Dietmar Maurer wrote:
>> Is it enough to just change these:
>
> The whole backup algorithm is based on 64KB blocksize, so it
> is not trivial (or impossible?) to change that.
>
> Besides, I do not understand why reading 64KB is slow - ceph libraries
> should have/use a re
On Tue, 16 Feb 2016 11:55:07 +0100 (CET)
Dietmar Maurer wrote:
>
> Besides, I do not understand why reading 64KB is slow - ceph libraries
> should have/use a reasonable readahead cache to make it fast?
>
Due to the nature of the operation, that read is considered a random
block read by ceph, so
> Is it enough to just change these:
The whole backup algorithm is based on 64KB blocksize, so it
is not trivial (or impossible?) to change that.
Besides, I do not understand why reading 64KB is slow - ceph libraries
should have/use a reasonable readahead cache to make it fast?
On 16.02.2016 at 11:20, Dietmar Maurer wrote:
>> Storage Backend is ceph using 2x 10Gbit/s and i'm able to read from it
>> with 500-1500MB/s. See below for an example.
>
> The backup process reads 64KB blocks, and it seems this slows down ceph.
> This is a known behavior, but I found no solution
> Storage Backend is ceph using 2x 10Gbit/s and i'm able to read from it
> with 500-1500MB/s. See below for an example.
The backup process reads 64KB blocks, and it seems this slows down ceph.
This is a known behavior, but I found no solution to speed it up.
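To quantify the gap between small and large reads, one could benchmark the RBD image directly; a hedged fio job file (fio's rbd engine exists, but the pool/image names and values here are placeholders, not from this thread):

; compare 64KB vs 4MB sequential reads against an RBD image
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
direct=1
rw=read
runtime=60
time_based

[read-64k]
bs=64k

[read-4m]
stonewall
bs=4m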
On 16.02.2016 at 11:02, Martin Waschbüsch wrote:
>
>> On 16.02.2016 at 10:32, Stefan Priebe - Profihost AG
>> wrote:
>>
>> On 16.02.2016 at 09:57, Martin Waschbüsch wrote:
>>> Hi Stefan,
>>
This is PVE 3.4 running Qemu 2.4
>>>
>>> To me this looks like the compression is the limiting fac
Stefan,
> The output after 15 minutes is:
> INFO: starting new backup job: vzdump 132 --remove 0 --mode snapshot
> --storage vmbackup --node 1234
> INFO: Starting Backup of VM 132 (qemu)
> INFO: status = running
> INFO: update VM 132: -lock backup
> INFO: backup mode: snapshot
> INFO: ionice prior
If you have cgroup restrictions on your VM, the VM is backed up under those
restrictions.
2016-02-16 12:02 GMT+02:00 Martin Waschbüsch :
>
> > On 16.02.2016 at 10:32, Stefan Priebe - Profihost AG <
> s.pri...@profihost.ag> wrote:
> >
> > On 16.02.2016 at 09:57, Martin Waschbüsch wrote:
> >> Hi Stef
> On 16.02.2016 at 10:32, Stefan Priebe - Profihost AG
> wrote:
>
> On 16.02.2016 at 09:57, Martin Waschbüsch wrote:
>> Hi Stefan,
>
>>> This is PVE 3.4 running Qemu 2.4
>>
>> To me this looks like the compression is the limiting factor? What speed do
>> you get for this NFS mount when jus
On 16.02.2016 at 09:57, Martin Waschbüsch wrote:
> Hi Stefan,
>> This is PVE 3.4 running Qemu 2.4
>
> To me this looks like the compression is the limiting factor? What speed do
> you get for this NFS mount when just copying an existing file?
Which compression? There is only FS compression on
On 16.02.2016 at 09:54, Andreas Steinel wrote:
> Hi Stefan,
>
> That's really slow.
Yes
> I use a similar setup, but with ZFS and I backup 6 nodes in parallel to
> the storage and saturate the 1 GBit network connection.
Currently vzdump / qemu only uses around 100kb/s of the 10Gbit/s
connec
Hi Stefan,
> On 16.02.2016 at 09:22, Stefan Priebe - Profihost AG
> wrote:
>
> Hi,
>
> is there any way to speed up PVE Backups?
>
> I'm trying to evaluate the optimal method for doing backups, but they take
> very long.
>
> I'm trying to use vzdump on top of nfs on top of btrfs using zlib
> com
Hi Stefan,
That's really slow.
I use a similar setup, but with ZFS and I backup 6 nodes in parallel to the
storage and saturate the 1 GBit network connection.
I use LZOP on the Proxmox side as the best tradeoff between size and
online-compression speed.
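For reference, a hedged example of selecting lzo compression for vzdump (the VM ID and storage name are placeholders; --compress is a standard vzdump option):

# one-off backup with lzo compression
vzdump 132 --storage vmbackup --mode snapshot --compress lzo

# or as a default in /etc/vzdump.conf
compress: lzo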
On Tue, Feb 16, 2016 at 9:22 AM, Stefan Prie
Hi,
is there any way to speed up PVE Backups?
I'm trying to evaluate the optimal method for doing backups, but they take
very long.
I'm trying to use vzdump on top of nfs on top of btrfs using zlib
compression.
The target FS is totally idle, but the backup is running at a very low speed.
The output