I'm just going to stick with the qemu-img option change for RBD for
now (which should cut snapshot time down drastically), and look
forward to this in the future.  I'd be happy to help get this moving,
but I'm not enough of a developer to lead the charge.
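
For reference, the "keep it on primary storage" snapshot everyone is
after boils down to a single RBD call.  A minimal sketch with the
python-rbd bindings (the pool and volume names below are placeholders,
and it assumes a standard /etc/ceph/ceph.conf):

    # Minimal sketch: create a snapshot directly on the Ceph primary
    # storage.  'cloudstack' and 'volume-1234' are placeholder names.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('cloudstack')

    image = rbd.Image(ioctx, 'volume-1234')
    image.create_snap('manual-snap-1')   # copy-on-write, done in seconds
    image.protect_snap('manual-snap-1')  # needed before cloning from it
    image.close()

    ioctx.close()
    cluster.shutdown()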

As far as renaming goes, I agree that maybe "backups" isn't the right
word.  That being said, calling a full-sized copy of a volume a
"snapshot" isn't the right word either.  Maybe "image" would be better?

I've also got my reservations about "accounts" vs "users" (I think
"departments" and "accounts or users", respectively, would be less
confusing), but that's a different thread.

Thank You,

Logan Barfield
Tranquil Hosting


On Mon, Feb 16, 2015 at 10:04 AM, Wido den Hollander <w...@widodh.nl> wrote:
>
>
> On 16-02-15 15:38, Logan Barfield wrote:
>> I like this idea a lot for Ceph RBD.  I do think there should still be
>> support for copying snapshots to secondary storage as needed (for
>> transfers between zones, etc.).  I really think that this could be
>> part of a larger move to clarify the naming conventions used for disk
>> operations.  Currently "Volume Snapshots" should probably really be
>> called "Backups".  So having "snapshot" functionality, and a "convert
>> snapshot to backup/template" would be a good move.
>>
>
> I fully agree that this would be a great addition.
>
> I won't be able to work on this any time soon though.
>
> Wido
>
>> Thank You,
>>
>> Logan Barfield
>> Tranquil Hosting
>>
>>
>> On Mon, Feb 16, 2015 at 9:16 AM, Andrija Panic <andrija.pa...@gmail.com> wrote:
>>> BIG +1
>>>
>>> My team should be submitting a patch to ACS for better KVM snapshots,
>>> including whole-VM snapshots, etc., but it's too early to give details...
>>> best
>>>
>>> On 16 February 2015 at 13:01, Andrei Mikhailovsky <and...@arhont.com> wrote:
>>>
>>>> Hello guys,
>>>>
>>>> I was hoping to get some feedback from the community on the subject of
>>>> having the ability to keep snapshots on the primary storage where it is
>>>> supported by the storage backend.
>>>>
>>>> The idea behind this functionality is to improve how snapshots are
>>>> currently handled on KVM hypervisors with Ceph primary storage. At the
>>>> moment, snapshots are taken on the primary storage and then copied to
>>>> the secondary storage. This method is very slow and inefficient even on
>>>> a small infrastructure, and on medium-sized deployments using snapshots
>>>> in KVM becomes nearly impossible. If you have tens or hundreds of
>>>> concurrent snapshots taking place you will get a bunch of timeouts and
>>>> errors, your network becomes clogged, etc. In addition, using these
>>>> snapshots to create new volumes or revert VMs is also slow and
>>>> inefficient. As above, with tens or hundreds of concurrent operations
>>>> most tasks will fail with errors or timeouts.
>>>>
>>>> At the moment, taking a single snapshot of a relatively small volume
>>>> (200GB or 500GB, for instance) takes tens if not hundreds of minutes.
>>>> Taking a snapshot of the same volume on Ceph primary storage takes a
>>>> few seconds at most! Similarly, converting a snapshot to a volume takes
>>>> tens if not hundreds of minutes when secondary storage is involved,
>>>> compared with seconds if done directly on the primary storage.
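
For concreteness, "directly on the primary storage" essentially means an
RBD clone from a protected snapshot.  A rough sketch with the python-rbd
bindings (pool, volume and snapshot names are placeholders):

    # Rough sketch: new volume cloned from a protected RBD snapshot.
    # The clone is copy-on-write, so it returns in seconds regardless of
    # the volume size.  All names are placeholders.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('cloudstack')

    rbd.RBD().clone(ioctx, 'volume-1234', 'manual-snap-1',
                    ioctx, 'volume-5678-from-snap',
                    features=rbd.RBD_FEATURE_LAYERING)

    ioctx.close()
    cluster.shutdown()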
>>>>
>>>> I suggest that CloudStack should have the ability to keep volume
>>>> snapshots on the primary storage where the storage backend supports
>>>> it, perhaps via a per-primary-storage setting that enables this
>>>> functionality. This will be beneficial for Ceph primary storage on
>>>> KVM hypervisors, and perhaps on XenServer once Ceph is supported
>>>> there in the near future.
>>>>
>>>> This will greatly speed up the process of using snapshots on KVM, and
>>>> users will actually start using snapshotting rather than giving up in
>>>> frustration.
>>>>
>>>> I have opened the ticket CLOUDSTACK-8256, so please cast your vote if you
>>>> are in agreement.
>>>>
>>>> Thanks for your input
>>>>
>>>> Andrei
>>>>
>>>
>>>
>>> --
>>>
>>> Andrija Panić
