On 05/28/2015 04:48 PM, Logan Barfield wrote:
> Hi Star,
> 
> I'll +1 this.  I would like to see support for RBD snapshots as
> well, and maybe have a method to "backup" the snapshots to
> secondary storage.  Right now for large volumes it can take an hour
> or more to finish the snapshot.
> 

I fully agree that I would love to see this in CloudStack, but it's a
matter of the resources I have available.

My main focus currently is implementing proper IPv6 support into Basic
Networking. That will take quite some time. After that I'll look into
improving the RBD support even further.

However, I also have a $dayjob which requires time, so it will
probably take a while before I can look at this.

If somebody else wants to take a look at it, go ahead. I'm more than
happy to review any Pull Requests.

Wido

> I have already discussed this with Wido, and was able to determine
> that even without using native RBD snapshots we could improve the
> copy time by saving the snapshots as thin volumes instead of full
> raw files. Right now, when using RBD, the snapshot code
> specifically converts the volumes to a full raw file, whereas
> saving them as a qcow2 image would use less space. When restoring a
> snapshot, the code currently specifies the source image as being a
> raw file, but if we change the code to not specify the source image
> type, qemu-img should detect it automatically. We just need to
> verify that this works with all of the supported versions of
> libvirt/qemu before submitting a pull request.
> 
> Thank You,
> 
> Logan Barfield Tranquil Hosting
> 
> 
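For reference, a minimal sketch of the conversion Logan describes (paths, pool, and image names here are hypothetical, and this assumes qemu-img was built with rbd support). The current code forces a raw output file; the proposed change writes qcow2 and lets qemu-img probe the input format on restore:

```shell
# Proposed backup step: write the snapshot as a thin-provisioned qcow2
# image instead of a fully-allocated raw file.
qemu-img convert -O qcow2 /mnt/primary/snapshot-1234 \
    /mnt/secondary/snapshots/snapshot-1234.qcow2

# Proposed restore step: omitting "-f raw" lets qemu-img probe the
# source format, so both old raw and new qcow2 snapshots restore
# through the same code path.
qemu-img convert -O raw /mnt/secondary/snapshots/snapshot-1234.qcow2 \
    rbd:cloudstack/new-volume
```

Whether format probing behaves consistently across all supported qemu versions is exactly the open question above.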
> On Wed, May 27, 2015 at 9:18 PM, Star Guo <st...@ceph.me> wrote:
>> Hi everyone,
>> 
>> Since I have tested CloudStack 4.4.2 + KVM + RBD: deploying an
>> instance is fast, apart from the first deployment, because the
>> template has to be copied from secondary storage (NFS) to primary
>> storage (RBD). That is no problem. However, volume operations such
>> as creating a snapshot, creating a template, or deploying from a
>> template also take some time to finish, because data is copied
>> between primary and secondary storage. So I think that if we
>> supported the same RBD pool as secondary storage and used the Ceph
>> COW feature, it might reduce the time to just a few seconds.
>> (OpenStack can back both Glance and Cinder with the same RBD
>> cluster.)
>> 
>> Best Regards, Star Guo
>> 
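What Star describes maps onto Ceph's RBD layering feature. A rough sketch with the standard rbd CLI (pool and image names are made up for illustration):

```shell
# Snapshot the template image, protect the snapshot, and clone it.
# The clone is copy-on-write against the snapshot, so it is created in
# seconds regardless of the template's size -- no data is copied up
# front.
rbd snap create cloudstack/template-ubuntu@base
rbd snap protect cloudstack/template-ubuntu@base
rbd clone cloudstack/template-ubuntu@base cloudstack/volume-new

# Optionally, flattening later detaches the clone from its parent by
# copying the shared data in the background.
rbd flatten cloudstack/volume-new
```

This is the mechanism OpenStack exploits when Glance and Cinder share one RBD cluster.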
