Wido, Simon and Andrija – thanks a lot for your feedback, much appreciated.

Andrija – thanks, I may take you up on that offer, will contact you offline.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 27/07/2018, 20:13, "Andrija Panic" <andrija.pa...@gmail.com> wrote:

    Hi Dag,
    
    I'm a bit too late to add much that hasn't been mentioned above, but in
    general most things work (VM snapshots being the exception) - everything
    else is there: resizing volumes, downloading volumes, templates,
    snapshots, live migration, etc. We run this with stock Ubuntu 14.04
    libraries (recently upgraded to the versions from 16.04). Some features
    were originally missing (volume resize behaved differently for root vs
    data volumes), and hopefully the RAW vs QCOW2 format confusion inside
    the DB (what a mess... :) ) is also solved by now. Back in the day we
    carried a lot of small internal patches that unfortunately were never
    contributed back to the community - don't ask me why - apart from the
    proper snapshot lifecycle that is part of 4.8 and onwards (i.e. keep
    really only the last snapshot on CEPH, instead of 50+ garbage
    snapshots); that idea is sketched below.
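    
    (For illustration only - a minimal sketch of that "keep only the newest
    snapshot" idea using the python-rbd bindings; the pool and volume names
    are made up, and this is not the actual CloudStack patch:)
    
        import rados
        import rbd
        
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        ioctx = cluster.open_ioctx('cloudstack')    # hypothetical pool
        image = rbd.Image(ioctx, 'volume-1234')     # hypothetical volume
        try:
            # snapshot ids increase monotonically, so sorting by id
            # orders the snapshots by creation time
            snaps = sorted(image.list_snaps(), key=lambda s: s['id'])
            for snap in snaps[:-1]:                 # keep only the newest
                image.remove_snap(snap['name'])
        finally:
            image.close()
            ioctx.close()
            cluster.shutdown()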
    
    If you need more precise info, I could even organize a small demo account
    for you with a private CEPH offering for VM and DATA volumes (don't tell
    anyone :) ), so you can actually see it for yourself...
    
    Cheers
    Andrija
    
    

On Fri, 27 Jul 2018 at 16:28, Simon Weller <swel...@ena.com.invalid> wrote:
    
    > They're volume-based snapshots at this point. We've looked at what it
    > would take to support VM snapshots, but we're not there yet, as the
    > memory would need to be stored outside of the actual volume.
    >
    > Primary snapshots work well. We still need to reintroduce the code that
    > allows disabling the primary-to-secondary copying of snapshots, should
    > an organization not want that.
    >
    >
    > Templates are also pre-cached into Ceph to speed up deployment of VMs,
    > as Wido indicates below. This greatly reduces the secondary-to-primary
    > copying of template images (roughly the check sketched below).
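    >
    > (A minimal sketch of that "is the template already cached on primary?"
    > check with the python-rbd bindings - the pool and template names are
    > invented, not CloudStack's actual code path:)
    >
    >     import rados
    >     import rbd
    >
    >     cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    >     cluster.connect()
    >     ioctx = cluster.open_ioctx('cloudstack')    # hypothetical pool
    >     try:
    >         # only copy the template over from secondary storage if it
    >         # is not already present on this primary storage pool
    >         if 'template-centos7' not in rbd.RBD().list(ioctx):
    >             print('template missing: copy from secondary first')
    >     finally:
    >         ioctx.close()
    >         cluster.shutdown()
    >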
    > Live migration works well and has since Wido introduced the Ceph
    > features years ago.
    >
    > We have started looking at what it would take to support Ceph volume
    > replication between zones/regions, as that would be a great Business
    > Continuity feature.
    >
    >
    > ________________________________
    > From: Dag Sonstebo <dag.sonst...@shapeblue.com>
    > Sent: Friday, July 27, 2018 8:32 AM
    > To: dev@cloudstack.apache.org
    > Subject: Re: CEPH / CloudStack features
    >
    > Excellent, thanks Wido.
    >
    > When you say snapshotting – is this VM snapshots, volume snapshots or
    > both?
    >
    > How about live migration, does this work?
    >
    > Regards,
    > Dag Sonstebo
    > Cloud Architect
    > ShapeBlue
    >
    > On 27/07/2018, 13:41, "Wido den Hollander" <w...@widodh.nl> wrote:
    >
    >     Hi,
    >
    >     On 07/27/2018 12:18 PM, Dag Sonstebo wrote:
    >     > Hi all,
    >     >
    >     > I’m trying to find out more about CEPH compatibility with
    >     > CloudStack / KVM – i.e. trying to put together a feature matrix
    >     > of what works and what doesn’t compared to NFS (or other block
    >     > storage platforms).
    >     > There’s not a lot of up-to-date information on this – the
    >     > configuration guide on [1] is all I’ve located so far, apart from
    >     > a couple of one-liners in the official documentation.
    >     >
    >     > Could I get some feedback from the Ceph users in the community?
    >     >
    >
    >     Yes! First of all, Ceph is KVM-only. Other hypervisors do not support
    >     RBD (RADOS Block Device) from Ceph.
    >
    >     What is supported:
    >
    >     - Thin provisioning
    >     - Discard / fstrim (Requires VirtIO-SCSI)
    >     - Volume cloning
    >     - Snapshots
    >     - Disk I/O throttling (done by libvirt; see the sketch below)
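    >
    >     (A sketch of what that libvirt-side throttling looks like through
    >     the libvirt-python bindings - the domain name, device and limits
    >     are invented; CloudStack drives this from the offering's QoS
    >     settings:)
    >
    >         import libvirt
    >
    >         conn = libvirt.open('qemu:///system')
    >         dom = conn.lookupByName('i-2-42-VM')   # hypothetical instance
    >         # cap the RBD-backed disk at 1000 IOPS / 100 MB/s, live
    >         dom.setBlockIoTune('vda',
    >                            {'total_iops_sec': 1000,
    >                             'total_bytes_sec': 100 * 1024 * 1024},
    >                            libvirt.VIR_DOMAIN_AFFECT_LIVE)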
    >
    >     Meaning, when a template is deployed for the first time on a Primary
    >     Storage it is written to Ceph once, and every Instance created
    >     afterwards is a clone of that base image (roughly the sequence
    >     sketched below).
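    >
    >     (A minimal python-rbd sketch of that snapshot/protect/clone
    >     sequence - the pool, template and volume names are made up:)
    >
    >         import rados
    >         import rbd
    >
    >         cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    >         cluster.connect()
    >         ioctx = cluster.open_ioctx('cloudstack')  # hypothetical pool
    >         try:
    >             base = rbd.Image(ioctx, 'template-centos7')
    >             try:
    >                 base.create_snap('base')    # snapshot the template once
    >                 base.protect_snap('base')   # clones need protected snaps
    >             finally:
    >                 base.close()
    >             # each new root disk is a copy-on-write clone of that snap
    >             rbd.RBD().clone(ioctx, 'template-centos7', 'base',
    >                             ioctx, 'vm-42-root')
    >         finally:
    >             ioctx.close()
    >             cluster.shutdown()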
    >
    >     You can snapshot an RBD image and then have it copied to Secondary
    >     Storage. Now, I'm not sure if keeping the snapshot on Primary
    >     Storage and reverting works yet; I haven't looked at that recently.
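    >
    >     (For reference, a sketch of those snapshot-and-revert calls in
    >     python-rbd - the volume and snapshot names are invented:)
    >
    >         import rados
    >         import rbd
    >
    >         cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    >         cluster.connect()
    >         ioctx = cluster.open_ioctx('cloudstack')  # hypothetical pool
    >         image = rbd.Image(ioctx, 'volume-1234')   # hypothetical volume
    >         try:
    >             image.create_snap('manual-snap')      # stays on primary
    >             # ... later, reverting the volume rolls the image back:
    >             image.rollback_to_snap('manual-snap')
    >         finally:
    >             image.close()
    >             ioctx.close()
    >             cluster.shutdown()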
    >
    >     The snapshotting part on Primary Storage is probably something that
    >     needs some love and attention, but otherwise I think all other
    >     features are supported.
    >
    >     I would recommend a CentOS 7 or Ubuntu 16.04/18.04 hypervisor; both
    >     work just fine with Ceph.
    >
    >     Wido
    >
    >     > Regards,
    >     > Dag Sonstebo
    >     >
    >     > [1] http://docs.ceph.com/docs/master/rbd/rbd-cloudstack/
    >     >
    >     >
    >
    >
    >
    >
    >
    
    -- 
    
    Andrija Panić
    
