I am using openstack so need this to be fully automated and apply to all my VMs.

If I could do what you mention at the hypervisor level, that would be much
easier.

The options you mention are, I guess, for very specific use cases and need
to be configured on a per-VM basis, whilst I am looking for a general "ceph
on steroids" approach for all my VMs without any maintenance.

Thanks again :)

-----Original Message-----
From: Jason Dillaman [mailto:dilla...@redhat.com] 
Sent: 16 March 2016 01:42
To: Daniel Niasoff <dan...@redactus.co.uk>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Local SSD cache for ceph on each compute node.

Indeed, well understood.

As a shorter-term workaround, if you have control over the VMs, you could
always just slice out an LVM volume from local SSD/NVMe and pass it through to 
the guest.  Within the guest, use dm-cache (or similar) to add a cache 
front-end to your RBD volume.  Others have also reported improvements by using 
the QEMU x-data-plane option and RAIDing several RBD images together within the 
VM.
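
For what it's worth, here is a rough sketch of that dm-cache approach
using lvmcache (the LVM front-end to dm-cache).  All device, VG, and
size names below are placeholders, so treat it as untested:

    # on the hypervisor: carve a slice out of the local SSD VG and
    # attach it to the guest as an extra virtio disk
    lvcreate -L 20G -n vm1-cache vg_ssd

    # inside the guest, assuming the RBD-backed data disk shows up as
    # /dev/vdb and the passed-through SSD slice as /dev/vdc
    pvcreate /dev/vdb /dev/vdc
    vgcreate vg_data /dev/vdb /dev/vdc
    lvcreate -l 100%PVS -n data vg_data /dev/vdb     # origin LV on RBD
    lvcreate --type cache-pool -L 18G -n data_cache vg_data /dev/vdc
    lvconvert --type cache --cachepool vg_data/data_cache vg_data/data

Add --cachemode writeback to the cache-pool step if you want write-back
rather than the write-through default, with the obvious caveat that a
dead local SSD then means lost writes.  (The x-data-plane bit, for
reference, was the experimental virtio-blk property on older QEMU,
e.g. "-device virtio-blk-pci,...,x-data-plane=on"; newer QEMU provides
the same thing via iothreads.)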

-- 

Jason Dillaman 


----- Original Message -----
> From: "Daniel Niasoff" <dan...@redactus.co.uk>
> To: "Jason Dillaman" <dilla...@redhat.com>
> Cc: ceph-users@lists.ceph.com
> Sent: Tuesday, March 15, 2016 9:32:50 PM
> Subject: RE: [ceph-users] Local SSD cache for ceph on each compute node.
> 
> Thanks.
> 
> Reassuring but I could do with something today :)
> 
> -----Original Message-----
> From: Jason Dillaman [mailto:dilla...@redhat.com]
> Sent: 16 March 2016 01:25
> To: Daniel Niasoff <dan...@redactus.co.uk>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Local SSD cache for ceph on each compute node.
> 
> The good news is such a feature is in the early stage of design [1].
> Hopefully this is a feature that will land in the Kraken release timeframe.
> 
> [1] http://tracker.ceph.com/projects/ceph/wiki/Rbd_-_ordered_crash-consistent_write-back_caching_extension
> 
> --
> 
> Jason Dillaman
> 
> 
> ----- Original Message -----
> > From: "Daniel Niasoff" <dan...@redactus.co.uk>
> > To: ceph-users@lists.ceph.com
> > Sent: Tuesday, March 15, 2016 8:47:04 PM
> > Subject: [ceph-users] Local SSD cache for ceph on each compute node.
> > 
> > Hi,
> > 
> > Let me start. Ceph is amazing, no it really is!
> > 
> > But a hypervisor reading and writing all its data over the network 
> > will add some latency to reads and writes.
> > 
> > So the hypervisor could do with a local cache, possibly SSD or even NVMe.
> > 
> > I've spent a while looking into this, but it seems really strange that 
> > so few people see the value of it.
> > 
> > Basically the cache would be used in two ways:
> > 
> > a) cache hot data
> > b) writeback cache for ceph writes
> > 
> > There is the RBD cache, but that isn't disk-based, and on a hypervisor 
> > memory is at a premium.
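> > 
> > (For reference, that memory-based RBD cache is configured per client
> > in ceph.conf, along these lines - the options are real, but the
> > values here are only illustrative of the RAM cost:
> > 
> >     [client]
> >     rbd cache = true
> >     # 64 MB of hypervisor RAM per attached volume
> >     rbd cache size = 67108864
> >     # start flushing once this many bytes are dirty
> >     rbd cache max dirty = 50331648
> >     rbd cache writethrough until flush = true
> > 
> > so every cached megabyte comes straight out of hypervisor RAM.)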
> > 
> > A simple solution would be to put a journal on each compute node and 
> > get each hypervisor to use its own journal. Would this work?
> > 
> > Something like this
> > http://sebastien-han.fr/images/ceph-cache-pool-compute-design.png
> > 
> > Can this be achieved?
> > 
> > A better explanation of what I am trying to achieve is here
> > 
> > http://opennebula.org/cached-ssd-storage-infrastructure-for-vms/
> > 
> > This talk, if it gets voted in, looks interesting - 
> > https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presentation/6827
> > 
> > Can anyone help?
> > 
> > Thanks
> > 
> > Daniel
> > 
> 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
