Hi Adam,

We provide various volume types which differ in

- performance (implemented via different IOPS QoS specifications, not via 
different hardware; sketched just after this list),
- service quality (e.g. volumes on a Ceph pool that is on Diesel-backed 
servers, so via separate hardware),
- a combination of the two,
- geographical location (with a second Ceph instance in another data centre).
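
For the QoS-based flavours, the gist is one QoS spec per service level, each 
tied to its own volume type. A minimal sketch (the names and IOPS numbers are 
made up, and the exact CLI syntax may vary a bit between releases):

  # Front-end QoS specs: the limits are enforced by the hypervisor (libvirt),
  # so these types can all live on the very same Ceph pool and hardware.
  cinder qos-create standard-iops consumer=front-end \
      read_iops_sec=100 write_iops_sec=100
  cinder qos-create high-iops consumer=front-end \
      read_iops_sec=1000 write_iops_sec=1000

  # One volume type per service level, associated with its QoS spec.
  cinder type-create standard
  cinder type-create high-iops
  cinder qos-associate <standard-iops-qos-id> <standard-type-id>
  cinder qos-associate <high-iops-qos-id> <high-iops-type-id>

Users then just pick the volume type when they create a volume.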

I think it is absolutely realistic/manageable to use the same Ceph cluster for 
various use cases.
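
For the separate-hardware flavours on the same cluster, it is basically one 
Cinder RBD backend per Ceph pool, with the dedicated pool pinned to its own 
CRUSH rule (how you build that rule depends on your CRUSH hierarchy). A rough 
sketch with made-up pool and backend names:

  # cinder.conf: one backend per pool, both pools on the same Ceph cluster
  [DEFAULT]
  enabled_backends = rbd-standard,rbd-critical

  [rbd-standard]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_user = cinder
  volume_backend_name = rbd-standard

  [rbd-critical]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes-critical
  rbd_user = cinder
  volume_backend_name = rbd-critical

  # Route a volume type to the dedicated backend
  cinder type-create critical
  cinder type-key critical set volume_backend_name=rbd-critical

The CRUSH rule behind volumes-critical is what actually pins that pool to the 
dedicated (e.g. diesel-backed) servers; the geographical case works the same 
way, just with the second cluster's pools behind their own backends.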

HTH,
 Arne

> On 26 Oct 2015, at 14:02, Adam Lawson <alaw...@aqorn.com> wrote:
> 
> Has anyone deployed Ceph to accommodate different disk/performance 
> requirements? E.g. saving ephemeral storage and boot volumes on SSD, and less 
> important content such as object storage and Glance images on SATA, or 
> something along those lines?
> 
> Just looking at whether it's realistic (or to discover best practice) to use 
> the same Ceph cluster for both use cases...
> 
> //adam
> 
> Adam Lawson
> 
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
> 
