We've used Ceph to address the storage requirement in small clouds pretty well.
It works well with only two storage nodes and replication set to 2, and
because of radosgw you can share your small amount of storage between the
object store and the block store, avoiding the need to overprovision Swift-only
or Cinder-only capacity to handle usage unknowns. It's just one pool of storage.
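
For reference, a two-node setup like that mostly comes down to a few
ceph.conf defaults; a minimal sketch (values illustrative, not pulled from a
production config):

    # ceph.conf -- sketch for a two-node cluster with replication 2
    [global]
    osd pool default size = 2        # two copies, one per node
    osd pool default min size = 1    # keep serving I/O if one node is down
    osd crush chooseleaf type = 1    # place replicas on different hosts

RBD (for Cinder/Glance) and radosgw then just get their own pools on the same
OSDs, which is where the "one pool of storage" effect comes from.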

You're right: using LVM is like telling your users "don't do pets," but then
having pets at the heart of your system. When you lose one, you lose a lot.
With a small Ceph cluster, you can take out one of the nodes, burn it to the
ground, put it back, and it just works. No pets.
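
The rebuild really is routine; roughly (standard Ceph commands, OSD ids vary):

    # remove the dead node's OSDs from the cluster
    ceph osd out <osd-id>
    ceph osd crush remove osd.<osd-id>
    ceph auth del osd.<osd-id>
    ceph osd rm <osd-id>
    # reinstall the host, re-create its OSDs, then watch it backfill
    ceph status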

Do consider ceph for the small use case.

Thanks,
Kevin

________________________________
From: Robert Starmer [rob...@kumul.us]
Sent: Monday, February 08, 2016 1:30 PM
To: Ned Rhudy
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] RAID / stripe block storage volumes

Ned's model is the model I meant by "multiple underlying storage services".
Most of the systems I've built are LVM-only, a few added Ceph as an
alternative/live-migration option, and one used Gluster due to size.
Note that the environments I've worked with are generally small (~20
compute nodes), so huge Ceph environments aren't common for me. I am also
working on a project where the storage backend is entirely NFS...

And I think users are more and more educated to assume that nothing is
guaranteed. There is the realization, at least for a good set of the customers
I've worked with (and I try to educate the non-believers), that the way you get
the best effect from a system like OpenStack is to consider everything disposable.
The one gap I've seen is that plenty of folks don't deploy Swift,
and without some form of object store there's still the question of where you
place your datasets so they can be quickly recovered (and how you keep
them up to date if you do have one). With VMs, there's the idea that you
can recover quickly because the "dataset", e.g. your OS, is already there for
you, but in plenty of small environments that's only as true as the Glance
repository (guess what's usually backing that when there's no Swift around...).

So I see the issue as a holistic one: how do you show operators/users that they
should consider everything disposable if we only look at the currently running
instance as the "thing"? Somewhere you still likely need some form of
distributed resilience (and yes, I can see using the distributed Canonical,
CentOS, Red Hat, Fedora, Debian, etc. mirrors as your distributed image backup,
but what about the database content, etc.?).

Robert

On Mon, Feb 8, 2016 at 1:44 PM, Ned Rhudy (BLOOMBERG/ 731 LEX)
<erh...@bloomberg.net> wrote:
In our environments, we offer two types of storage. Tenants can either use
Ceph/RBD and trade speed/latency for reliability and protection against
physical disk failures, or they can launch instances that are realized as LVs
on an LVM VG that we create on top of a RAID 0 spanning all but the OS disk on
the hypervisor. This lets users elect to go all-in on speed and sacrifice
reliability for applications where replication/HA is handled at the app level,
where the data on the instance is sourced from elsewhere, or where they just
don't care much about the data.
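
For the curious, the LVM side is just nova's LVM image backend; a rough
sketch of the hypervisor config (VG name invented for illustration):

    # nova.conf on the hypervisor
    [libvirt]
    images_type = lvm                # instance disks become LVs
    images_volume_group = nova_fast  # VG on top of the RAID 0

The Ceph/RBD side is the usual RBD-backed setup, nothing exotic.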

There are some further changes to our approach that we would like to make down 
the road, but in general our users seem to like the current system and being 
able to forgo reliability or speed as their circumstances demand.

From: j...@topjian.net
Subject: Re: [Openstack-operators] RAID / stripe block storage volumes
Hi Robert,

Can you elaborate on "multiple underlying storage services"?

The reason I asked the initial question is that historically we've made our
block storage service resilient to failure. We made our compute environment
resilient to failure too, but over time we've seen users become more educated
about coping with compute failure. As a result, we've been able to be more
lenient about how resilient we build our compute environments.

We've been discussing whether it's possible to translate that same idea to
block storage. Rather than run a large HA storage cluster (whether Ceph,
Gluster, NetApp, etc.), is it possible to offer simple, single LVM volume
servers and push the failure handling onto the user?
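
To make that concrete, I'm picturing something like giving each standalone LVM
box its own backend and volume type, so users can see the failure domains and
spread their own copies across them; a sketch with invented names:

    cinder type-create lvm-node1
    cinder type-key lvm-node1 set volume_backend_name=lvm-node1
    cinder create --volume-type lvm-node1 --display-name data-a 100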

Of course, this doesn't work for all types of use cases and environments. We
still have projects which require the cloud to own more of the responsibility
for failure than the users do.

But for environments where we offer general-purpose / best-effort compute and
storage, what methods are available to help users be resilient to block
storage failures?

Joe

On Mon, Feb 8, 2016 at 12:09 PM, Robert Starmer
<rob...@kumul.us> wrote:
I've always recommended providing multiple underlying storage services rather
than adding the overhead to the VM. So, no: not in any of my systems or any
I've worked with.

R



On Fri, Feb 5, 2016 at 5:56 PM, Joe Topjian
<j...@topjian.net> wrote:
Hello,

Does anyone have users RAID'ing or striping multiple block storage volumes from 
within an instance?

If so, what was the experience? Good, bad, possible but with caveats?
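
For concreteness, I mean something along these lines inside the guest, across
two attached volumes (device names will vary):

    # stripe two cinder volumes together as RAID 0
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt/data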

Thanks,
Joe

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
