>> Just out of curiosity. Why you are using cephfs instead of rbd?

[We're not using it *instead of* rbd, we're using it *in addition to*
 rbd.  For example, our OpenStack users' cinder volumes are stored in
 rbd.]
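
(For reference, the relevant bits of cinder.conf look roughly like
the following.  This is a sketch, not our exact config; the driver's
module path and the pool name vary between releases and setups:)

    # Store cinder volumes as RBD images in the "volumes" pool
    volume_driver=cinder.volume.driver.RBDDriver
    rbd_pool=volumes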

To expand on what my colleague Jens-Christian wrote:

> Two reasons:

> - we are still on Folsom

What we want to achieve is to have a shared "instance store"
(i.e. "/var/lib/nova/instances") across all our nova-compute nodes, so
that we can e.g. live-migrate instances between different hosts.  And we
want to use Ceph for that.
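
(Concretely, that means mounting the same CephFS tree at
"/var/lib/nova/instances" on every compute node.  A minimal sketch,
assuming the kernel client and a monitor at the placeholder address
192.168.0.1; ceph-fuse would work just as well:)

    # Mount CephFS as the shared instance store
    # (run on each nova-compute node):
    mount -t ceph 192.168.0.1:6789:/ /var/lib/nova/instances \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # Or persistently via /etc/fstab (one line):
    # 192.168.0.1:6789:/ /var/lib/nova/instances ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime 0 2

With every compute node seeing the same directory, live migration
works because the instance's disk files are already visible on the
target host.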

In Folsom (and, I think, in Grizzly as well), this isn't
straightforward to do with RBD.  A feature[1] that makes it easier
was merged into Havana(-3) just two weeks ago.

> - Experience with "shared storage" as this is something our customers
> are asking for all the time

Yes, people want shared storage that they can access in a POSIXly way
from multiple VMs.  CephFS is a relatively easy way to give them that,
though I don't consider it "production-ready" - mostly because secure
isolation between different tenants is hard to achieve.
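
(To make the isolation point concrete: the cephx capabilities we can
hand out are coarse.  A sketch, with "client.tenant1" and the pool
name purely hypothetical:)

    # Key for one tenant's VMs -- names are made up:
    ceph auth get-or-create client.tenant1 \
        mon 'allow r' \
        mds 'allow' \
        osd 'allow rw pool=data'

    # The mds cap is all-or-nothing: any client that can mount the
    # filesystem can traverse the whole namespace, so keeping one
    # tenant out of another's directories comes down to ordinary
    # mode bits alone.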
-- 
Simon.