I've recently accepted the fact that CephFS is not stable enough for
production, based on 1) a recent discussion this week with Inktank
engineers, 2) the discovery that the documentation now states this
explicitly all over the place (http://eu.ceph.com/docs/wip-3060/cephfs/),
and 3) a reading of the recent bug list (http://tracker.ceph.com/issues/6613),
which mentions things such as Samba no longer working (and I absolutely
need Samba to work...).

I've come up with 3 alternatives, the last is my own and I think will work
really well, but I wanted people more knowledgeable than me to see if there
are any holes with it:

Alternatives

1) NFS over RBD (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/)
-- rough steps sketched below the list
2) NFS-Ganesha for Ceph (
https://github.com/nfs-ganesha/nfs-ganesha/blob/master/src/nfs-ganesha.ceph.init)
3) Create a large CentOS 6.4 VM (e.g. 15 TB: 1 TB for the OS using ext4,
the remaining 14 TB using either ext4 or XFS) on RBD, and then install NFS
and Samba on it.
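
For reference, here is roughly what I understand #1 to involve on the NFS
gateway box -- a minimal sketch only, and the image name, size, mount point,
and client subnet are just placeholders:

    # create and map an RBD image, then put a filesystem on it
    rbd create nfsshare --size 1048576        # size is in MB, so ~1 TB
    rbd map nfsshare                          # shows up as /dev/rbd0 (or /dev/rbd/rbd/nfsshare)
    mkfs.ext4 /dev/rbd0
    mkdir -p /export/nfsshare
    mount /dev/rbd0 /export/nfsshare

    # export it over NFS
    echo "/export/nfsshare 192.168.1.0/24(rw,no_root_squash,async)" >> /etc/exports
    exportfs -ra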

What do people think of the above alternatives?

Here are my thoughts on each:

#1 --> Seems a touch complicated to set up, and I'm not sure how performant
and "production stable" it is.
#2 --> I couldn't find much documentation on this, so I'm not sure exactly
how it works, or how 1) performant and 2) "production stable" it is.
#3 --> Since RBD is production stable, and since installing NFS and/or
Samba on a normal Linux VM is also production stable, this seems to give me
everything I'd need (rough sketch below), right? Or am I missing something?
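
To make #3 concrete, here is roughly what I would do inside the VM once the
14 TB data disk (backed by RBD) shows up as, say, /dev/vdb -- the share
name, paths, and subnet below are placeholders:

    # filesystem for the data disk
    mkfs.xfs /dev/vdb
    mkdir -p /export/data
    mount /dev/vdb /export/data

    # NFS export
    echo "/export/data 192.168.1.0/24(rw,no_root_squash,async)" >> /etc/exports
    exportfs -ra

    # minimal Samba share in /etc/samba/smb.conf
    [data]
        path = /export/data
        read only = no
        browseable = yes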

In my understanding, with option #3 the VM's RBD image would be spread
across multiple OSDs on multiple machines, so I'd get speed, reliability,
and automatic self-healing. I'd probably put this VM on a host with 6
bonded 1 GbE NICs, so the NFS and Samba (CIFS) sharing should be able to
use roughly 6 Gb/s of aggregate bandwidth. I might even manage the VM via
CloudStack, so in the rare case that the host fails, CloudStack would
restart the VM on another host. There would be some downtime, but for my
purposes that's acceptable for now, at least until CephFS becomes
production stable, which is supposed to happen in Q3 2014.
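
For the bonding, this is the sort of host-side config I have in mind
(CentOS-style ifcfg files; the bond mode, IP, and NIC names are just what I
would try first):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=static
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

    # one of the six slaves, e.g. /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

My understanding is that with 802.3ad any single client stream only gets
one link's worth (~1 Gb/s), so the ~6 Gb/s is aggregate across clients.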

I can't see any holes in alternative #3 -- it's simple, reasonably fast,
and uses production-stable technologies. (I can even control quotas using
normal LVM volume sizes; a rough sketch is below.)
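
For the quota idea, something along these lines instead of one big
filesystem on the data disk -- a sketch only, and the volume group and LV
names are placeholders:

    pvcreate /dev/vdb
    vgcreate datavg /dev/vdb
    lvcreate -L 2T -n projects datavg         # the LV size acts as the quota for this share
    mkfs.xfs /dev/datavg/projects
    mkdir -p /export/projects
    mount /dev/datavg/projects /export/projects

    # raise a share's "quota" later by growing the LV and the filesystem
    lvextend -L +1T /dev/datavg/projects
    xfs_growfs /export/projects

Thoughts?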

-Sidharta