> So .. the idea was that ceph would provide the required clustered filesystem 
> element,
>  and it was the only FS that provided the required "resize on the fly and 
> snapshotting" things that were needed.
> I can't see it working with one shared LUN. In theory I can't see why it 
> couldn't work, but I have no idea how the likes of
> VMFS achieve locking across the cluster with single LUNs; I certainly 
> don't know of anything within Linux that could do it.

Linux also has a few clustered filesystems, e.g. GFS2 or OCFS2.
I'm not sure how well suited these are for fileservers though, since they will 
have to do lots of locking of files.

> I guess I could see a clustered FS like Ceph providing something similar to a 
> software raid 1, where 
> two volumes were replicated and access from a couple of hosts point to the 
> two different backend "halves" of the raid via a load balancer?
* Ceph RBD (which is stable) just gives you a block device. 
This is "similar" to iSCSI, except that the data is distributed across x Ceph 
nodes. 
Just as with iSCSI, you should not mount this in two locations unless you run a 
clustered filesystem on it (e.g. GFS2 / OCFS2).
* CephFS gives you a clustered POSIX filesystem. You can run NFS/CTDB directly 
on top of this.
In theory this is what you are looking for, except that it isn't fully mature 
yet.
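To make the difference concrete, here is a rough sketch of how the two look from a client. All names (the pool "rbd", the image "vmstore", the monitor address, the secret file path) are illustrative placeholders, and these commands obviously need a running Ceph cluster:

```shell
# RBD: create and map an image -- it shows up as a plain local block device.
# Format it with a clustered FS before mounting it from more than one host.
rbd create rbd/vmstore --size 102400     # 100 GB image (pool/image names assumed)
rbd map rbd/vmstore                      # appears as e.g. /dev/rbd0

# CephFS: a clustered POSIX filesystem -- safe to mount on several clients
# at the same time (monitor address and keyring path are placeholders).
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
```

The key point: the RBD device itself gives you no cross-host coordination, while CephFS handles the locking between clients for you.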

> The other option is to ditch the HA features and just go with Samba on 
> top of ZFS, which could still provide the snapshots we need,
> although it's a step backwards, but then I don't like the sound of the CTDB 
> complexity or performance problems cited.
> I guess people just don't do HA in file sharing roles.. 
Most clusters will be HA but not active-active.
As mentioned above, it is possible, but you might run into performance issues 
with file locking. (It's been a while since I did things with GFS.)
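For what it's worth, the CTDB side is not that complex configuration-wise once you have a shared clustered filesystem underneath; a minimal active-active setup is roughly two small files plus one smb.conf line. The addresses and interface name below are placeholders:

```
# /etc/ctdb/nodes -- private addresses of all cluster nodes, same file on each
10.0.0.1
10.0.0.2

# /etc/ctdb/public_addresses -- floating IPs that CTDB moves between nodes
192.168.1.100/24 eth0
192.168.1.101/24 eth0

# smb.conf (on every node)
[global]
    clustering = yes
```

The complexity people complain about is mostly operational (the recovery lock on the shared FS, fencing behaviour), not the config itself.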

> What about NFS or the like for VM provision though - it's pretty similar just 
> with CTDB bolted on top?
With NFS you have the same issues: the POSIX filesystem it runs on needs to be 
clustered.
Once there is a Ceph driver for VMware (not sure what the status is, but I think 
they are working on it) you could plug your hypervisors directly into Ceph.
Still, running Ceph on a SAN defeats the purpose; you might as well use 
iSCSI + VMFS directly on the SAN.
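Until such a driver exists, one workable interim route is to export a CephFS mount over plain kernel NFS. A minimal sketch, with the mountpoint and network as placeholder values (note you must set an fsid explicitly, since CephFS does not have a stable local device number for the NFS server to derive one from):

```
# /etc/exports -- export a CephFS mountpoint to the hypervisor network
/mnt/cephfs  192.168.0.0/24(rw,sync,fsid=20,no_subtree_check)
```

You can then point VMware at it as an ordinary NFS datastore; the NFS head itself becomes the thing you have to make HA, which brings you back to CTDB or a failover IP.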

Cheers,
Robert



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
