Hi,
  The logic of going with a clustered file system is that CTDB needs one. The 
brief is simply to provide a "windows file sharing" cluster without using 
Windows, which would require us to buy loads of CALs for 2012, so that isn't an 
option. The SAN would provide this, but only if we bought the standalone head 
units which do the job I'm trying to achieve here, and they've been quoted at 
£30-40k.

So the idea was that Ceph would provide the required clustered filesystem 
element, and it was the only FS that provided the "resize on the fly and 
snapshotting" features we needed.
I can't see it working with one shared LUN. In theory I can't see why it 
couldn't, but I have no idea how the likes of VMFS achieve locking across a 
cluster with a single LUN, and I certainly don't know of anything within Linux 
that could do it.

I guess I could see a clustered FS like Ceph providing something similar to a 
software RAID 1, where two volumes are replicated and access from a couple of 
hosts points to the two different backend "halves" of the RAID via a load 
balancer?

The other option is to ditch the HA features and just go with Samba on top of 
ZFS, which could still provide the snapshots we need. It's a step backwards, 
but then I don't like the sound of the CTDB complexity or the performance 
problems people cite.

I guess people just don't do HA in file sharing roles. What about NFS or the 
like for VM provisioning, though? It's pretty much the same setup, just with 
CTDB bolted on top.

Thanks
Andy


>Ceph is designed to handle reliability in its system rather than in an 
>external one. You could set it up to use that storage and not do its own 
>replication, but then you lose availability if the OSD process hosts 
>disappear, etc. And the filesystem (which I guess is the part you're 
>interested in) is the least stable component of the overall system. Maybe if 
>you could describe more about what you think the system stack will look like?
>-Greg
>Software Engineer #42 @ http://inktank.com | http://ceph.com

> We have a need to provide HA storage to a few thousand users, 
> replacing our aging windows storage server.
>
> Our storage all comes from a group of equallogic SANs, and since we've 
> invested in these and vmware, the obvious
>
> Our storage cluster mentioned above needs to export SMB and maybe NFS, 
> using samba CTDB and whatever NFS needs (not looked into that yet). My 
> question is how to present the storage ceph needs given that I'd like 
> the SAN itself to provide the resilience through its replication and 
> snapshot capabilities, but for ceph to provide the logical HA (active/active 
> if possible).

To me, Ceph does not seem the most logical solution here.
Currently you could look at Ceph as a SAN replacement. 
It can also act as an object store, similar to Amazon S3 / OpenStack Swift.

The distributed filesystem part (CephFS) might be a fit but is not really 
production ready yet, as far as I know.
(I think people are using it, but I would not put thousands of users on it yet; 
e.g. it is missing an active-active HA option.)

Since you want to keep using the SAN and are using SMB and NFS clients (i.e. 
no native Ceph kernel client or qemu clients), it seems to me you are just 
adding another layer of complexity without any of the benefits Ceph can bring.

To be brutally honest, I would check whether the SAN supports NFS / SMB exports.

CTDB is nice, but it requires a shared filesystem, so you would have to look at 
GFS or something similar.
You can get it to work, but it is a bit of a PITA. 
There are also some performance considerations with those filesystems, so you 
should really do proper testing before any large-scale deployment.
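
To make the CTDB point concrete, here is a rough sketch of the pieces involved 
(all IPs, interfaces, and paths are invented for illustration). CTDB needs a 
private node list, a set of floating public IPs it moves between nodes, a 
recovery lock file on the shared (GFS or similar) filesystem, and Samba running 
in clustered mode:

```ini
# /etc/ctdb/nodes -- private address of each cluster node, one per line:
#   10.0.0.1
#   10.0.0.2

# /etc/ctdb/public_addresses -- floating IPs that CTDB fails over:
#   192.168.1.100/24 eth0
#   192.168.1.101/24 eth0

# CTDB's recovery lock must live on the shared filesystem, e.g.
#   CTDB_RECOVERY_LOCK="/gfs/.ctdb/lockfile"

# smb.conf additions for clustered Samba:
[global]
    clustering = yes
```

The recovery lock is what ties CTDB to the shared filesystem: it relies on the 
cluster FS's locking to arbitrate split-brain, which is why a plain shared LUN 
without a cluster-aware filesystem on top isn't enough.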

Cheers,
Robert van Leeuwen
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com