Hi All,
  First post, so please excuse any ignorance!
We need to provide HA storage for a few thousand users, replacing
our aging Windows storage server.
I would like to use Ceph on top of XFS (being sensible) - or ZFS if
I'm feeling game - but I need to understand something critical first.
Ceph, it seems, hinges on storage provided by multiple servers,
i.e. it builds a pool of protected storage from multiple hosts and
manages those hosts' underlying physical or virtual storage. All of
our storage comes from a group of EqualLogic SANs, and since we've
invested in those and in VMware, the obvious choice for any new hosts
is virtual machines, with their backend storage provisioned over
iSCSI - or perhaps as VMFS-backed partitions, though I'd prefer
direct iSCSI.
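
To make the question concrete, here's roughly what I'm picturing on
the Ceph side. Host names and device names below are just
placeholders - each sdb would be an iSCSI LUN presented to the VM
from the EqualLogic group:

  # From the admin node, one OSD per SAN-backed LUN on each
  # storage VM (ceph-deploy syntax):
  ceph-deploy osd create cephvm1:sdb
  ceph-deploy osd create cephvm2:sdb
  ceph-deploy osd create cephvm3:sdb

i.e. Ceph would just see ordinary block devices and never know the
SAN is underneath.
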
The storage cluster mentioned above needs to export SMB and possibly
NFS, using Samba with CTDB and whatever NFS requires (I haven't
looked into that yet). My question is how to present the storage Ceph
needs, given that I'd like the SAN itself to provide the resilience
through its replication and snapshot capabilities, while Ceph
provides the logical HA (active/active if possible).
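
For the SMB layer, what I had in mind is something like the sketch
below - a CephFS filesystem mounted on each gateway VM and shared out
by clustered Samba. Pool names, the mon address, and paths are all
made up:

  # Create pools and a CephFS filesystem (PG counts illustrative).
  ceph osd pool create data 128
  ceph osd pool create metadata 128
  ceph fs new cephfs metadata data

  # Mount CephFS on each Samba gateway node.
  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

  # smb.conf fragment on each gateway - clustered Samba over
  # the shared CephFS mount.
  [global]
      clustering = yes

  [users]
      path = /mnt/cephfs/users
      read only = no
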

How does Ceph work with a single shared backend iSCSI target, or am I
going down the wrong path altogether?
 I could just set up a load balancer pointing at a couple of hosts
with the same single iSCSI target mounted. Samba would take care of
the file locks, but any snapshotting or similar operation obviously
needs to quiesce the volume, which might cause all sorts of nastiness
in an active/active cluster. The active/active part would obviously
need to be at the file level - multiple simultaneous writers to the
same individual files just wouldn't work.
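
One thing I did come across: CTDB can apparently float a set of
public IPs across the nodes itself, which would remove the need for
the load balancer. Something like this (addresses entirely made up):

  # /etc/ctdb/nodes - one private address per gateway node
  10.0.0.1
  10.0.0.2

  # /etc/ctdb/public_addresses - floating addresses that CTDB
  # spreads across whichever nodes are up; clients connect here.
  10.0.1.100/24 eth0
  10.0.1.101/24 eth0

If a node dies, CTDB moves its public addresses to a surviving node
and tickles the clients so they reconnect.
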
Any pointers? Anyone done this sort of thing before?
Perhaps I'm better off creating a couple of replicated volumes and
letting Ceph manage those, providing active/active alongside
replication? Or should I stick to active/passive? There's no great
need for active/active, but it seems a waste to have a redundant host
doing nothing unless it's absolutely necessary.
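
To put it another way: if the SAN is already replicating underneath,
perhaps the Ceph pool only needs a modest replica count on top -
something like (pool name as in the sketch above):

  # Keep two copies of each object, on different hosts, and
  # keep serving data as long as one copy is available.
  ceph osd pool set data size 2
  ceph osd pool set data min_size 1
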
I've tried a couple of general Linux forums, but I'm either asking
the wrong questions, it's stupidly obvious, or people just don't
know!
Thanks
Andy