> This is "similar" to iSCSI, except that the data is distributed across x Ceph
> nodes.
> As with iSCSI, you should mount this in two locations unless you run a
> clustered filesystem (e.g. GFS / OCFS)
Oops, I meant: you should NOT mount this in two locations unless... :)
Cheers,
Robert
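To make Robert's correction concrete, here is a rough sketch of why a non-clustered filesystem on a shared RBD image must only be mounted in one place. All names here (the "rbd" pool, the "shared-disk" image, the mount points) are made up for illustration, and this obviously assumes a working Ceph cluster:

```shell
# On node A: create and map an RBD image, then put a local FS on it.
rbd create rbd/shared-disk --size 10240   # 10 GiB image
rbd map rbd/shared-disk                   # appears as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/shared

# Mapping and mounting the SAME image on node B as well would corrupt it:
# XFS caches metadata locally and assumes it is the only writer.
# Only a clustered filesystem (e.g. OCFS2 or GFS2) coordinates writers:
#   mkfs.ocfs2 /dev/rbd0
#   mount /dev/rbd0 /mnt/shared           # then safe on both nodes
```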
> So... the idea was that Ceph would provide the required clustered filesystem
> element, and it was the only FS that provided the "resize on the fly and
> snapshotting" features that were needed.
> I can't see it working with one shared LUN. In theory, I can't see why it
> couldn't work.
Hi,
The logic of going with a clustered file system is that CTDB needs it. The brief
is simply to provide a "Windows file sharing" cluster without using Windows,
which would require us to buy loads of CALs for 2012, so that isn't an option.
The SAN would provide this, but only if we bought the standalone
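For what it's worth, the CTDB dependency boils down to a few config fragments. This is a minimal sketch, assuming two nodes with made-up private addresses and a clustered filesystem already mounted at /mnt/ctdb (every name and address below is hypothetical):

```ini
; /etc/samba/smb.conf -- identical on every node
[global]
    clustering = yes

[shares]
    path = /mnt/ctdb/shares
    read only = no

; /etc/ctdb/nodes -- private addresses of all cluster nodes
; 10.0.0.1
; 10.0.0.2

; /etc/ctdb/public_addresses -- floating IPs CTDB moves between nodes on failover
; 192.168.1.100/24 eth0
```

The clustered filesystem matters because CTDB keeps its recovery lock there, which is how it fences a failed node; without a filesystem that is safely writable from all nodes at once, that mechanism does not work.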
> We have a need to provide HA storage to a few thousand users, replacing our
> aging Windows storage server.
>
> Our storage all comes from a group of equallogic SANs,
> and since we've invested in these and vmware, the obvious
>
> Our storage cluster mentioned above needs to export SMB and maybe
Ceph is designed to handle reliability in its system rather than in an
external one. You could set it up to use that storage and not do its
own replication, but then you lose availability if the OSD process
hosts disappear, etc. And the filesystem (which I guess is the part
you're interested in) is
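On the replication point: the sketch below shows how one could tell Ceph to keep only a single copy of the data and let the SAN's own RAID handle redundancy. The pool name "smbdata" is made up here, and the caveat above still applies: if the host running the OSD dies, the data is intact on the SAN but unavailable until an OSD can serve it again.

```shell
# Create a pool and rely on SAN-side redundancy instead of Ceph replication.
ceph osd pool create smbdata 128      # 128 placement groups (sizing is a guess)
ceph osd pool set smbdata size 1      # keep one copy only; the SAN's RAID
                                      # provides the redundancy
```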
Hi All,
First post, so please excuse any ignorance!
We have a need to provide HA storage to a few thousand users, replacing
our aging Windows storage server.
I would like to use Ceph alongside (being sensible) XFS - or ZFS if I'm
feeling game - but I need to understand something critical. Ceph it se