We have an NFS-to-RBD gateway with a large number of smaller RBDs. In
our use case we are allowing users to request their own RBD containers
that are then served up via NFS into a mixed cluster of clients. Our
gateway is quite beefy, probably more than it needs to be: 2x8-core
CPUs and 96GB RAM.
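For concreteness, the per-user provisioning step can be sketched with the standard rbd tooling. The pool name, user name, subnet, and export path below are made-up placeholders, not anything from our actual setup:

```shell
# Hypothetical names throughout: pool "userdata", user "alice",
# exports rooted at /srv/nfs. Adjust to taste.

# Create a 10 GiB image for the new user (size is in MB here).
rbd create userdata/alice --size 10240

# Map it on the gateway; udev also creates /dev/rbd/userdata/alice.
rbd map userdata/alice

# Ordinary filesystem on top, mounted where NFS will export it.
mkfs.xfs /dev/rbd/userdata/alice
mkdir -p /srv/nfs/alice
mount /dev/rbd/userdata/alice /srv/nfs/alice

# Add the export and reload the NFS server's export table.
echo '/srv/nfs/alice 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
```

Teardown is the reverse: remove the export, umount, then `rbd unmap` before deleting the image.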
On Mon, Jun 9, 2014 at 3:41 AM, Stuart Longland wrote:
On 03/06/14 02:24, Mark Nelson wrote:
> It's kind of a tough call. Your observations regarding the downsides of
> using NFS with RBD are apt. You could try throwing another distributed
> storage system on top of RBD and use Ceph for the replication/etc, but
> that's not really ideal either. [...]
On 06/02/2014 11:24 AM, Mark Nelson wrote:
>> A more or less obvious alternative for CephFS would be to simply create
>> a huge RBD and have a separate file server (running NFS / Samba /
>> whatever) use that block device as backend. Just put a regular FS on top
>> of the RBD and use it that way.
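The "huge RBD plus regular FS" approach described above boils down to a few commands; the pool, image, size, and mount point here are purely illustrative. One practical nicety of this setup is that the image can be grown online later:

```shell
# Illustrative names/sizes: pool "bigstore", image "nfsdata", 4 TiB.
# (rbd takes --size in MB by default; 4 TiB = 4194304 MB.)
rbd create bigstore/nfsdata --size 4194304
rbd map bigstore/nfsdata
mkfs.xfs /dev/rbd/bigstore/nfsdata
mount /dev/rbd/bigstore/nfsdata /export

# Later, grow the image and the filesystem without unmounting:
rbd resize bigstore/nfsdata --size 6291456   # 6 TiB
xfs_growfs /export
```

The file server then exports /export via NFS or Samba as usual; Ceph handles replication underneath, while the single mounted filesystem remains the single point of failure the thread is discussing.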
On 06/02/2014 10:54 AM, Erik Logtenberg wrote:
Hi,
In March 2013 Greg wrote an excellent blog posting regarding the (then)
current status of MDS/CephFS and the plans for going forward with
development.
http://ceph.com/dev-notes/cephfs-mds-status-discussion/
Since then, I understand progress has been slow, and Greg confirmed that
he didn't wa [...]