I was noting that there are other access modalities.

CephFS and RGW obviate the need for a client-side filesystem.  I would think ZFS suits the
OP, allowing sophisticated data protection, compression, and straightforward
NFS export.
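
For that path, a rough and untested sketch might look like the Python below.
The pool/image spec rbd/nfsdata, the zpool name tank, and the client subnet are
placeholders I made up; it also presumes root, the rbd kernel client, OpenZFS,
and a running NFS server on the box.

#!/usr/bin/env python3
# Sketch only, not a tested recipe: map an RBD image, build a ZFS pool on it,
# and share the dataset over NFS. Names (rbd/nfsdata, tank) and the subnet are
# invented for illustration; adjust to your environment.
import subprocess

def run(*cmd: str) -> str:
    """Echo a command, run it, and return its stripped stdout."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

# `rbd map` prints the block device it created (e.g. /dev/rbd0).
dev = run("rbd", "map", "rbd/nfsdata")

# Single-device zpool: RBD already provides replication underneath, so the ZFS
# layer here is for checksums, snapshots, compression, and the NFS share.
run("zpool", "create", "tank", dev)

# Enable compression and the built-in NFS export (exportfs-style options).
run("zfs", "set", "compression=lz4", "tank")
run("zfs", "set", "sharenfs=rw=@192.168.1.0/24", "tank")

From there the usual zfs snapshot/send tooling applies, and clients see the
export like any other NFS share.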

> On Mar 28, 2025, at 7:48 PM, Tim Holloway <t...@mousetech.com> wrote:
> 
> Good point, although if I read that properly, all of those present the data 
> as a raw block device and therefore have to have a client-side filesystem 
> mounted on them. Probably more sophisticated than what our intrepid 
> adventurer was looking for.
> 
> RBD as a base image for a VM is great for cloud server VMs, though!
> 
>> On 3/28/25 16:53, Anthony D'Atri wrote:
>> 
>>>> On Mar 28, 2025, at 4:38 PM, Tim Holloway <t...@mousetech.com> wrote:
>>> 
>>> We're glad to have been of help.
>>> 
>>> There is no One Size Fits All solution. For you, it seems that speed is 
>>> more important than high availability. For me, it's HA+redundancy.
>>> 
>>> Ceph has 3 ways to deliver data to remote clients:
>>> 
>>> 1. As a direct ceph mount on the client. From experience, this is a pain 
>>> when the clients hibernate.
>>> 
>>> 2. As an internal ganesha NFS server running under ceph
>>> 
>>> 3. As an independent ganesha NFS server using ceph as a backend.
>> Some deployments also do KRBD mounts onto a VM or BM system that re-exports 
>> them via KNFS.
>> 
>> CephFS and RGW (with caveats) can also be exported with SMB.
>> 
>> RBD mounts, either through KRBD or libvirt, are popular as well.