Hi, All.  

We are new to Ceph and looking for general best practices for exporting a 
CephFS file system over NFS.  I see several options in the documentation and 
have tested a few different configurations, but so far we haven’t seen much 
difference between them and aren’t sure which configuration is generally 
recommended as a starting point.

We have a single CephFS file system in our cluster of 10 hosts.  Five of the 
hosts are OSD hosts with the spinning disks that make up our CephFS data pool, 
and they run only the OSD-related services (osd, crash, ceph-exporter, 
node-exporter, and promtail).  The other five hosts are “admin” hosts that run 
everything else (mds, mgr, mon, etc.).

Our current setup follows the “HIGH-AVAILABILITY NFS” documentation, which 
gives us an ingress.nfs.cephfs service running the haproxy and keepalived 
daemons, and an nfs.cephfs service for the NFS (nfs-ganesha) daemons 
themselves.  If there are no downsides to this approach, are there any 
recommendations on placement for these two services?  Given our cluster, would 
it be best to run both on the admin nodes?  Or would it be better to put the 
ingress.nfs.cephfs service on the admin nodes and the backend nfs.cephfs 
daemons on the OSD nodes?
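For reference, this is roughly the pair of specs we applied.  The “admin” 
label, virtual IP, and ports below are placeholders or the defaults from the 
docs, not necessarily what we should end up with:

service_type: nfs
service_id: cephfs
placement:
  label: admin            # placeholder; an explicit host list works too
spec:
  port: 12049             # backend ganesha port, sits behind the proxy
---
service_type: ingress
service_id: nfs.cephfs
placement:
  label: admin            # placeholder
spec:
  backend_service: nfs.cephfs
  frontend_port: 2049     # clients mount the virtual IP on this port
  monitor_port: 9049      # haproxy status page
  virtual_ip: 192.0.2.10/24   # placeholder VIP

(Applied with "ceph orch apply -i <spec file>".)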

Alternatively, are there advantages to using the “keepalive only” mode (only 
keepalived, no haproxy)?  Or does anyone recommend doing something completely 
different, like using Pacemaker and Corosync to manage our NFS services?
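If I’m reading the ingress docs right, the keepalive-only variant would be a 
spec roughly like the following, with keepalived moving the virtual IP onto 
the host running the single ganesha daemon instead of fronting haproxy (the 
VIP is again a placeholder):

service_type: ingress
service_id: nfs.cephfs
placement:
  count: 1
spec:
  backend_service: nfs.cephfs
  monitor_port: 9049
  virtual_ip: 192.0.2.10/24     # placeholder VIP
  keepalive_only: true          # keepalived only, no haproxy
---
service_type: nfs
service_id: cephfs
placement:
  count: 1
spec:
  port: 2049
  virtual_ip: 192.0.2.10        # ganesha binds directly to the VIP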

Any recommendations one way or another would be greatly appreciated.

Many thanks,
Devin