Hi,

I'm re-evaluating NFS, this time on a virtual cluster running 18.2.4. For now I don't need haproxy, so I use "keepalive_only: true" as described in the docs [0]. I first create the ingress service, wait for it to start, and then create the nfs cluster. I've added the specs at the bottom.
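
Roughly, this is how I apply the specs (the file names are just placeholders for the two specs at the bottom):

---snip---
# apply the ingress spec first and wait for keepalived to come up
ceph orch apply -i ingress-nfs-cephfs.yaml
ceph orch ps | grep keepalived

# then apply the nfs spec
ceph orch apply -i nfs-cephfs.yaml
---snip---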

I can mount the export via the virtual IP. Then I shut down the VM where the nfs daemon is running: the orchestrator successfully starts an nfs daemon elsewhere, but the keepalived daemon is not failed over, so mounting or accessing the export is impossible, of course. And after I power the offline host back up, nothing is "repaired": keepalived and nfs keep running on different servers until I intervene manually. This doesn't seem to work as expected. Is this a known issue? I couldn't find anything on the tracker. I have my doubts, but maybe it works better with haproxy? Or am I missing something in my configuration?
I haven't tried with a newer release yet. I'd appreciate any comments.
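
For reference, this is roughly how I check where the daemons and the virtual IP ended up after the shutdown (just grepping the orchestrator output):

---snip---
# where do the nfs and keepalived daemons run right now?
ceph orch ps | grep -E 'nfs|keepalived'

# is the virtual IP configured on this host?
ip addr show | grep 192.168.168.114
---snip---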

Thanks,
Eugen

---snip---
service_type: ingress
service_id: nfs.ebl-nfs-cephfs
service_name: ingress.nfs.ebl-nfs-cephfs
placement:
  count: 1
  hosts:
  - ceph01
  - ceph02
  - ceph03
spec:
  backend_service: nfs.ebl-nfs-cephfs
  first_virtual_router_id: 50
  keepalive_only: true
  monitor_port: 9049
  virtual_ip: 192.168.168.114/24


service_type: nfs
service_id: ebl-nfs-cephfs
service_name: nfs.ebl-nfs-cephfs
placement:
  count: 1
  hosts:
  - ceph01
  - ceph02
  - ceph03
spec:
  port: 2049
  virtual_ip: 192.168.168.114
---snip---
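
For completeness, the client-side mount test is just something like this (the export path is a placeholder for my actual export):

---snip---
# mount via the virtual IP; /ebl stands in for the real pseudo path of the export
mount -t nfs 192.168.168.114:/ebl /mnt/nfs
---snip---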

[0] https://docs.ceph.com/en/reef/cephadm/services/nfs/#nfs-with-virtual-ip-but-no-haproxy