Hi Eugen,

yes, for me it's more of a "test setting" for small setups.

Doc says:

Setting --ingress-mode keepalive-only deploys a simplified ingress service that provides a virtual IP with the nfs server directly binding to that virtual IP and leaves out any sort of load balancing or traffic redirection. This setup will restrict users to deploying only 1 nfs daemon as multiple cannot bind to the same port on the virtual IP.
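
So with keepalive-only there is no haproxy at all: keepalived just moves the virtual IP around, and ganesha binds to it directly. If you want to sanity-check that on the host currently holding the VIP, something like this should do (just a sketch):

ip a | grep 192.168.168.114     # shows whether this host currently holds the virtual IP
ss -tlnp | grep 2049            # ganesha should be listening on that IP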

Best,
Malte


On 25.03.25 13:46, Eugen Block wrote:
Yeah, it seems to work without the "keepalive-only" flag, at least from a first test. So keepalive-only is not working properly, it seems? Should I create a tracker for that or am I misunderstanding its purpose?

Quoting Malte Stroem <malte.str...@gmail.com>:

Hi Eugen,

try omitting

--ingress-mode keepalive-only

like this

ceph nfs cluster create ebl-nfs-cephfs "1 ceph01 ceph02 ceph03" --ingress --virtual_ip "192.168.168.114/24"
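
That should give you the full ingress stack (haproxy on the virtual IP plus keepalived for failover), and then more than one nfs daemon is possible. The generated ingress spec should then look roughly like this instead of the keepalive_only variant (just a sketch, default ports may differ between releases):

service_type: ingress
service_id: nfs.ebl-nfs-cephfs
placement:
  count: 1
  hosts:
  - ceph01
  - ceph02
  - ceph03
spec:
  backend_service: nfs.ebl-nfs-cephfs
  frontend_port: 2049
  monitor_port: 9049
  virtual_ip: 192.168.168.114/24

The nfs daemons themselves then listen on a separate backend port behind haproxy instead of binding to the virtual IP directly.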

Best,
Malte

On 25.03.25 13:25, Eugen Block wrote:
Thanks for your quick response. The specs I pasted are actually the result of deploying an nfs cluster like this:

ceph nfs cluster create ebl-nfs-cephfs "1 ceph01 ceph02 ceph03" --ingress --virtual_ip 192.168.168.114 --ingress-mode keepalive-only

I can try redeploying it via the dashboard, but I don't have much confidence that it will behave differently during a failover.

Quoting Malte Stroem <malte.str...@gmail.com>:

Hi Eugen,

try deploying the NFS service like this:

https://docs.ceph.com/en/latest/mgr/nfs/
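
i.e. basically the one-liner from that page, with the names from your specs (just as an example):

ceph nfs cluster create ebl-nfs-cephfs "1 ceph01 ceph02 ceph03" --ingress --virtual_ip 192.168.168.114/24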

Some people only had success deploying it via the dashboard.

Best,
Malte

On 25.03.25 13:02, Eugen Block wrote:
Hi,

I'm re-evaluating NFS, testing on a virtual cluster with 18.2.4. For now I don't need haproxy, so I use "keepalive_only: true" as described in the docs [0]. I first create the ingress service, wait for it to start, and then create the nfs cluster. I've added the specs at the bottom.
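
That is, roughly the flow from [0] (spec file names are just placeholders):

ceph orch apply -i ingress.yaml    # the ingress spec below, with keepalive_only: true
ceph orch ls ingress               # wait until the service reports as running
ceph orch apply -i nfs.yaml        # the nfs spec below, with the virtual_ip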

I can mount the export with the virtual IP. Then I just shut down the VM where the nfs service was running: the orchestrator successfully starts an nfs daemon elsewhere, but the keepalived daemon is not failed over, so mounting or accessing the export is impossible, of course. And after I power the offline host back up, nothing is "repaired": keepalived and nfs keep running on different servers until I intervene manually. This doesn't seem to work as expected. Is this a known issue? I couldn't find anything on the tracker. I have my doubts, but maybe it works better with haproxy? Or am I missing something in my configuration?
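
You can see the mismatch with something like

ceph orch ps --daemon_type keepalived
ceph orch ps --daemon_type nfs

which list the two daemons on different hosts after the failover.
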
I haven't tried with a newer release yet. I'd appreciate any comments.

Thanks,
Eugen

---snip---
service_type: ingress
service_id: nfs.ebl-nfs-cephfs
service_name: ingress.nfs.ebl-nfs-cephfs
placement:
  count: 1
  hosts:
  - ceph01
  - ceph02
  - ceph03
spec:
  backend_service: nfs.ebl-nfs-cephfs
  first_virtual_router_id: 50
  keepalive_only: true
  monitor_port: 9049
  virtual_ip: 192.168.168.114/24


service_type: nfs
service_id: ebl-nfs-cephfs
service_name: nfs.ebl-nfs-cephfs
placement:
  count: 1
  hosts:
  - ceph01
  - ceph02
  - ceph03
spec:
  port: 2049
  virtual_ip: 192.168.168.114
---snip---

[0] https://docs.ceph.com/en/reef/cephadm/services/nfs/#nfs-with-virtual-ip-but-no-haproxy
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io