Hi Eugen,
I’m not sure if this helps, and I would greatly appreciate any suggestions for
improving our setup, but so far we’ve had good luck with our service deployed
using:
ceph nfs cluster create cephfs "label:_admin" --ingress --virtual_ip virtual_ip
And then we manually updated the nfs.ceph
I tried something else, but the result is not really satisfying. I
edited the keepalived.conf files, which had no peers at all or only one
peer, so that they were all identical. Restarting the daemons helped:
now only one virtual IP is assigned, the daemons do communicate, and I
see messages
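(In case it helps with reproducing this: the keepalived daemons can be
listed and restarted through the orchestrator, for example

  ceph orch ps --daemon-type keepalived
  ceph orch daemon restart keepalived.nfs.ebl-nfs-cephfs.ceph01.abcdef

where the daemon name is whatever 'ceph orch ps' reports for your hosts;
the one above is made up. Keep in mind that hand-edited keepalived.conf
files will likely be overwritten the next time cephadm reconfigures or
redeploys the ingress service.)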
Thanks, I removed the ingress service and redeployed it, with the same
result. The interesting part here is that the configs are identical to
those of the previous deployment, i.e. the same peers (or no peers) as
before.
Quoting Robert Sander:
On 3/25/25 at 18:55, Eugen Block wrote:
Okay, so I don't see anything in the keepalived log about the daemons
communicating with each other. The config files are almost identical, no
difference in priority, but they do differ in unicast_peer. ceph03 has
no entry at all for unicast_peer, ceph02 has only ceph03 in there
Hi Eugen,
yes, for me it's kind of a "test setting" for small setups.
The docs say:
Setting --ingress-mode keepalive-only deploys a simplified ingress
service that provides a virtual IP with the nfs server directly binding
to that virtual IP and leaves out any sort of load balancing or traffic
redirection.
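(For reference, the spec form of that mode looks roughly like this; the
keepalive_only field name is from the cephadm ingress docs, while the
hosts and virtual IP are the ones used in this thread:

  service_type: ingress
  service_id: nfs.ebl-nfs-cephfs
  placement:
    hosts:
      - ceph01
      - ceph02
      - ceph03
  spec:
    backend_service: nfs.ebl-nfs-cephfs
    virtual_ip: 192.168.168.114/24
    keepalive_only: true

With keepalive_only there is no haproxy, so the nfs daemon itself is
expected to bind to the virtual IP.)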
Okay, so I don't see anything in the keepalived log about the daemons
communicating with each other. The config files are almost identical,
no difference in priority, but they do differ in unicast_peer. ceph03
has no entry at all for unicast_peer, ceph02 has only ceph03 in there,
while ceph01 has both of the others.
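(For comparison, with three hosts each keepalived.conf would be expected
to list the two other hosts under unicast_peer, roughly like this; the
addresses and interface below are made-up placeholders:

  vrrp_instance VI_0 {
    state BACKUP
    priority 100
    interface eth0
    virtual_router_id 50
    unicast_src_ip 192.168.168.111
    unicast_peer {
      192.168.168.112
      192.168.168.113
    }
    virtual_ipaddress {
      192.168.168.114/24 dev eth0
    }
  }

i.e. the set of peers differs per host, but it should never be empty when
there is more than one keepalived daemon.)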
>
> I just tried it with 3 keepalive daemons and one nfs daemon, it
> doesn't really work because all three hosts have the virtual IP
> assigned, preventing my client from mounting. So this doesn't really
> work as a workaround, it seems.
That's a bit surprising. The keepalive daemons are meant to make sure
that only one of them holds the virtual IP at a time.
Thanks, Adam.
I just tried it with 3 keepalive daemons and one nfs daemon; it
doesn't really work because all three hosts have the virtual IP
assigned, preventing my client from mounting. So this doesn't really
work as a workaround, it seems. I feel like the proper solution would
be to inc
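(A quick way to confirm that situation is to check on each of the three
hosts whether the virtual IP from the spec is present, e.g.

  ip -brief addr show | grep 192.168.168.114

In a working setup only the current VRRP master should carry it.)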
Which daemons get moved around like that is controlled by
https://github.com/ceph/ceph/blob/main/src/pybind/mgr/cephadm/utils.py#L30,
which appears to only include nfs and haproxy, so maybe this keepalive-only
case was missed in that sense. I do think that you could alter the
placement of the ingress
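(A sketch of what altering that placement could look like, assuming the
ingress service is named ingress.nfs.ebl-nfs-cephfs here; the host chosen
below is only an example:

  ceph orch ls --service-name ingress.nfs.ebl-nfs-cephfs --export > ingress.yaml
  # in ingress.yaml, restrict placement to the host running the nfs daemon,
  # e.g.:
  #   placement:
  #     hosts:
  #       - ceph01
  ceph orch apply -i ingress.yaml
)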
Yeah, it seems to work without the "keepalive-only" flag, at least
from a first test. So keepalive-only is not working properly, it
seems? Should I create a tracker for that or am I misunderstanding its
purpose?
Quoting Malte Stroem:
Hi Eugen,
try omitting
--ingress-mode keepalive-only
Thanks for your quick response. The specs I pasted are actually the
result of deploying an nfs cluster like this:
ceph nfs cluster create ebl-nfs-cephfs "1 ceph01 ceph02 ceph03"
--ingress --virtual_ip 192.168.168.114 --ingress-mode keepalive-only
I can try redeploying it via dashboard, but I
Hi Eugen,
try omitting
--ingress-mode keepalive-only
like this
ceph nfs cluster create ebl-nfs-cephfs "1 ceph01 ceph02 ceph03"
--ingress --virtual_ip "192.168.168.114/24"
Best,
Malte
On 25.03.25 13:25, Eugen Block wrote:
Thanks for your quick response. The specs I pasted are actually the
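(For comparison, without --ingress-mode keepalive-only the ingress service
is the usual haproxy+keepalived pair; its spec would look roughly like the
following, where the port numbers are the defaults used in the cephadm NFS
examples and may differ:

  service_type: ingress
  service_id: nfs.ebl-nfs-cephfs
  placement:
    hosts:
      - ceph01
      - ceph02
      - ceph03
  spec:
    backend_service: nfs.ebl-nfs-cephfs
    virtual_ip: 192.168.168.114/24
    frontend_port: 2049
    monitor_port: 9049

Clients then mount the virtual IP on port 2049 while haproxy forwards to
the backend nfs daemon.)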
Hi Eugen,
try deploying the NFS service as described here:
https://docs.ceph.com/en/latest/mgr/nfs/
Some people only had success deploying it via the dashboard.
Best,
Malte
On 25.03.25 13:02, Eugen Block wrote:
Hi,
I'm re-evaluating NFS again, testing on a virtual cluster with 18.2.4.
For now, I don't ne