Hi Eugen,

Thanks for your confirmation, it works following your steps. In addition, I had to restart the third MDS service so that the change from standby-replay to standby took effect.
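
For reference, on a systemd-based deployment the restart is typically something like this (the MDS id is a placeholder, adjust it to your host):

$ sudo systemctl restart ceph-mds@<mds-id>.service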

Regards,
Hervé

On 15/04/2020 11:01, Eugen Block wrote:
Hi,

I didn't find any clear procedure for this operation, so my question is: can I add an active rank directly, or do I have to unset the standby-replay status first?

I was thinking of the second option, that is:

$ sudo ceph fs set my_fs allow_standby_replay false
$ sudo ceph fs set my_fs max_mds 2

Is this the correct way?

Both ways should work. You can first enable the second active MDS with

$ sudo ceph fs set my_fs max_mds 2

and afterwards disable standby-replay, or the other way around. I don't think there's "the one correct" way.
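
A quick way to verify the result afterwards is to check the MDS map, for example (my_fs again stands for your file system name):

$ sudo ceph fs status my_fs
$ sudo ceph mds stat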

Regards,
Eugen



Quoting Herve Ballans <herve.ball...@ias.u-psud.fr>:

Hello to all confined people (and the others too)!

On one of my Ceph clusters (Nautilus 14.2.3), I previously set up 3 MDS daemons in an active/standby-replay/standby configuration.

For design reasons, I would like to replace this configuration with an active/active/standby one.

That means replacing the standby-replay daemon with an active one.

I didn't find any clear procedure for this operation, so my question is: can I add an active rank directly, or do I have to unset the standby-replay status first?

I was thinking of the second option, that is:

$ sudo ceph fs set my_fs allow_standby_replay false
$ sudo ceph fs set my_fs max_mds 2

Is this the correct way?

Thanks in advance,
Hervé

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io