Yes, standby (as opposed to standby-replay) MDS daemons form a shared pool
from which the mons will promote an MDS to the required role.
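
A quick way to see that pool is to look at the standby list in the FSMap; a
minimal sketch (daemon names and output will of course differ on your cluster):

    ceph fs status                    # active ranks plus the standby daemons
    ceph fs dump | grep -i standby    # raw FSMap view of the standby pool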

On Tue, Mar 31, 2020 at 12:52 PM Jarett DeAngelis <jar...@reticulum.us> wrote:
>
> So, for the record, this doesn’t appear to work in Nautilus.
>
>
>
> Does this mean that I should just count on my standby MDS to “step in” when a 
> new FS is created?
>
> > On Mar 31, 2020, at 3:19 AM, Eugen Block <ebl...@nde.ag> wrote:
> >
> >> This has changed in Octopus. The above config variables are removed.
> >> Instead, follow this procedure:
> >>
> >> https://docs.ceph.com/docs/octopus/cephfs/standby/#configuring-mds-file-system-affinity
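
For reference, the procedure in that link boils down to setting a file system
affinity per daemon; roughly like this (the daemon and fs names here are just
placeholders):

    ceph config set mds.a mds_join_fs fs1
    ceph config set mds.b mds_join_fs fs2
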
> >
> > Thanks for the clarification. IIRC I had trouble applying the mds_standby 
> > settings in Nautilus already, but I haven't verified that yet, so I didn't 
> > mention it in my response. I'll take another look at it.
> >
> >
> > Zitat von Patrick Donnelly <pdonn...@redhat.com>:
> >
> >> On Mon, Mar 30, 2020 at 11:57 PM Eugen Block <ebl...@nde.ag> wrote:
> >>> For the standby daemon you have to be aware of this:
> >>>
> >>> > By default, if none of these settings are used, all MDS daemons
> >>> > which do not hold a rank will
> >>> > be used as 'standbys' for any rank.
> >>> > [...]
> >>> > When a daemon has entered the standby replay state, it will only be
> >>> > used as a standby for
> >>> > the rank that it is following. If another rank fails, this standby
> >>> > replay daemon will not be
> >>> > used as a replacement, even if no other standbys are available.
> >>>
> >>> Some of the mentioned settings are for example:
> >>>
> >>> mds_standby_for_rank
> >>> mds_standby_for_name
> >>> mds_standby_for_fscid
> >>>
> >>> The easiest way is to have one standby daemon per CephFS and let them
> >>> handle the failover.
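
(For anyone finding this thread later: on pre-Octopus releases those settings
go into ceph.conf under the standby daemon's own section, roughly like the
sketch below; the section and daemon names are placeholders, and these options
are removed in Octopus as noted further down.)

    [mds.standby-a]
        mds_standby_for_name = mds-a     # follow the daemon named mds-a
        # or pin to a rank of a particular file system:
        mds_standby_for_rank = 0
        mds_standby_for_fscid = 1
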
> >>
> >> This has changed in Octopus. The above config variables are removed.
> >> Instead, follow this procedure:
> >>
> >> https://docs.ceph.com/docs/octopus/cephfs/standby/#configuring-mds-file-system-affinity
> >>
> >> --
> >> Patrick Donnelly, Ph.D.
> >> He / Him / His
> >> Senior Software Engineer
> >> Red Hat Sunnyvale, CA
> >> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
> >
> >
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
