On Wed, Nov 20, 2024 at 2:05 PM Rajmohan Ramamoorthy
<ram.rajmoh...@gmail.com> wrote:
>
> Hi Patrick,
>
> A few other follow-up questions.
>
> Is directory fragmentation applicable only when multiple active MDS
> daemons are enabled for a Ceph FS?

It has no effect when there is only one active rank. It can still be
useful to have it set already in case you later increase max_mds.
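
For example, using the volume name from your earlier command (adjust to
your environment), raising the number of active ranks would look
roughly like:

    ceph fs set midline-a max_mds 2

With a single active rank nothing is distributed; the pin only starts
to matter once additional ranks become active.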

> Will directory fragmentation and distribution of fragments among active
> MDS daemons happen if we turn off the balancer for a Ceph FS volume with
> `ceph fs set midline-a balance_automate false`? In Squid, the CephFS
> automatic metadata load (sometimes called “default”) balancer is now
> disabled by default.
> (https://docs.ceph.com/en/latest/releases/squid/)

Yes.

> Is there a way for us to ensure that the directory tree of a Subvolume
> (Kubernetes PV) is part of the same fragment and handled by a single MDS,
> so that a client's operations are handled by one MDS?

A subvolume would not be split across two MDS daemons.

> What is the trigger to start fragmenting directories within a Subvolumegroup?

You don't need to do anything more than set the distribute ephemeral pin.
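
For reference, a sketch of how the pin is typically applied (the mount
path and group name below are placeholders for your setup), either as an
extended attribute on the subvolumegroup directory or through the
volumes interface:

    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/volumes/<group_name>

    # or, via the volumes interface:
    ceph fs subvolumegroup pin midline-a <group_name> distributed 1

Immediate children of that directory (the subvolumes) are then
ephemerally pinned across the active ranks by hash, with no further
action needed.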

> With `balance_automate` set to false and the distributed ephemeral pin
> enabled for a Subvolumegroup, can we expect (almost) equal distribution of
> Subvolumes (Kubernetes PVs) amongst the active MDS daemons and stable
> operation without hotspot migrations?

Yes.
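
If you want to confirm the behaviour afterwards (path and names again
placeholders), you could check the attribute and watch per-rank
activity, for example:

    getfattr -n ceph.dir.pin.distributed /mnt/cephfs/volumes/<group_name>
    ceph fs status midline-a

The status output shows request and inode counts per active rank, which
should stay roughly even once the subvolumes are spread out.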

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D