when restarting.
On Thu, Jul 8, 2021 at 6:29 AM Zachary Ulissi wrote:
> We're running a rook-ceph cluster that has gotten stuck in "1 MDSs behind
> on trimming".
We're running a rook-ceph cluster that has gotten stuck in "1 MDSs behind
on trimming".
* 1 filesystem, three active MDS servers each with standby
* Quite a few files (20M objects), daily snapshots. This might be a
problem?
* Ceph Pacific 16.2.4
* `ceph health detail` doesn't provide much help (s
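[Editorial note, not part of the original message: a hedged sketch of the commands commonly used to inspect and mitigate "MDSs behind on trimming". The commands are standard Ceph CLI; `mds_log_max_segments` is the usual journal-trimming knob (default 128), and the value 256 below is an illustrative example, not a recommendation.]

```shell
# Inspect overall health and per-rank MDS state.
ceph health detail
ceph fs status

# Check the current journal-trimming limit (default 128 segments).
ceph config get mds mds_log_max_segments

# A common mitigation is raising the limit so trimming can catch up;
# 256 is an example value, tune for your cluster.
ceph config set mds mds_log_max_segments 256
```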