On 7/20/23 11:36, dxo...@naver.com wrote:
This issue has been closed.
If any rook-ceph users run into this, when mds replay takes a long time, look at
the logs in the mds pod.
If replay is progressing and then the pod abruptly terminates, describe the mds
pod, and if the liveness probe terminated it, try increasing the threshold of
the liveness probe.
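For anyone searching later, here is a rough sketch of the kind of override I
mean. Recent Rook versions let you tune the MDS probe from the CephFilesystem
CR; this is only the relevant fragment, and the exact field names may differ by
Rook version, so check your CRD reference (the filesystem name "myfs" is a
placeholder):

    apiVersion: ceph.rook.io/v1
    kind: CephFilesystem
    metadata:
      name: myfs              # hypothetical filesystem name
      namespace: rook-ceph
    spec:
      metadataServer:
        activeCount: 10
        livenessProbe:
          disabled: false
          probe:
            # give a recovering MDS more time before the kubelet kills it
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 30   # roughly 5 minutes of failed probes tolerated

The point is just to raise failureThreshold (or temporarily disable the probe)
so a long replay is not killed mid-recovery.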
On Thu, Jul 20, 2023 at 11:19 PM wrote:
>
> If any rook-ceph users see the situation that mds is stuck in replay, then
> look at the logs of the mds pod.
>
> When it runs and then terminates repeatedly, check if there is a "liveness
> probe terminated" error message by typing "kubectl describe pod -n (namespace)
> (mds pod name)".
>
> If there is, try increasing the threshold of the liveness probe.
___
I think rook-ceph is not responding to the liveness probe (confirmed by
kubectl describe on the mds pod). I don't think it's memory, since I don't set
a memory limit, and I have the CPU set to 500m per mds. What direction should
I go from here?
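For reference, the checks I ran look roughly like this (pod name is a
placeholder), to tell a probe kill apart from an OOM kill:

    # why was the container last terminated? (probe failure vs. OOMKilled)
    kubectl describe pod -n rook-ceph rook-ceph-mds-myfs-a-xxxxx
    kubectl get pod -n rook-ceph rook-ceph-mds-myfs-a-xxxxx \
      -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

    # recent kubelet events for the pod
    kubectl get events -n rook-ceph \
      --field-selector involvedObject.name=rook-ceph-mds-myfs-a-xxxxx

lastState shows Error with liveness-probe failure events rather than
OOMKilled, which is why I think it's the probe.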
___
If possible, could you share the mds logs at debug level 20?
You'll need to set debug_mds = 20 in the conf file until the crash, and revert
the level to the default after the mds crash.
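With rook, something along these lines should work at runtime from the toolbox
pod (a sketch; adjust for how you manage ceph.conf):

    # raise mds logging until the crash reproduces
    ceph config set mds debug_mds 20

    # ...reproduce the crash and collect the mds pod logs...

    # revert to the default afterwards
    ceph config rm mds debug_mds

Putting the setting in the rook-config-override ConfigMap and restarting the
mds pods achieves the same thing if you prefer the conf-file route.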
On Tue, Jul 18, 2023 at 9:12 PM wrote:
> hello.
> I am using rook-ceph and have 20 MDSs in use. 10 are in rank 0-9 an