Question 1) makes me wonder too.

The deletion also shows up as recurring health-check failures in the cluster log:

2022-10-25T11:20:00.000109+0200 mon.ceph00 [INF] overall HEALTH_OK
2022-10-25T11:21:05.422793+0200 mon.ceph00 [WRN] Health check failed:
failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)
2022-10-25T11:22:06.037456+0200 mon.ceph00 [INF] Health check cleared:
CEPHADM_REFRESH_FAILED (was: failed to probe daemons or devices)
2022-10-25T11:22:06.037491+0200 mon.ceph00 [INF] Cluster is now healthy
2022-10-25T11:30:00.000071+0200 mon.ceph00 [INF] overall HEALTH_OK

I would like to stop this behavior. But how?
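
I have not found an authoritative answer yet, but since cephadm is what
distributes client keyrings to hosts, my current plan is to check whether
the admin keyring is in cephadm's managed list and, if so, remove that
entry so cephadm stops rewriting the file. A rough sketch, assuming the
"ceph orch client-keyring" subcommands (available since Pacific) and that
client.admin is the managed entry:

    # list the client keyrings cephadm manages and writes out to hosts
    ceph orch client-keyring ls

    # stop cephadm from managing the admin keyring
    # (check the docs first: this may also remove copies it previously
    # wrote under /etc/ceph)
    ceph orch client-keyring rm client.admin

If anyone can confirm whether this is the right knob, or whether it is
related to the CEPHADM_REFRESH_FAILED messages above, I would appreciate it.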

On Tue, 25 Oct 2022 at 09:44, Marc <m...@f1-outsourcing.eu> wrote:

> >
> > 1) Why does ceph delete /etc/ceph/ceph.client.admin.keyring several
> > times a day?
> >
> > 2) Why was it turned into a directory? It contains one file
> > "ceph.client.admin.keyring.new". This then causes an error in the
> > ceph logs when ceph tries to remove the file: "rm: cannot remove
> > '/etc/ceph/ceph.client.admin.keyring': Is a directory".
> >
>
> Are you using the ceph-csi driver? When you are not running the driver
> in a container, ceph-csi simply deletes your existing ceph files and
> mounts your root fs. The ceph-csi people seem to think that checking
> for existing files and validating parameters is not necessary.
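
Following up on Marc's question: if ceph-csi (or any other container) is
in the picture, one quick check is whether something bind-mounts
/etc/ceph/ceph.client.admin.keyring from the host, because a container
runtime asked to bind-mount a path that does not yet exist will typically
create it as a directory, which would match the "Is a directory" error
from question 2. A rough sketch, assuming a Kubernetes deployment
(namespaces and object names will differ):

    # look for hostPath mounts of /etc/ceph in the csi plugin daemonsets
    kubectl get ds -A -o yaml | grep -B3 -A3 '/etc/ceph'

    # on an affected host, see what ended up inside the stray directory
    ls -la /etc/ceph/ceph.client.admin.keyring/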
