[ceph-users] Re: Why did ceph turn /etc/ceph/ceph.client.admin.keyring into a directory?

2022-10-25 Thread Martin Johansen
Could you explain? I have just deployed Ceph CSI just as the docs specified. What mode is it running in, if not container mode? Best Regards, Martin Johansen On Tue, Oct 25, 2022 at 10:56 AM Marc wrote: > Wtf, unbelievable that it is still like this. You can't fix it, I had to

[ceph-users] Re: Why did ceph turn /etc/ceph/ceph.client.admin.keyring into a directory?

2022-10-25 Thread Martin Johansen
How should we fix it? Should we remove the directory and add back the keyring file? Best Regards, Martin Johansen On Tue, Oct 25, 2022 at 9:45 AM Martin Johansen wrote: > Yes, we are using the ceph-csi driver in a Kubernetes cluster. Is that what is causing this?
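[Editorial note] A plausible repair sequence, sketched under the assumption that the cluster itself is still healthy and that some node (or the mon keyring) can still authenticate; the path and entity name below are the stock defaults, so verify them against your deployment before running anything:

```shell
# The keyring's place is now occupied by a directory, so a plain rm
# fails with "Is a directory"; -r is needed to remove it.
rm -r /etc/ceph/ceph.client.admin.keyring

# Re-export the admin key from the cluster into a fresh keyring file.
# This requires working credentials (e.g. run from another admin node).
ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring
chmod 600 /etc/ceph/ceph.client.admin.keyring
```

Note that if whatever bind-mounted the path is still running, it may recreate the directory; the mount itself has to be fixed as well.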

[ceph-users] Re: Why did ceph turn /etc/ceph/ceph.client.admin.keyring into a directory?

2022-10-25 Thread Martin Johansen
Yes, we are using the ceph-csi driver in a Kubernetes cluster. Is that what is causing this? Best Regards, Martin Johansen On Tue, Oct 25, 2022 at 9:44 AM Marc wrote: > 1) Why does ceph delete /etc/ceph/ceph.client.admin.keyring several times a da
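[Editorial note] For context on how this can happen: when a Kubernetes pod bind-mounts a hostPath file that does not exist on the node, the kubelet by default creates an empty directory at that path. A hypothetical volume definition (illustrative names, not taken from the ceph-csi manifests) can guard against this with `type: File`, which makes the pod fail to start instead of silently creating a directory:

```yaml
# Hypothetical sketch: "type: File" requires the file to already exist
# on the node; the default ("") performs no check and a missing source
# ends up created as a directory.
volumes:
  - name: ceph-admin-keyring
    hostPath:
      path: /etc/ceph/ceph.client.admin.keyring
      type: File
```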

[ceph-users] Why did ceph turn /etc/ceph/ceph.client.admin.keyring into a directory?

2022-10-25 Thread Martin Johansen
ot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory". Best Regards, Martin Johansen ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io
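[Editorial note] The "Is a directory" error above is easy to reproduce locally, which also shows why a plain `rm` cannot clean it up (demo path under /tmp is arbitrary):

```shell
# Simulate what happens when a bind mount creates a directory where
# the keyring file should be.
mkdir -p /tmp/ceph-demo/ceph.client.admin.keyring

# Plain rm refuses to delete a directory ("Is a directory" error).
rm /tmp/ceph-demo/ceph.client.admin.keyring 2>/tmp/ceph-demo/err.txt || true
cat /tmp/ceph-demo/err.txt

# rm -r removes it.
rm -r /tmp/ceph-demo/ceph.client.admin.keyring
```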

[ceph-users] Re: Debug cluster warnings "CEPHADM_HOST_CHECK_FAILED", "CEPHADM_REFRESH_FAILED" etc

2022-10-24 Thread Martin Johansen
Hi, thank you, we replaced the domain of the service in the text before reporting the issue. Sorry, I should have mentioned: admin.ceph.example.com was turned into admin.ceph. for privacy's sake. Best Regards, Martin Johansen On Mon, Oct 24, 2022 at 2:53 PM Murilo Morais wrote: > Hello Mar

[ceph-users] Debug cluster warnings "CEPHADM_HOST_CHECK_FAILED", "CEPHADM_REFRESH_FAILED" etc

2022-10-24 Thread Martin Johansen
ck cleared: CEPHADM_REFRESH_FAILED (was: failed to probe daemons or devices)
10/24/22 1:57:54 PM [INF] Health check cleared: CEPHADM_HOST_CHECK_FAILED (was: 1 hosts fail cephadm check)
10/24/22 1:56:38 PM [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)
10/24/22 1:56:38 PM [WRN] Health check failed: 1 hosts fail cephadm check (CEPHADM_HOST_CHECK_FAILED)
10/24/22 1:52:18 PM [INF] Cluster is now healthy
--- These health checks fail and clear sporadically. The block devices seem to be working fine all along, but the cluster alternates between HEALTH_OK and HEALTH_WARN. Best Regards, Martin Johansen
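[Editorial note] A few standard commands that may help localize which host check is flapping; these are stock cephadm/orchestrator tools, but run them against your own cluster to confirm:

```shell
# Show current health detail, including which host fails which check.
ceph health detail

# List hosts as the orchestrator sees them, with their status.
ceph orch host ls

# On the suspect host itself, run cephadm's host check directly
# (verifies container runtime, time sync, hostname, etc.).
cephadm check-host
```

Intermittent CEPHADM_HOST_CHECK_FAILED often points at flaky SSH connectivity or time synchronization on one host rather than a data-path problem, which would match the block devices staying healthy throughout.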