Ah! I guess I got it!
So, once all OSDs (created by the specification I'd like to delete) are gone, the
service will disappear as well, right?
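If that's the case, then I suppose the rough sequence would look something like the following. This is just a sketch, not a tested recipe: the OSD ids 101 and 102 are placeholders for whatever the spec actually created, and I'd double-check every step against the docs first.

root@ceph1:~/ceph-rollout# ceph orch ls osd --export
# dumps the osd.osd_using_paths spec as YAML; adding "unmanaged: true" to it and
# re-applying with "ceph orch apply -i <file>" should stop cephadm from recreating
# OSDs on the freed devices while the removal is in progress
root@ceph1:~/ceph-rollout# ceph orch osd rm 101 102 --zap
# drains the PGs off those OSDs, removes the daemons and wipes the devices
root@ceph1:~/ceph-rollout# ceph orch osd rm status
# shows the draining progress until the removal queue is empty
root@ceph1:~/ceph-rollout# ceph orch ls osd
# once no OSDs from the spec are left, the osd.osd_using_paths entry should drop out
# (or at least "ceph orch rm osd.osd_using_paths" should then be accepted)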
On 29.06.24 12:11, Alex from North wrote:
osd.osd_using_paths           108  9m ago     -
prometheus   ?:9095           3/3  9m ago     6d   ceph1;ceph6;ceph10;count:3
And then ceph orch rm
root@ceph1:~/ceph-rollout# ceph orch rm osd.osd_using_paths
Invalid service 'osd.
Hi everybody!
I've never seen this before and Google stays silent. I just found the same question
from 2021, but there was no answer there (((
So, with ceph orch ls I see:
root@ceph1:~/ceph-rollout# ceph orch ls
NAME                 PORTS    RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager
Hi all,
Finally, we were able to repair the filesystem, and it seems that we did
not lose any data. Thanks for all the suggestions and comments.
Here is a short summary of our journey:
1. At some point, all six of our MDS daemons went into the error state, one after another
2. We tried to restart them but they
Hi Enrico,
thanks so much for your comment. You are right, that's what I figured
out a bit later, see below.
BTW, I was able to repair the filesystem and everything is working fine again;
it seems that we did not lose any data (I will post a summary, for the record).
Thanks again,
Dietmar
On 6/28