Hi, maybe you could add some more details, for example whether your cluster
is managed by cephadm. If so, you can remove the orphaned OSDs with the
orchestrator:
ceph orch osd rm 0 3 6 7 8
(--force might be required, and the --zap flag can also be helpful if you
want the devices wiped)
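For example, a minimal sketch of the whole flow (the exact flag usage here is
my assumption; --zap wipes the devices, so leave it out if you still need the
data on them):

ceph orch osd rm 0 3 6 7 8 --force --zap
# watch the removal queue drain
ceph orch osd rm status
# afterwards the stray entries should be gone
ceph osd tree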
Or, if the orchestrator isn't able to do that, you can purge the OSDs and use
cephadm locally on the node, e.g. for osd.0:
ceph osd purge 0
cephadm rm-daemon --name osd.0
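In your case all five stray OSDs are down with weight 0 and, according to your
"ceph node ls" output, still listed on podster1, so a minimal sketch of the
manual cleanup, run on that node, could look like this (the fsid placeholder
is just illustrative, "ceph fsid" prints the real one; skip the rm-daemon step
if no leftover daemon actually exists):

for id in 0 3 6 7 8; do
    # removes the OSD from the CRUSH map, the osdmap and auth in one step
    ceph osd purge $id --yes-i-really-mean-it
    # removes any leftover daemon/unit for that OSD on this node
    cephadm rm-daemon --name osd.$id --fsid <cluster-fsid>
done
# verify
ceph osd tree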
Does that help?
Quoting lejeczek <[email protected]>:
Hi guys.
Is there a way to "clean" those up? Both orderly and not-so-orderly manners would be OK.
-> $ ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME          STATUS  REWEIGHT  PRI-AFF
 -1         1.02539  root default
 -3         0.34180      host podster1
  9    hdd  0.04880          osd.9          up   1.00000  1.00000
 10    hdd  0.29300          osd.10         up   1.00000  1.00000
 -7         0.34180      host podster2
  2    hdd  0.04880          osd.2          up   1.00000  1.00000
  4    hdd  0.29300          osd.4          up   1.00000  1.00000
 -5         0.34180      host podster3
  1    hdd  0.04880          osd.1          up   1.00000  1.00000
  5    hdd  0.29300          osd.5          up   1.00000  1.00000
  0               0  osd.0                down         0  1.00000
  3               0  osd.3                down         0  1.00000
  6               0  osd.6                down         0  1.00000
  7               0  osd.7                down         0  1.00000
  8               0  osd.8                down         0  1.00000

ID   CLASS  WEIGHT   TYPE NAME
 -1         1.02539  root default
 -3         0.34180      host podster1
  9    hdd  0.04880          osd.9
 10    hdd  0.29300          osd.10
 -7         0.34180      host podster2
  2    hdd  0.04880          osd.2
  4    hdd  0.29300          osd.4
 -5         0.34180      host podster3
  1    hdd  0.04880          osd.1
  5    hdd  0.29300          osd.5
-> $ ceph node ls
...
"osd": {
"podster1.mine.priv": [
0,
3,
6,
7,
8,
9,
10
],
"podster2.mine.priv": [
2,
4
],
"podster3.mine.priv": [
1,
5
]
},
...
many thanks, L.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]